Temporally-Evolving Generalised Networks and their Reproducing Kernels

This paper considers generalised networks, understood as networks where (a) the edges connecting the nodes are nonlinear, and (b) stochastic processes are continuously indexed over both vertices and edges. Such topological structures are normally represented through special classes of graphs, termed graphs with Euclidean edges. We build generalised networks whose topology changes over time instants. That is, vertices and edges can disappear at subsequent time instants, and edges may change in shape and length. We consider both cases of linear and circular time. In the second case, the generalised network exhibits a periodic structure. Our findings allow us to illustrate the pros and cons of each setting. Generalised networks become semi-metric spaces whenever equipped with a proper semi-metric, and our approach allows us to build proper semi-metrics for the temporally-evolving topological structures of the networks. Our final effort is devoted to guiding the reader through the appropriate choice of classes of functions that provide proper reproducing kernels when composed with the temporally-evolving semi-metrics.

Introduction

Context. Data complexity is certainly one of the main aspects to address within the Data Science revolution. In particular, we call 3D complexities those aspects related to Data structure, Data dimension and Data domain. The present paper concerns the last of these aspects and delves into the problem of graphs whose nodes and edges evolve dynamically over time.

Data analysis on graphs has become ubiquitous in both the Statistics and Machine Learning communities. For the former, recent contributions such as Anderes et al. [2020], Moradi and Mateu [2020], Baddeley et al. [2021] and Rakshit et al. [2017] witness the importance of graph structures for georeferenced data, viewed as realisations of either geostatistical or point processes. As for the machine learning community, the amount of literature is huge. The Open Graph Benchmark (OGB) is a comprehensive set of challenging and realistic benchmark datasets whose aim is to facilitate scalable, robust and reproducible graph machine learning (ML) research [Hu et al., 2020]. An overview of ML methodological approaches on graphs is provided by Chami et al. [2022]. Topological complexities under the framework of data analytics are discussed in Stanković et al. [2020]. Excellent surveys about ML on graphs address angles as different as large-scale challenges [Hu et al., 2021], automated ML [Zhang et al., 2021], representation learning [Hamilton et al., 2017], relational ML for knowledge graphs [Nickel et al., 2015] and higher-order learning [Agarwal et al., 2006], to mention just a few.

In the great majority of contributions, the process is assumed to be defined exclusively over the vertices of the graph. The extension to processes that are continuously defined over both vertices and edges requires substantial mathematical work.
This paper focuses on graphs with Euclidean edges [Anderes et al., 2020], being an ingenious topological structure that allows to generalise linear networks to nonlinear edges.Further, the process defined over such structures can have realisations over any point over the edges, and not only in the nodes.Roughly, these are graphs where each edge is associated with an abstract set in bijective correspondence with a segment of the real line.This provides each edge with a Cartesian coordinate system to measure distances between any two points on that edge. Reproducing kernel Hilbert space (RKHS) methods [Hofmann et al., 2008] had a considerable success in both ML and Statistics community, and the reader is referred to Hofmann et al. [2006], Kung [2014] and to Pillonetto et al. [2014] for excellent overviews. RKHS methods require kernels, being positive semidefinite functions defined over a suitable input space.A customary assumption for such kernels is that of isotropy, i.e. the kernel depends only on the distance between any pair of points belonging to the input space.There is a rich literature at hand for the case of the input space being a d-dimensional Euclidean space [see the celebrated work of Schoenberg, 1942], and the reader is referred to the recent review by Porcu et al. [2023a].Non Euclidean domains have a more recent literature, and we mention Porcu et al. [2016] as well as Borovitskiy et al. [2023] and Borovitskiy et al. [2021] for recent contributions.For such cases, the distance between the points is no longer the Euclidean distance, but the geodesic distance. The tour de force by Anderes et al. [2020] has allowed to define isotropic reproducing kernels for generalised network, by working with two metrics: the geodesic and the resistance metric.Elegant isometric embedding arguments therein allow to provide sufficient conditions for given classes of functions to generate a legitimate reproducing kernel through composition with either of the two metrics.See the discussion in Section 2. Graphs cross time and temporally-evolving graphs Data on generalised networks are usually repeatedly observed over time.For such a case, it is customary to consider the input space, X, as a product (semi) metric space, with two separate metrics: the geodesic (or the resistance) metric for the graph, and the temporal separation for time.This approach has considerable advantages as it simplifies the mathematical architecture considerably.Unfortunately, under such a setting, the graph topology is invariant over time.This fact implies that nodes cannot disappear (nor new nodes can appear at arbitrary future instant times), and the shape and length of the edges do not evolve over time.Reproducing kernels for such a case have been recently proposed by Porcu et al. [2023b] and by Tang and Zimmerman [2020].When the input space is a product space equipped with separate metrics, the kernel is component-wise isotropic when it depends, on the one hand, on a suitable distance over the graph and, on the other hand, on temporal separation.For details, the reader is referred to Porcu et al. [2023b].For the case of static metric graphs, we mention the impressive approach in Bolin and Lindgren [2011]. 
Outside the reproducing kernel framework, scientists have mainly focused on the topological structure of temporally dynamical networks [Hanneke et al., 2010]. This has fostered a wealth of related approaches, ranging from community detection methods [Mankad and Michailidis, 2013, Cherifi et al., 2019] to link prediction [Lim et al., 2019, Divakaran and Mohan, 2020] and structural change detection [Rossi et al., 2013]. The common feature of the above contributions is that they consider the stochastic evolution of the structure (nodes and edges) of the graphs as the basis for the inference, whilst they do not usually allow for processes defined over those graphs.

Linear or circular time? Might graphs be periodic? This paper considers time-evolving graphs. While the choice of linear time does not need any argument, as this is what most of the literature does, this paper argues that periodically-evolving graphs have a reason to exist. Our first argument in favour of a periodic construction is that it suits several real-world phenomena, where both linear-time evolution (e.g. long-term trends) and cyclic oscillations (e.g. seasonal components) may coexist. As an example, consider temperatures in a given geographical area: there might be strong correlations between (i) contiguous spatial points at a given time, which are represented by means of spatial edges; (ii) the same points considered at contiguous times, which are represented by means of temporal edges between temporal layers; and (iii) the same points considered at the same periods of the year, which are accounted for in the model as they are exactly the same point in the temporally evolving graph. In many real-world applications, the network underlying a system is only partially observable. As a consequence, it could be hard or impossible to specify the whole time-evolving (non-periodic) network in cases of long time series. The periodic assumption, when reasonable, may be of great help in this circumstance as well.

Our contribution. This paper provides the following contributions.

1. We provide a mathematical construction for graphs with Euclidean edges having a topology that evolves over time. That is, the number of vertices can change over time, as well as the shape and length of the edges. Remarkably, our construction allows for stochastic processes that are continuously indexed over both vertices and edges.

2. We start by considering the case of linear time. After providing the structure for a temporally evolving graph, we devote substantial mathematical effort to building a suitable semi-metric over it. This is achieved at the expense of a sophisticated construction through a Gaussian bridge that is interpolated through the edges in the spatio-temporal domain.

3. As an implication of the previous points, we obtain suitable second-order properties (hence, the reproducing kernel) associated with such a process.

4. The previous steps are then repeated for a periodic time-evolving graph.

5. We prove that the construction with linear time might have a counterintuitive property: adding temporal layers can change the distances between points in the graph. This might be a problem in terms of statistical inference, as carefully explained throughout the paper. We show that the periodic construction does not present such an inconvenience.

6. Our findings culminate in guiding the reader through handy constructions for kernels defined over these graphs.
We should mention that our contribution differs from earlier literature in several directions. In particular:

1. We allow the topology of the graph to evolve over time (whether linear or circular). Previous contributions where graphs with Euclidean edges are considered use either a static graph [Anderes et al., 2020, Bolin et al., 2022] or a graph having a topology that is invariant with respect to time [Porcu et al., 2023b, Tang and Zimmerman, 2020]. Hence, we provide a very flexible framework in comparison with earlier literature.

2. We allow stochastic processes to be continuously defined over the graph. This is a substantial innovation with respect to a massive literature from both Statistics and ML, where the graph topology can vary according to some probability law associated with the nodes, whilst processes continuously defined over the graph are not usually considered.

The structure of the paper is the following. Section 2 recalls the main mathematical objects that will be used. Section 3 builds the skeleton of our construction, i.e. time-evolving graphs, which are exploited in Sections 4 and 5, where time-evolving graphs with Euclidean edges are defined for the linear-time and circular-time cases, respectively. Section 6 illustrates how it is possible to build kernels on such a structure and presents some examples. Finally, Section 7 concludes the paper. In addition, Appendix A recalls some mathematical definitions used throughout. In Appendix B, we recall and then extend some significant results by Anderes et al. [2020] about the definition of kernels on arbitrary domains that are used in this manuscript, and present them under a general and easy-to-handle perspective. As the proofs are rather technical, we defer them to Appendix C for a neater exposition of the main text.

Mathematical background

This material is largely expository and provides the necessary mathematical background needed to understand the concepts illustrated in the main text. For the unfamiliar reader, Appendix A provides basic definitions and concepts used in network theory.

Gaussian random fields over semi-metric spaces. Let us begin with a brief introduction to Gaussian random fields [Stein, 1999]. Let X be a non-empty set and let k : X × X → R. Then k is a positive semi-definite function (or a kernel, or a covariance function) if and only if, for all n ∈ N_+, x_1, ..., x_n ∈ X and a_1, ..., a_n ∈ R,

∑_{i=1}^{n} ∑_{j=1}^{n} a_i a_j k(x_i, x_j) ≥ 0. (1)

If, in addition, the above relation is an equality only when a_1 = · · · = a_n = 0 (for pairwise distinct x_1, ..., x_n), then k is called strictly positive definite. For X as above, we denote by Z a real-valued random field, videlicet: for each x ∈ X, Z(x) is a real-valued random variable. Then Z is called Gaussian if, for all n ∈ N_+ and x_1, ..., x_n ∈ X, the random vector Z := (Z(x_1), ..., Z(x_n))^⊤, with ⊤ denoting the transpose operator, follows an n-variate Gaussian distribution.

A Gaussian random field Z on X is completely determined by its first two moments: the mean function, x ↦ E[Z(x)], with E denoting stochastic expectation, and the covariance function (kernel), (x, y) ↦ k_Z(x, y) := Cov(Z(x), Z(y)). A necessary and sufficient condition for a function k_Z to be a covariance function (a kernel) of some random field Z is to be positive semi-definite. For X as above, we define a mapping d : X × X → R.
Then (X, d) is called semi-metric space (or, equivalently, d is called a semi-metric on X) if the following conditions hold for each x, y ∈ X: In addition, (X, d) is called a metric space (or, equivalently, d is called a metric on X) if it is a semi-metric space and the triangle inequality holds, namely, for all x, y, z ∈ X: The covariance function k Z is called isotropic for the semi-metric space (X, d) if there exists a mapping ψ : D d X → R such that k Z (x, y) = ψ(d(x, y)), for x, y ∈ X.Here, D d X := {d(x 1 , x 2 ) : x 1 , x 2 ∈ X} is the diameter of X. See Appendix B on how to construct isotropic kernels on arbitrary domains.For a Gaussian random field Z on X, we define its variogram γ with Var denoting variance.The celebrated work of Schoenberg [1942] proves that γ Z is a variogram if and only if the mapping exp Let (X 1 , d 1 ) and (X 2 , d 2 ) be two semi-metric spaces.Then, the triple (X 1 × X 2 , d 1 , d 2 ) is called a product semi-metric space.Menegatto et al. [2020] define isotropy over a product semi-metric space through continuous functions ψ : is positive definite.The above definition naturally arises from spatio-temporal settings: suppose we have a static semi-metric space (X, d) that represents some spatial structure and (T, d T ) representing time, where T ⊆ R and, usually, In such a case, Equation (3) can be re-adapted to define kernels.This is the setting adopted by Porcu et al. [2023b] and by Tang and Zimmerman [2020]. Remark 1.We deviate from earlier literature and we instead consider a metric space X t that evolves over time, t ∈ T .Hence, our domain is written as where t describes time, and the graph coordinate x t is constrained on the space X t .Such a framework entails a way more sophisticated construction to equip such a space with a proper metric. Graphs with Euclidean edges We start with a formal definition of graphs with Euclidean edges.We slightly deviate from the definition provided by Anderes et al. [2020], for the reasons that will be clarified subsequently.For a definition of graph, see 5 in the Appendix. Definition 1 (Graph with Euclidean edges).Consider a simple, connected and weighted graph G = (V, E, w), where w : E → R + represents the weight mapping.Then, G is called a graph with Euclidean edges provided that the following conditions hold. 1. Edge sets: Each edge e ∈ E is associated to the compact segment (also denoted by e) [0, ℓ(e)], where ℓ(e) := w(e) −1 may be interpreted as the length of the edge e. Henceforth, we shall assume the existence of a total order relation on the set of vertices V and that every edge is represented through the ordered pair (v 1 , v 2 ), where v 1 < v 2 .In particular, for each u ∈ e, the endpoints of e, u and u satisfy the relation u < u. A relevant fact is that our setting deviates from Anderes et al. [2020].In particular, our Definition 1 does not require any distance consistency opposed to Anderes et al. [2020, Definition 1, (d)].The reason is that our setting does not need a bridge between geodesic and resistance metrics.A second relevant fact is that we have restricted the space of possible bijections from each edge onto closed intervals with orientation.We restrict to linear bijections: the main reason is that the focus of this paper is not to explore isometric embeddings, but to provide suitable topological structures evolving over time, and attach to them stochastic processes. 
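To fix ideas, the sketch below gives a minimal computational representation of Definition 1 (an illustration only: the class names and the toy graph are ours, not part of the paper). Each edge of a simple, connected, weighted graph is identified with the segment [0, ℓ(e)], where ℓ(e) = w(e)^{-1}, and a point on an edge is recorded through the ordered endpoints and its relative position δ along the edge, so that the linear bijection provides a Cartesian coordinate on the edge.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EdgePoint:
    """A point u on the edge e = (v1, v2) of a graph with Euclidean edges,
    recorded as (v1, v2, delta) with v1 < v2 and delta in [0, 1] the relative position."""
    v1: int
    v2: int
    delta: float

class EuclideanEdgeGraph:
    """Simple, connected, weighted graph; each edge e carries length l(e) = 1 / w(e)."""
    def __init__(self, weighted_edges):
        # weighted_edges: dict {(v1, v2): w} with v1 < v2 (a total order on vertices is assumed)
        self.weights = dict(weighted_edges)
        self.lengths = {e: 1.0 / w for e, w in self.weights.items()}

    def coordinate(self, u: EdgePoint) -> float:
        """Cartesian coordinate of u on its edge, via the linear bijection onto [0, l(e)]."""
        return u.delta * self.lengths[(u.v1, u.v2)]

    def within_edge_distance(self, u: EdgePoint, v: EdgePoint) -> float:
        """Distance between two points lying on the same edge."""
        assert (u.v1, u.v2) == (v.v1, v.v2), "points must lie on the same edge"
        return abs(self.coordinate(u) - self.coordinate(v))

# Toy example: a triangle on vertices 1 < 2 < 3 with unequal weights (lengths are reciprocals).
G = EuclideanEdgeGraph({(1, 2): 1.0, (2, 3): 2.0, (1, 3): 0.5})
P = EdgePoint(1, 3, 0.25)
Q = EdgePoint(1, 3, 0.75)
print("l(1,3) =", G.lengths[(1, 3)])                              # 2.0
print("d(P, Q) along the edge =", G.within_edge_distance(P, Q))   # 1.0
```

This only covers the within-edge coordinate system of Definition 1; the metric between points on different edges is the subject of the constructions developed in the following sections.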
As a final remark, we stress that the framework introduced through Definition 1 is far more general than linear networks, for at least two reasons: (a) in our framework, the weights of each edge can be chosen independently of the others, and (b) our framework needs no restriction on the network structure, e.g., as shown in Figure 1 (right), edges may cross without sharing the crossing point.

Graph laplacian and resistance metric. The resistance metric has been widely used in graph analysis, as it is more natural than the shortest-path metric when considering flows or transport networks, where multiple roads between two given points may share the total flow. In order to define the classic effective resistance distance for an undirected and connected graph, we briefly report its definition and its mathematical construction.

Let G = (V, E, w) be a simple, weighted and connected graph (see Definition 5 in the Appendix) and let W be its adjacency matrix, that is, the matrix with entries given by the weights w((v_1, v_2)), where we set w((v_1, v_2)) = 0 whenever v_1 ̸∼ v_2. In addition, for each node v ∈ V, we define its degree as the sum of the weights of the edges adjacent to it. Let D be the degree matrix of G, i.e. the diagonal matrix where each diagonal element is the degree of the corresponding vertex. Then, the laplacian matrix (or simply laplacian) of G is the matrix L := D − W. (4) Laplacian matrices enjoy several properties (see, for instance, Devriendt [2022]): they are symmetric, diagonally dominant, positive semidefinite and singular with exactly one null eigenvalue, corresponding to the eigenvector 1_n. Furthermore, they have non-positive off-diagonal entries and positive main-diagonal entries.

A graph G = (V, E) is called a resistor graph if the edges e ∈ E represent electrical resistors and the nodes represent contact points. Given a resistor graph, the effective resistance distance R between two vertices is defined as the voltage drop between them when injecting one Ampere of current into one and withdrawing one Ampere from the other. Several mathematical formulations of this concept have been provided, and the reader is referred, among many others, to Jorgensen and Pearse [2010, Subsection 2.1]. Throughout, we follow Ghosh et al. [2008]. Let G = (V, E) be a resistor graph. For each edge (v_1, v_2) ∈ E, let r(v_1, v_2) denote the resistance of the resistor that connects v_1 and v_2. In addition, for each v_1, v_2 ∈ V, define the weight (which plays the role of the physical conductance) w((v_1, v_2)) := r(v_1, v_2)^{-1}. Let L be the laplacian matrix of G with the above-defined weights, and L^+ its Moore-Penrose generalised inverse (see Definition 6 in the Appendix). Finally, let e_{v_i} denote the vector with all zeroes, except a one at position v_i. Then the effective resistance distance R between two nodes v_1 and v_2 enjoys the following expression: R(v_1, v_2) = (e_{v_1} − e_{v_2})^⊤ L^+ (e_{v_1} − e_{v_2}).

3 Time-evolving graphs with Euclidean edges

Defining a time-evolving graph with Euclidean edges requires some mathematical formalism. While keeping such a formalism below, we shall also provide some narrative, in concert with some graphical representations, to give an intuition of how these graphs work. Even though the underlying rationale of our construction is quite natural and intuitive, the mathematical description of such an object is quite involved, as it requires several steps and a substantial formalism. Hence, we provide a sketch of our procedure to help the reader in the Box below.

A sketch of our construction
1. Define a time-evolving graph as a properly defined sequence of graphs indexed by discrete time instants;
2. Define connected equivalent simple time-evolving graphs by completing a time-evolving graph through a set of edges that connect the same nodes at different time instants;
3. Over the connected equivalent simple graph, we can now define, for every time t, a graph with Euclidean edges, G_t;
4. Define a time-evolving Markov graph to exploit computational advantages.

Some comments are in order.
Step 1 is completely general and does not require any topological structure on every marginal graph G t , for a given time t.Yet, having graphs with Euclidean edges that evolve over time requires some more work, and this fact justifies Step 2, which allows for connectivity, being one of the properties sine qua non of a graph with Euclidean edges. Step 4 is not mathematically necessary to guarantee the validity of the structure, but it is justified by computational and intuitive reasons as explained throughout. Step 1 starts with a formal definition. Definition 2 (Time-evolving graph).Let T = {0, ..., m − 1} be a (finite) collection of time instants.To every time instant t ∈ T we associate a simple undirected and weighted graph For an edge e t ∈ E t , the corresponding weight is denoted w(e t ) := w t (e t ).We use n t := |V t | for the number of vertices at time t. Let G = {G 0 , ..., G m−1 } be the associate finite collection of these graphs.Call V := t V t the set of vertices, n := |V | the total number of vertices, and E S := t E t the set of spatial edges.Finally, if v ∈ V , whenever convenient we shall write t(v) for the unique value t such that v ∈ V t . Let s : V → S be a mapping from V , where S is a set of labels, such that s(v 1 ) ̸ = s(v 2 ) whenever v 1 and v 2 are two distinct vertices belonging to the same graph We call the triple G = (T, G, s) a time-evolving graph. While Definition 2 provides a flexible framework to manage graphs that evolve over time, we are going to merge its underlying idea with the one of graph with Euclidean edges presented in Subsection 2.2.Step 2 of our routine intends to complete the time-evolving graph as in Definition 2 so to ensure spatio-temporal connectivity. Let G be a time-evolving graph.We define its equivalent simple timeevolving graph, G = (V, E) as the graph with edges E := E S ∪ E T , with E T a set of additional edges (called temporal edges throughout) that connect the same nodes at different time instants.More precisely, To each new edge e = (v 1 , v 2 ) ∈ E T a weight w(e) > 0 is assigned, while all the other weights remain unchanged.One possibility is to choose w(e) := α |t(v 1 ) − t(v 2 )| −1 , with α > 0 a given scale factor.Although we assume this particular expression in all the following examples, we stress that any choice leads to a valid model as long as w(e) > 0. The intuitive idea behind the construction of an equivalent simple timeevolving graph is to consider m layers, each representing a different temporal instant (namely a graph G t ), and connect them by means of additional intratime edges, which account for the time-dependency of the graphs.Figure 2 depicts an example of time-evolving graph and the resulting equivalent simple graph.Henceforth, we will consider each connected component of the equivalent simple graph separately. Step 2 ensures that it becomes feasible to assign, to each temporal label t ∈ T , a graph with Euclidean edges to the equivalent simple graph associated with a given time-evolving graph (Step 3).We note that the choice of the temporal edges needs care.Indeed, the set of possible temporal edges for a fixed label s ∈ S may grow quadratically in the number of considered temporal instants m. Our proposal is to connect every node that exists at adjacent times, so that there will be no temporal edge connecting non-adjacent times.Hence, we propose a temporally Markovian structure for the graph (Step 4).This is formalised below. 
Definition 3 (Time-evolving Markov graph).A time-evolving Markov graph is a time-evolving graph G = (V, E), where We stress that temporal Markovianity is not needed to prove the mathematical results following subsequently, which work even for the case of non adjacent layers.Yet, Markovianity simplifies our job considerably.In fact, it allows for a plain representation of an equivalent simple graph.Furthermore, there are non-negligible computational reasons.Indeed, for large networks, Markovianity allows for sparse Laplacian matrices (block tridiagonal) of the associated equivalent simple graph.This entails huge computational savings in both terms of storage and computation.Finally, allowing edges between non-adjacent layers could lead to a huge number of weights α's.In particular, under the assumption of temporally homogeneous weights (the weight of a temporal edges depends only on the temporal distance between the layers it connects), we would have m − 1 possible weights if Markovianity is not assumed.For a large collection of time instants, this can become computationally unfeasible. We call a time-evolving Markov graph G temporally complete when the set E T is identically equal to the set in the right hand side of ( 6).Albeit such a property is not required to prove our theoretical results, it is operationally useful as it allows to avoid removing or adding temporal edges. Resistance metrics for linear time We start this section by noting that defining the classical resistance metrics between nodes of the temporally evolving graph is not an issue.Yet, we are dealing with a graph where distances should be computed between any pair of points lying continuously over the edges.This is a major challenge that requires some work as follows. For the case of static graphs with Euclidean edges, Anderes et al. [2020] provide an ingenious construction that allows for a suitable continuouslydefined metric on the basis of Brownian bridges and their variograms. The idea is to follow a similar path, by defining a Gaussian process that is continuously indexed over the edges of an equivalent simple connected graph associated with a given time-evolving graph. Before going into technical details, we present a brief outline.Following Anderes et al. [2020], we are going to define a distance on all the points of the graph, namely its vertices and the points on its edges.To this aim, we define a Gaussian process Z on every point of the time-evolving equivalent simple Markovian graph and then define the distance between two points as the variogram of such process, i.e., for each u 1 , u 2 ∈ G: with γ Z as being defined through (2).In such a way, we can directly apply Theorem 1 stated in Appendix B to obtain kernels.Here, Z := Z V + Z E is the sum of two independent Gaussian processes defined on the equivalent simple graph G.The process Z V accounts for the structure of the graph (namely its vertices and the weights of its edges) and plays the role of major source of variability (i.e., the distance), whilst Z E adds some variability on the edges and accounts for the temporal relationship between the same edge at different times. Formal construction of Z V and Z E We start by defining the process Z V through where u = (u, u, δ e (u)), with e = (u, u) and δ e (u) as in Definition 1. 
Further, at the vertices of G, Z_V is defined as a multivariate normal random variable, denoted Z^V_V ∼ N(0, L^+), where L is the laplacian matrix associated with the graph G. The intuitive interpretation is that, outside the vertices, the process Z_V is obtained through a sheer linear interpolation. The construction of Z_E is a bit more complex, as Z_E is piecewise defined on a suitable partition of E. A formalisation of this concept follows.

For each e = (v_1, v_2) ∈ E_S, we define the lifespan of e, written ls(e), as the maximal connected set of time instants, t, for which the edge e exists. More formally, ls(e) is defined as the maximal (with respect to the inclusion partial order) subset of T such that:
• ls(e) is connected, that is, ∀ t_1 < t_2 ∈ ls(e), {t_1, ..., t_2} ⊆ ls(e);
• ∀ t ∈ ls(e), the edge e exists at time t, that is, E_t contains an edge whose endpoints carry the same labels as those of e.
Figure 2 allows us to visualise the situation. The lifespan of the edge (A, B) at time t = 0 is {0, 1, 2}; the lifespan of (C, E) at time t = 1 is {0, 1} and the one of (B, C) at time t = 2 is {2}.

We now define the life of e (denoted lf(e)) as the set of edges that represent e at different times and have the same lifespan ls(e). For convenience, we define the life for temporal edges as well: if e ∈ E_T, lf(e) := {e}. It is clear that the set {lf(e) : e ∈ E_S} forms a partition of all the spatial edges E_S, and that {lf(e) : e ∈ E_T} is a partition of E_T. The main idea is to consider the life of each spatial and temporal edge and define a suitable process on it, independent from the others.

Let us consider a spatial edge e ∈ E_S and its lifespan ls(e). Consider now the set ls(e) × [0, 1] and define on it a zero-mean Gaussian process B(t, δ) whose covariance function is given by Cov(B(t_1, δ_1), B(t_2, δ_2)) = k_T(|t_1 − t_2|) k_BB(δ_1, δ_2), with t_1, t_2 ∈ ls(e) and δ_1, δ_2 ∈ [0, 1]. Here, k_T is a temporal kernel defined on N such that k_T(0) = 1, and k_BB(δ_1, δ_2) := min(δ_1, δ_2) − δ_1 δ_2 is the kernel of the standard Brownian bridge on [0, 1]. Notice that the spatial marginals of the process B are standard Brownian bridges. We stress that the process B(t, δ) is only needed for the definition of the process Z_E on lf(e), as is better explained below. Now, we define the process Z_E on lf(e), denoted Z^E_{lf(e)}, as follows: given an edge e′ = (u, u) ∈ lf(e) and given a point u = (u, u, δ) on it, Z^E_{lf(e)}(u) := ℓ(e′) B(t(u), δ).

Finally, for each temporal edge e = [0, ℓ(e)] ∈ E_T, we define the process Z_E on it as an independent (from both Z_V and Z_E on E_S) Brownian bridge on [0, ℓ(e)], with covariance function given by min(x_1, x_2) − x_1 x_2 / ℓ(e), for x_1, x_2 ∈ [0, ℓ(e)]. This concludes the construction of the process on the whole set of the edges.

Mathematical properties of the construction. We remind the reader that the process Z is Gaussian, being the sum of two independent Gaussian processes. Hence, the finite-dimensional distributions of Z are completely specified through the second-order properties, namely the covariance function. The following result provides an analytical expression for the covariance function associated with Z. While noting that this construction is completely general, we also point out that Markovianity properties, whenever aimed at, can be achieved through a proper choice of the temporal kernel k_T. A reasonable choice for k_T is the correlation function of an autoregressive process of order one, which is given by k_T(h) = λ^{|h|}, (12) where λ ∈ (−1, 1) is a free parameter and h ∈ Z is the lag. Notice that the special case λ = 0, for which k_T(h) = 1_{h=0}, corresponds to the static resistance metric provided by Anderes et al. [2020].
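To make the edge-level part of the construction concrete, the sketch below simulates the auxiliary process B(t, δ) on ls(e) × [0, 1], using the standard Brownian bridge kernel k_BB(δ_1, δ_2) = min(δ_1, δ_2) − δ_1 δ_2 and the AR(1) temporal kernel k_T(h) = λ^{|h|} of (12), with the product covariance reconstructed above. The lifespan {0, 1, 2, 3}, the grid size and the values of λ are arbitrary illustrative choices, and the small jitter added to the covariance matrix is purely numerical.

```python
import numpy as np

def k_bb(d1, d2):
    """Standard Brownian bridge kernel on [0, 1]."""
    return np.minimum(d1, d2) - d1 * d2

def k_t(h, lam):
    """AR(1) temporal kernel k_T(h) = lambda^{|h|}, cf. (12)."""
    return lam ** np.abs(h)

def simulate_B(lifespan, deltas, lam, rng):
    """Draw the zero-mean Gaussian process B(t, delta) on lifespan x deltas, with
    Cov(B(t1, d1), B(t2, d2)) = k_T(|t1 - t2|) * k_BB(d1, d2) (product form assumed)."""
    T, D = np.meshgrid(lifespan, deltas, indexing="ij")    # grid of (t, delta) pairs
    t, d = T.ravel(), D.ravel()
    cov = k_t(t[:, None] - t[None, :], lam) * k_bb(d[:, None], d[None, :])
    cov += 1e-8 * np.eye(cov.shape[0])                     # numerical jitter
    draw = rng.multivariate_normal(np.zeros(cov.shape[0]), cov)
    return draw.reshape(T.shape)                           # rows: times, columns: deltas

rng = np.random.default_rng(1)
lifespan = np.arange(4)                # ls(e) = {0, 1, 2, 3}, as in Figure 3
deltas = np.linspace(0.0, 1.0, 51)     # points along the edge, delta in [0, 1]
for lam in (0.0, 0.5, 0.9):
    B = simulate_B(lifespan, deltas, lam, rng)
    # Each row is a Brownian-bridge path; larger lam makes paths at consecutive times closer.
    print(f"lambda = {lam}: max |B(0, .) - B(1, .)| = {np.abs(B[0] - B[1]).max():.3f}")
```

For λ = 0 the bridges at different times are independent, recovering the static behaviour mentioned above.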
Figure 3 depicts some realisations of the process Z_E over an edge for different values of the parameter λ. The parameter λ for the edges plays a role similar to that of the parameter α for the nodes: they are both closely related to the inter-dependency of the process at different times. Indeed, λ measures how much the process Z_E is correlated between two times t_1, t_2 ∈ ls(e). Analogously, the value α, being the weight of the temporal edge e ∈ E_T, is related to the partial correlation of the endpoints of e given everything else. As a consequence, it is natural to choose a high value of λ for high values of α, and vice versa. We notice that it is natural to choose non-negative values for λ, as we usually expect a non-negative correlation between the values of Z_E at close times.

Using equation (7), we obtain the expression (13) for the distance between any two points u_1, u_2 ∈ G.

Figure 4: An example of an equivalent simple graph for which the semi-distance defined at (13) does not satisfy the triangle inequality.

The formal statement below provides a complete description of the space (G, d), that is, the time-evolving graph G equipped with the metric d.

Proposition 2. Let d be the mapping defined at (13). Then, the pair (G, d) is a semi-metric space.

One might ask whether a stronger assertion holds for the pair (G, d) as defined above. The next statement provides a negative answer. A counterexample is given by the graph depicted in Figure 4, where the length of the top edge (A_2, B_2) becomes vanishingly small, while the length of the bottom edge (A_0, B_0) grows to infinity. For such a graph, we have d(P, Q) + d(Q, R) < d(P, R) (see the proof of Proposition 3 in the Appendix for more details).

Proposition 3. Let d be the mapping defined at (13). Then, the pair (G, d) is not a metric space.

Remark 2. Although in general our extension of the classic resistance distance is not a metric, it retains some of its properties. In particular, for all t, (G_t, d_{G_t}) coincides with the restriction to G_t of the resistance metric of Anderes et al. [2020] computed on the whole graph G; hence it is a metric and it is invariant to splitting edges and to merging edges at degree-2 vertices [Anderes et al., 2020, Propositions 2 and 3].

Anderes et al. [2020] defined graphs with Euclidean edges as a generalisation of linear networks and of Euclidean trees with a given number of leaves. For both cases, edges are linear. This case is not especially exciting for the framework proposed in this paper. The reason is that a simple isometric embedding argument as in Tang and Zimmerman [2020] proves that one can embed a time-evolving linear network in R × R^2 = R^3, where the first component indicates time. As a consequence, it is immediate to build a vast class of covariance functions on a time-evolving linear network by a sheer restriction of a given covariance function defined on R^3. However, such a method does not take into account the structure of the graph: two points that are close in R^3 but far apart in the time-evolving graph could have a high correlation. Section 6 illustrates how to build kernels over the special topologies proposed in this paper. Admittedly, the choices are more restrictive than the ones available for the case of linear networks, but they ensure that the spatio-temporal structure is taken into account.
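The vertex-level part of the construction (the covariance L^+ of Z_V and the resistance distance of Subsection 2.3) can be sketched numerically as follows. The code builds the equivalent simple graph of a small time-evolving graph with identical layers, adds temporal edges of weight α between copies of the same node in adjacent layers (the Markov structure), and computes effective resistances via the Moore-Penrose pseudo-inverse; it then repeats the computation with one extra layer to illustrate the phenomenon, discussed in the next section, that adding layers can decrease the distances between existing points. The example graph and the value of α are arbitrary, and the continuous part of the metric (the Z_E term) is not reproduced here.

```python
import numpy as np

def laplacian(n, weighted_edges):
    """Graph laplacian L = D - W for an undirected weighted graph on n vertices."""
    W = np.zeros((n, n))
    for u, v, w in weighted_edges:
        W[u, v] += w
        W[v, u] += w
    return np.diag(W.sum(axis=1)) - W

def resistance(Lplus, u, v):
    """Effective resistance R(u, v) = (e_u - e_v)^T L^+ (e_u - e_v)."""
    e = np.zeros(Lplus.shape[0]); e[u] = 1.0; e[v] = -1.0
    return float(e @ Lplus @ e)

def layered_laplacian(spatial_edges, n_nodes, m_layers, alpha):
    """Equivalent simple graph: m identical layers plus temporal edges of weight alpha
    between copies of the same node at adjacent time instants (Markov structure)."""
    edges = []
    for t in range(m_layers):
        off = t * n_nodes
        edges += [(off + u, off + v, w) for (u, v, w) in spatial_edges]
    for t in range(m_layers - 1):
        edges += [(t * n_nodes + v, (t + 1) * n_nodes + v, alpha) for v in range(n_nodes)]
    return laplacian(n_nodes * m_layers, edges)

# Toy layer: a 4-cycle A-B-C-D with unit weights (labels 0..3); alpha is illustrative.
spatial = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
alpha = 1.0

for m in (3, 4):                          # 3 layers, then one extra layer appended
    L = layered_laplacian(spatial, 4, m, alpha)
    Lp = np.linalg.pinv(L)
    d = resistance(Lp, 0, 2)              # distance between A_0 and C_0 (first layer)
    print(f"m = {m} layers: R(A_0, C_0) = {d:.4f}")
# The distance within the first layer does not increase (and typically decreases) as layers are added.
```

This is only a vertex-level illustration; the full semi-metric (13) also involves the Brownian-bridge contribution on the edges.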
Circular time and periodic graphs Perhaps the main drawback of using the resistance distance in the layer graphs that express the spatio-temporal variability is that, when adding one or more new layers, the distances between the points of the previous layers may change (more specifically: they may decrease).Indeed, whenever new paths between a couple of points are added, the effective distance between such points decreases, as the current meets less resistance.This presents a critical interpretation problem: for a given time series, let new data be added on a daily basis.Then, inference routines may provide different results when compared to the results of same inference techniques applied to the updated time series.Indeed, as the distances may vary, the covariances between the same space-time points may vary as well. Here, we consider the alternative of time-evolving periodic networks, i.e. time-evolving networks whose evolution repeats after a fixed amount of time instants (number of layers).Not only does this construction solve the abovementioned issue, but it also suits many phenomena whose evolution present both linear and periodic components. Definition 4 (Time-evolving periodic graph).Let G = {G 0 , G 1 , . ..} be a countable sequence of graphs.Then, G is a time-evolving periodic graph if there exists a natural number m ≥ 3 such that, for all t ∈ N, G t = G t+m .Its equivalent simple Markovian periodic graph G is built by connecting G 0 , . . ., G m−1 through the set of edges (14) Each point in the resulting space-time is denoted by its true time t ∈ R + 0 , by the endpoints of the edge e it lies on and by the relative distance δ e (u) from the first one: we write u = (t, u, u, δ e (u)), where e = (u, u).Notice that t ∈ N whenever u belongs to a temporal layer, while t is not integer if u belongs to the inner part of a temporal edge e ∈ E T .Given a point u ∈ V ∪ E S , we sometimes write τ (u) as the unique layer τ ∈ {0, . . ., m − 1} that contains u.Clearly τ (u) ≡ t(u) (mod m). We start by noting that the previously-mentioned issue about linear timeevolving graphs is overcome by this construction.Indeed, once the full periodic structure has been established, the Laplacian matrix needs be computed only once, regardless of how many new time points are added. A second remark comes from the metric construction, which necessarily needs to be adapted to a periodic process.Otherwise, some counter-intuitive properties can arise.Suppose the distance d(u 1 , u 2 ) is defined as in (7).Then, for any couple of points u 1 = (t 1 , u 1 , u 1 , δ e (u 1 )) and u 2 = (t 2 , u 2 , u 2 , δ e (u 2 )) with u 1 = u 2 , u 1 = u 2 and δ e (u 1 ) = δ e (u 2 ), even when t 1 ̸ = t 2 , the distance would be identically equal to zero.Hence, a different definition for the process Z is necessary. The definition of life of any edge e ∈ E remains unchanged: The definition of Z E is now identical to the one of the linear-time graph, exception made for the choice of the temporal kernel k T .Indeed, we ought to consider that the time is now cyclic in the dependence structure of the temporal layers τ ∈ {0, ..., m − 1}.It is reasonable to model the process underlying the temporal kernel k T by means of a graphical model, as it embodies the idea of conditional independence.For a given spatial edge e ∈ E S , we distinguish two cases: whether the lifespan of e is the whole temporal set T = {0, ..., m − 1} or not. 
The lifespan coincides with T. In this case, we define the covariance matrix of a zero-mean Gaussian random vector Z_T : {0, ..., m − 1} → R via its precision matrix. More precisely, let G_T be the circulant graph with m nodes (labelled by τ ∈ {0, ..., m − 1}) and m edges between adjacent nodes, as shown in Figure 6. We associate each edge with a given weight ρ ∈ [0, 1/2), which represents the partial correlation between subsequent times. As a consequence, the precision matrix Θ_{Z_T} is the circulant matrix whose diagonal entries all equal κ and whose only non-zero off-diagonal entries, located at the positions of cyclically adjacent nodes, equal −κρ. Here, κ > 0 is a normalising constant whose role is to make the covariance matrix Σ_{Z_T} := (Θ_{Z_T})^{−1} a correlation matrix (namely, the variance of every entry of Z_T should be 1). Notice that the matrix Σ_{Z_T} is a symmetric circulant matrix: as a consequence, it is possible to store only its first column, which will be denoted by σ_{Z_T} ∈ R^m. In Figure 7, the values of the vector σ_{Z_T} are plotted for some values of ρ and for m = 8 and m = 20.

The lifespan does not coincide with T. In this case, the life of the edge e is interrupted. Thus, it is reasonable to consider the different parts of the life of e as independent. To this aim, we consider the subgraph of G_T that represents the evolution of the edge e. More precisely, we remove from G_T all the nodes τ for which the edge e does not exist, and we remove from G_T all the edges with at least one eliminated endpoint. Next, we consider all the connected components of the so-obtained graph and define an autoregressive model on each of them, independently from the others (similarly to the linear case of Section 4.2). More precisely, we define the covariance matrix of the process Z_T as a block-diagonal matrix whose diagonal blocks are of the form (λ^{|a−b|})_{a,b=1,...,j}, where λ ∈ (−1, 1) is the lag-1 correlation and j is the number of times τ that belong to ls(e), i.e. j := |ls(e)|.
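A small numerical sketch of the circular-time construction for an edge whose lifespan is the whole of T follows. Under the assumption, made explicit here since the displayed matrix is not reproduced above, that Θ_{Z_T} = κ(I − ρC) with C the adjacency matrix of the m-cycle, the code builds the precision matrix for given m and ρ, inverts it, and rescales so that Σ_{Z_T} is a correlation matrix; the first column is the vector σ_{Z_T} plotted in Figure 7. The values of m and ρ are illustrative.

```python
import numpy as np

def circular_time_correlation(m, rho):
    """Correlation matrix Sigma_{Z_T} of the circular-time process Z_T.
    Assumed precision structure: Theta = kappa * (I - rho * C), with C the adjacency matrix
    of the m-cycle, so that the partial correlation between adjacent times equals rho and
    kappa normalises the variances to 1 (rho in [0, 1/2) keeps Theta positive definite)."""
    C = np.zeros((m, m))
    for t in range(m):
        C[t, (t + 1) % m] = C[(t + 1) % m, t] = 1.0
    M = np.eye(m) - rho * C
    Minv = np.linalg.inv(M)
    kappa = Minv[0, 0]                  # circulant structure: all diagonal entries coincide
    return Minv / kappa                 # Sigma_{Z_T}, a correlation matrix

for m, rho in [(8, 0.3), (8, 0.45), (20, 0.45)]:
    Sigma = circular_time_correlation(m, rho)
    sigma_first_col = Sigma[:, 0]       # the vector sigma_{Z_T} (cf. Figure 7)
    print(f"m = {m}, rho = {rho}: sigma_{{Z_T}} =", np.round(sigma_first_col, 3))
```

The printed columns decay away from lag 0 and rise again near lag m, reflecting the cyclic dependence structure.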
Second-order properties of Z in the circular case. The following result illustrates the analytic expression for the covariance function associated with Z in the construction of the metric associated with G in the periodic case: the kernel of the process Z defined on G enjoys the representation given in (16). Notice that the only differences with the expression (11) are the different choice for the temporal kernel k_T and the additional addend β² min(t_1, t_2). The latter ensures that the same points at different times have a strictly positive distance since, combining equations (13) and (16) for such points u_1 and u_2, we obtain a strictly positive quantity. We conclude this section with a formal assertion regarding the mapping d, as introduced for the case of a periodic graph G.

General construction principles. Variograms can be composed with certain classes of functions to create reproducing kernels associated with semi-metric spaces. A function ψ : [0, +∞) → R is called completely monotone if it is continuous on [0, +∞), infinitely differentiable on (0, +∞) and, for each i ∈ N and every x > 0, it holds that (−1)^i ψ^(i)(x) ≥ 0, where ψ^(i) denotes the i-th derivative of ψ and ψ^(0) := ψ. By the celebrated Bernstein theorem [Bernstein, 1929], completely monotone functions are the Laplace transforms of positive and bounded measures. Some examples of parametric families of completely monotone functions are listed in Table 1.

Table 1: Parametric families of completely monotone functions ψ(x), with the corresponding parameter ranges (power exponential, among others). Here, K_r denotes the modified Bessel function of the second kind.

The result below follows directly by using arguments similar to those in Anderes et al. [2020], which have been reported as Theorem 1 in Appendix B.

Proposition 6. Let ψ : [0, ∞) → R be continuous, completely monotonic on the positive real line, and with ψ(0) < ∞. Let d : G × G → R be the mapping defined at (13). Then, the function (u_1, u_2) ↦ ψ(d(u_1, u_2)) is a strictly positive definite function.

Proposition 6 provides a very easy recipe to build kernels over time-evolving graphs, whatever the temporal structure (linear or periodic). Any element from Table 1 is a good candidate for such a composition. We do not report the corresponding algebraic forms for obvious reasons. Instead, we concentrate on illustrating how these covariances work through two practical examples. We believe that the free parameters and the large number of analytically-tractable completely monotone functions provide a wide range of models that could fit several real-world frameworks.

Linear time. We start by considering the graph in Figure 8. Here, we have m = 3 time instants. We focus on the distances as well as the covariances between the points A_0 = (A_0, B_0, δ_{A_0} = 0), P := (C_0, D_0, δ_P = 0.8) and Q := (C_2, D_2, δ_Q = 0.5). All the spatial edges E_S have weight 1, whilst the temporal edges have weight α > 0. Finally, we use k_T as in (12), with λ a free parameter.

Figure 9 clearly shows the effect that the temporal edge parameter α plays on the distances: while it has a considerable impact on the distances d(A_0, Q) and d(P, Q), it shows a negligible effect on d(A_0, P). This is reasonable given the graph structure: A_0 and P belong to the same layer (t = 0) and they are connected by both the paths A_0, D_0, P and A_0, B_0, C_0, P, which lie completely on t = 0 (and therefore do not change with α). On the other hand, Q can be connected to A_0 and P only via paths that include temporal edges. As a consequence, if α → 0^+, both d(A_0, Q) and d(P, Q) will go to infinity.

The plot on the right of Figure 9 shows the effect of the correlation parameter λ as well. Whilst it does not influence the distances concerning A_0 (since it is a vertex), as it increases, it reduces the distance between P and Q. Clearly, the effect is more significant for large values of the parameter α. Indeed, when α is small, the distances between nodes at different time instants are large. As a consequence, the second line of equation (11) becomes negligible when compared to the first one.

Figure 10 shows the resulting effect of the parameter α on the correlations between A_0, P and Q, generated by the composition of two completely monotone functions taken from Table 1 with the distances shown in Figure 9.

Figure 10: Generated covariances between the points A_0, P and Q. Left: exponential kernel with parameters (α = 1, β = 1) (see Table 1). Right: generalised Cauchy kernel with parameters (α = 1, β = 5, ξ = 0.5) (see Table 1).
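A sketch of the recipe of Proposition 6 follows: choose a completely monotone ψ, evaluate the semi-metric d on a set of points, and form the kernel ψ(d(·, ·)). Since Table 1's exact parametrisations are not reproduced above, the code assumes two common forms, a power exponential ψ(x) = exp(−(x/β)^α) with α ∈ (0, 1] and a generalised Cauchy ψ(x) = (1 + (x/β)^α)^(−ξ/α); the distance matrix is a stand-in (vertex-level resistance distances on a toy graph) for the full semi-metric d of (13).

```python
import numpy as np

def psi_power_exponential(x, alpha=1.0, beta=1.0):
    """Power exponential family (completely monotone for 0 < alpha <= 1); assumed form."""
    return np.exp(-(x / beta) ** alpha)

def psi_generalised_cauchy(x, alpha=1.0, beta=5.0, xi=0.5):
    """Generalised Cauchy family; assumed parametrisation."""
    return (1.0 + (x / beta) ** alpha) ** (-xi / alpha)

def kernel_from_distance(D, psi, **pars):
    """Proposition 6: compose a completely monotone psi with the semi-metric d."""
    return psi(D, **pars)

# Stand-in distance matrix: vertex-level resistance distances on a toy 4-cycle
# (a placeholder for the full semi-metric d of (13) on the time-evolving graph).
W = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(1)) - W
Lp = np.linalg.pinv(L)
D = np.add.outer(np.diag(Lp), np.diag(Lp)) - 2 * Lp   # R(u, v) = Lp[u,u] + Lp[v,v] - 2 Lp[u,v]

K_exp = kernel_from_distance(D, psi_power_exponential, alpha=1.0, beta=1.0)
K_gc = kernel_from_distance(D, psi_generalised_cauchy, alpha=1.0, beta=5.0, xi=0.5)
for name, K in [("exponential", K_exp), ("generalised Cauchy", K_gc)]:
    # Both compositions are positive semi-definite, so the smallest eigenvalue is >= 0 up to rounding.
    print(name, "- smallest eigenvalue:", round(np.linalg.eigvalsh(K).min(), 6))
```

The same composition applies verbatim once D is replaced by the distances computed on a time-evolving graph, whether with linear or circular time.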
Figure 12: Distances (left) and covariances (right) for the graph in Figure 11 between the points P_0 and P_t, for ρ = 0.45 and α = 1. Covariances have been generated via the exponential kernel with parameters α = 0.5 and β = 0.5 (see Table 1).

Figure 13: Distances and covariances for the graph in Figure 11 between the points P_0 and P_t, for α = 10 and β = 0.3. Covariances have been generated via the Dagum kernel with parameters α = 1, β = 2 and ξ = 0.5 (see Table 1).

Note the spikes that the covariance functions show: they perfectly embody the periodic setting of a process, as introduced in Subsection 1.3. It is in order to remark that, although isotropic covariances are decreasing functions of the spatial distance, Figures 12 and 13 show valid covariance functions, as the distance of our setting is completely different from the Euclidean distance on R^n: it takes into account the spatio-temporal structure of the time-evolving graph.

In Figure 13, it is possible to visualise the effect of the partial correlation parameter ρ ∈ [0, 1/2). First, notice that its role is particularly significant when the weight α is high. Indeed, for low values of α, the covariance structure of the vertices given by the inverse laplacian matrix is dominant. Yet, when α is high, the nodes at different time instants are considered close to each other and the resulting distances given by the sheer first line of (16) are low. Thus, the parameter ρ (which enters the second line of (16)) has a greater influence. Clearly, the greater ρ is, the lower the distance, as ρ correlates different Brownian bridge realisations.

Crime data on networks have been considered with strong emphasis on computer and algorithmic complexity [Chen et al., 2004]. More recently, there has been considerable interest in the statistical assessment of such data over networks, and the reader is referred to Bernasco and Elffers [2010] and to the more recent paper by Firinguetti et al. [2023], with the references therein. The connections (correlations and causalities) studied through generalised networks are of main interest in psychometrics [Epskamp et al., 2017] and in genetics [Yip and Horvath, 2007], and the literature is starting to produce contributions where the interconnections are continuously defined over the edges and not only at the nodes.

Several fundamental research questions are inspired by the contribution in this paper. To mention a few, regularity properties of temporally evolving stochastic processes defined over generalised networks can now be studied thanks to the framework provided here. Another field of research that will benefit from our work is that of stream flows, where the graphs are oriented (think of a river with currents in a given direction). In finance, there is a fertile literature on causality, with special emphasis on graph causality, and the reader is referred to the monumental effort in Lopez de Prado [2022]. Studying causality under continuity, as in our approach, is definitely a major challenge for the future.

Other major theoretical challenges involve the definition of multivariate processes over generalised networks. Further, it is imperative to relax the assumptions of stationarity and isotropy. All these will be major challenges for future research.

B. Definition of isotropic kernels on arbitrary domains

In this brief section, we state and enrich some crucial results of Anderes et al. [2020] that can be used in a variety of different frameworks.
While Theorem 1 furnishes a straightforward recipe for the definition of kernels as compositions of variograms and completely monotone functions, Proposition 7 characterises the separation property and the triangle inequality for a variogram. As a sheer application of the former, we obtain the proof of Proposition 6.

Theorem 1. Let Z be a stochastic process defined on a set X such that E(Z²(x)) < +∞ for all x ∈ X. Define d : X × X → R as the variogram of Z. In addition, let ψ be a non-constant completely monotone function on [0, +∞).

Proposition 7. Let Z, X and d be as in Theorem 1.

Notice that the process Z_E is defined on all the vertices V (where it is zero) and that the expression (19) is meaningful even when any of the points u_1 and u_2 belongs to V. Indeed, if, say, u_1 ∈ V, then δ_1 ∈ {0, 1} regardless of which incident edge (u, v) is taken in the expression u_1 = (u, v, δ). As a consequence, the last factor in (19) vanishes and the covariance is therefore null. Finally, notice that, since Z_V and Z_E are independent, the covariance function of Z is simply the sum of (18) and (19).

Proof of Proposition 3. Consider the equivalent simple graph represented in Figure 4, where all the weights are 1, except for the edges (A_0, B_0) and (A_2, B_2), which have weights ε and 1/ε respectively, for a sufficiently small ε > 0. Considering the vertices in the order A_0, B_0, A_1, ..., B_2, the laplacian matrix L is the one obtained from (4).

Figure 1: Left: a linear network. Right: a graph with Euclidean edges, where the bijections between the edges e_1 and e_2 and their respective real segments [0, ℓ(e_1)] and [0, ℓ(e_2)] are stressed.
Figure 3: Draws from the process Z_E on an edge with lifespan {0, 1, 2, 3}, for several values of the parameter λ.
Figure 5: An example of an equivalent simple graph for a periodic time-evolving graph with m = 4 and S = {A, B, C, D}. The coloured edges belong to E_S, whilst the black ones belong to E_T.
Figure 6: Conditional dependence structure of the process Z_T for m = 8.
Figure 8: Equivalent simple graph taken as an example for the linear-time case.
Figure 9: Distances between the points A_0, P and Q. Notice that while d(A_0, P) and d(A_0, Q) (left) do not depend on λ, the distance d(P, Q) (right) decreases as λ increases.
Figure 12: Distances (left) and covariances (right) for the graph in Figure 11 between the points P_0 and P_t for ρ = 0.45 and α = 1. Covariances have been generated via the exponential kernel with parameters α = 0.5 and β = 0.5 (see Table 1).
The purity phenomenon for symmetric separated set-systems

Let $n$ be a positive integer. A collection $\cal S$ of subsets of $[n]=\{1,\ldots,n\}$ is called {\it symmetric} if $X\in {\cal S}$ implies $X^\ast\in {\cal S}$, where $X^\ast:=\{i\in [n]\colon n-i+1\notin X\}$. We show that in each of the three types of separation relations, {\it strong}, {\it weak} and {\it chord} ones, the following "purity phenomenon" takes place: all inclusion-wise maximal symmetric separated collections in $2^{[n]}$ have the same cardinality. These give "symmetric versions" of well-known results on the purity of usual strongly, weakly and chord separated collections of subsets of $[n]$, and in the case of weak separation, this extends a recent result due to Karpman on the purity of symmetric weakly separated collections in $\binom{[n]}{n/2}$ for $n$ even.

Sets A, B ⊆ [n] are called strongly separated if there are no three elements i < j < k of [n] such that one of A − B and B − A contains i, k, and the other contains j (equivalently, either A − B < B − A or B − A < A − B or A = B). Sets A, B ⊆ [n] are called chord separated if there are no four elements i < j < k < ℓ of [n] such that one of A − B and B − A contains i, k, and the other contains j, ℓ. Chord separated sets A, B ⊆ [n] are called weakly separated if the following additional condition holds: if A surrounds B then |A| ≤ |B|, and if B surrounds A then |B| ≤ |A| (where |A′| is the number of elements in A′). Accordingly, a collection A ⊆ 2^[n] of subsets of [n] is called strongly (weakly, chord) separated if any two members of A are strongly (resp. weakly, chord) separated.

The notions of strong and weak separations were introduced by Leclerc and Zelevinsky [10], and the notion of chord separation by Galashin [6]. The first two notions appeared in [10] in connection with the problem of characterizing quasi-commuting flag minors of a quantum matrix. (In particular, one shows there that in the quantized coordinate ring $O_q(M_{m,n}(K))$ of m × n matrices over a field K, where q ∈ K^*, two flag minors quasi-commute whenever their column sets are weakly separated. For a discussion on this and wider relations between the weak/strong separation and quantum minors, see also [1, Sect. 8].)

For brevity we will refer to strongly, weakly, and chord separated collections as s-, w-, and c-collections, respectively. The sets of such collections in 2^[n] are denoted by S_n, W_n, and C_n, respectively. As is shown in [10],

(1.1) the maximal possible sizes of strongly and weakly separated collections in 2^[n] are the same and equal to $s_n := \binom{n}{2}+\binom{n}{1}+\binom{n}{0}$ $\left(= \tfrac12 n(n+1)+1\right)$.

When dealing with one or another set (class) C of collections in 2^[n], one says that C is pure if any maximal by inclusion collection in it is maximal by size (viz. number of members). Leclerc and Zelevinsky showed in [10] that the set S_n is pure and conjectured that W_n is pure as well. This was affirmatively answered in [2], by proving that

(1.2) any w-collection in 2^[n] can be extended to a w-collection of size s_n.

(One application of the purity of W_n mentioned in [10] concerns the dual canonical basis in $O_q(M_{m,n}(K))$ containing all quasi-commuting monomials. For other interesting classes of w-collections with the purity behavior, see [11, 12, 3].)

The corresponding purity result for chord separation was obtained by Galashin [6]:

(1.3) any c-collection in 2^[n] can be extended to a c-collection of size $c_n := \binom{n}{3}+\binom{n}{2}+\binom{n}{1}+\binom{n}{0}$.

Throughout, three natural maps will be of use: (i) the order-reversing map $i\mapsto i^\bullet:=n-i+1$ on $[n]$; (ii) the complementation $A\mapsto \bar A:=[n]-A$; and (iii) the map $A\mapsto A^\ast:=\{i\in[n]\colon n-i+1\notin A\}$ from the abstract. Clearly (i) and (ii) are involutions: $(i^\bullet)^\bullet = i$ and $\bar{\bar A} = A$. And (iii) is viewed as the composition of these two involutions (which commute); so it is an involution as well: $(A^\ast)^\ast = A$.
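The separation relations and the involution $X\mapsto X^\ast$ can be checked mechanically; a minimal sketch follows (an illustration only: weak separation is omitted here because the "surrounds" condition is defined elsewhere in the paper, and the example sets are arbitrary).

```python
from itertools import combinations

def strongly_separated(A, B):
    """No i < j < k with i, k in one of A - B, B - A and j in the other."""
    AB, BA = A - B, B - A
    for X, Y in ((AB, BA), (BA, AB)):
        for i, k in combinations(sorted(X), 2):
            if any(i < j < k for j in Y):
                return False
    return True

def chord_separated(A, B):
    """No i < j < k < l with i, k in one of A - B, B - A and j, l in the other."""
    AB, BA = A - B, B - A
    for X, Y in ((AB, BA), (BA, AB)):
        for i, k in combinations(sorted(X), 2):
            if any(i < j < k for j in Y) and any(l > k for l in Y):
                return False
    return True

def k_involution(X, n):
    """X* = {i in [n] : n - i + 1 not in X}."""
    return {i for i in range(1, n + 1) if (n - i + 1) not in X}

def is_symmetric(collection, n):
    """A collection S is symmetric if X in S implies X* in S."""
    return all(frozenset(k_involution(X, n)) in collection for X in collection)

n = 4
A, B = {1, 3}, {2, 4}
print(strongly_separated(A, B))     # False: 1 < 2 < 3 alternates between A - B and B - A
print(chord_separated(A, B))        # False: 1 < 2 < 3 < 4 alternates
print(k_involution({1}, n))         # {1, 2, 3}
print(is_symmetric({frozenset({1}), frozenset({1, 2, 3})}, n))   # True
```

Such brute-force checks are only practical for small n, but they make the forbidden-pattern definitions above completely explicit.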
Involution (iii), which is of most interest to us in this paper, was introduced by Karpman [9] in the special case when n is even and |A| = n/2. (In that case, the author treats the so-called symmetric plabic graphs, which are closely related to the corresponding Lagrangian Grassmannian, the space of maximal isotropic subspaces with respect to a symplectic form.) For convenience, we will refer to involution (iii) as the K-involution and use the following definition for corresponding set-systems. Definition. A collection S ⊆ 2 [n] is called symmetric if it is closed under the Kinvolution, i.e., A ∈ S implies A * ∈ S. Using a technique of plabic tilings and relying on the purity of the set of wcollections in a "discrete Grassmannian" [n] m = {A ⊆ [n] : |A| = m} with m ∈ [n] (cf. [11]), Karpman showed the following Theorem 1.1 ([9]) For n even, all inclusion-wise maximal symmetric w-collections in [n] n/2 have the same cardinality, which is equal to n 2 /4 + 1. (This coincides with the maximum cardinality when the symmetry condition is discarded.) We give the following generalization of that result. Theorem 1.2 For any n ∈ Z >0 , all inclusion-wise maximal symmetric w-collections in 2 [n] have the same cardinality. When n is even, it is equal to s n . When n is odd, it is equal to s n − (n − 1)/2. Remark 1. Theorem 1.2 implies Theorem 1.1. Moreover, one can show a sharper result, as follows. For an even n and an integer k such that 0 ≤ k < n/2, let Λ n,k denote the union of sets [n] i over n/2 − k ≤ i ≤ n/2 + k. We assert that: ( * ) all inclusion-wise maximal symmetric w-collections S in Λ n,k have the same size (which gives Theorem 1.1 when k = 0). To see this, given a symmetric w-collection S in Λ n,k , extend it to the collection W ⊂ 2 [n] by adding the set I of all intervals I ⊆ [n] of size ≥ n/2 + k and the set J of all co-intervals J ⊂ [n] of size ≤ n/2 − k (including the empty co-interval ∅). Hereinafter, an interval in [n] is meant to be a set of the form {a, a + 1, . . . , b} ⊆ [n], denoted as [a..b] (in particular, [n] = [1..n]), and a co-interval is the complement I = [n] − I of an interval I. One can check that: (i) each interval I is weakly separated from any set A ⊆ [n] with |A| ≤ |I|; symmetrically, each co-interval J is weakly separated from any B ⊆ [n] with |B| ≥ |J|; (ii) for any I ∈ I, I * is a co-interval in J , a vice versa; (iii) any set A ∈ 2 [n] − I with |A| > n/2 + k is not weakly separated from some I ∈ I; symmetrically, any B ∈ 2 [n] − J with |B| < n/2 − k is not weakly separated from some J ∈ J . These properties imply that S as above is inclusion-wise maximal in Λ n,k if and only if W := S ∪ I ∪ J is inclusion-wise maximal in 2 [n] , whence assertion ( * ) follows from Theorem 1.2. Also, given n, k, we can easily express |W| via |S|, and back. The proof of Theorem 1.2 relies on (1.2) and is based on a geometric approach: it attracts a machinery of combined tilings, or combies for short, which are certain planar polyhedral complexes on two-dimensional zonogons introduced and studied in [3]. It turns out that when n is even, there is a natural bijection between the maximal symmetric w-collections in 2 [n] and the so-called "symmetric combies" on the zonogon Z(n, 2) (this is analogous to the existence of a bijection between usual maximal w-collections and combies, see [3,Theorems 3.4,3.5]). 
As a by-product of our method of proof of Theorem 1.2, we obtain a similar purity result for the strong separation: (1.5) all inclusion-wise maximal symmetric s-collections in 2 [n] have the same cardinality; it is equal to s n when n is even, and s n − (n − 1)/2 when n is odd. The next group of results of this paper concerns symmetric chord separated collections. Our main theorem in this direction is as follows. (Note that this can be regarded as a generalization of Theorem 1.1 as well since within any domain of the form [n] k the notions of weak and chord separations coincide.) An important ingredient of the proof of this theorem is the geometric characterization of maximal chord separated collections in 2 [n] in terms of fine zonotopal tilings, or cubillages, of 3-dimensional cyclic zonotopes Z(n, 3), due to Galashin [6] (the term "cubillage" that we prefer to use in this paper appeared in [8]). We establish a symmetric analog of that nice property (valid for both even and odd cases of n): any maximal symmetric c-collection in 2 [n] can be expressed by the vertex set of a symmetric cubillage on Z(n, 3). This paper is organized as follows. Section 2 contains basic definitions and reviews some known facts. In particular, it explains the notions of combined tilings, or combies, and fine zonotopal tilings, or cubillages (in the 3-dimensional case), and recalls basic results on them needed to us. Section 3 deals with the even color case of symmetric weakly separated collections and proves Theorem 1.2 for n even. The odd color case of this theorem is studied in Section 4. Section 5 is devoted to the even color case of symmetric chord separated collections, giving the proof of Theorem 1.3 for n even. The odd case of this theorem is shown in the concluding Section 6. This section finishes with a slightly sharper version of Theorem 1.3 (in Remark 6). Also we add two more results in the ends of Sections 5 and 6 (Theorems 5.1 and 6.2) which are devoted to geometric constructions related to embeddings of maximal symmetric w-collections in maximal c-collections.) It should be noted that the above purity results do not remain true for "higher" symmetric separation. Recall that sets A, B ⊆ [n] are called (strongly) k-separated if there are no k + 2 elements i 1 < i 2 < · · · < i k+2 of [n] such that the elements with odd indexes belong to one, while those with even indexes to the other set among A − B and B − A. In particular, chord separated sets are just 2-separated ones. As is shown in [7], when k ≥ 3, a maximal by inclusion k-separated collection in 2 [n] need not be maximal by size. (In fact, a counterexample to the purity with k = 3 given there can be adjusted to the symmetric 3-separation as well.) Surprisingly, the maximal by size symmetric k-separated collections in 2 [n] possess nice structural and geometric properties; they are systematically studied in [5] in the context of higher Bruhat orders of types B and C (where some open questions and conjectures are raised as well). One important property among those is that such collections for n, k even can be connected by use of symmetric local mutations (or "flips") yielding a poset structure with one minimal and one maximal elements. More about symmetric flips will appear in a forthcoming paper. Preliminaries In this section we give additional definitions and notation and review some facts about combined tilings and cubillages needed for the proofs of Theorems 1.2 and 1.3. 
• For an edge e of a directed graph G without parallel edges, we write e = (u, v) if e connects vertices u and v and is directed (or "going") from u to v. A path in G is a sequence P = (v 0 , e 1 , v 1 , . . . , e k , v k ) in which each e i is an edge connecting vertices v i−1 and v i . It is called a directed path if each edge e i is directed from v i−1 to v i . When it is not confusing, we may write P = v 0 v 1 . . . v k (using notation via vertices). • Let n be a positive integer. Define m := ⌊n/2⌋; then n = 2m if n is even, and n = 2m + 1 if n is odd. Instead of colors 1, 2, . . . , n (forming [n]), it will be often more convenient to deal with the set of "symmetric colors" −i, i for i = 1, . . . , m, to which we also add color 0 when n is odd. This gives the symmetrized color sets So i • = −i for each color i, and the only self-symmetric color is 0 (when n is odd). • For A ⊆ [n] and p = 0, 1, 2, define Π p (A) to be the set of symmetric color pairs We say that the pairs in Π 0 (A), Π 1 (A), and Π 2 (A) are, respectively, poor, ordinary, and full for A. (Note that if n is odd and i = m + 1, then the "middle" pair {i, i • = i} is regarded as either poor or full in A.) In these terms, we observe a useful relationship between symmetric sets A and A * : When dealing with colors in the symmetrized form as above, the sets Π p (A) are defined accordingly. For a symmetric collection S ⊆ 2 [n] and h = 0, 1, . . . , n, define h-th level of S as Then S h consists of the sets A ∈ S with Π 1 (A) + 1 2 Π 2 (A) = h, and (2.2) implies where we extend the operator * to collections in 2 [n] in a natural way. The next two subsections review the constructions of combined tilings and cubillages, which are the key objects in our proofs of Theorems 1.2 and 1.3, respectively. Zonogon and combies. Let Ξ be a set of n vectors ξ i = (x i , y i ) ∈ R 2 such that where each δ i is a sufficiently small positive real. In addition, we assume that (2.4) (i) Ξ satisfies the strict concavity condition: for any i < j < k, there exist λ, λ ′ ∈ R >0 such that λ + λ ′ > 1 and ξ j = λξ i + λ ′ ξ k ; and (ii) the vectors in Ξ are Z 2 -independent, i.e., all 0,1-combinations of these vectors are different. The zonogon generated by Ξ is the 2n-gon being the Minkowski sum of segments [0, ξ i ], i = 1, . . . , n, i.e., the set When the choice of Ξ is not important to us (subject to (2.3),(2.4)), we may denote Z as Z(n, 2). Each subset X ⊆ [n] is identified with the point i∈X ξ i in Z (due to (2.4)(ii), different subsets are identified with different points). Besides ξ 1 , . . . , ξ n , we use the vectors ǫ ij := ξ j − ξ i for 1 ≤ i < j ≤ n. A combined tiling, or a combi for short, is a subdivision K of Z = Z(Ξ) into convex polygons specified below and called tiles. Any two intersecting tiles share a common vertex or edge, and each edge of the boundary of Z belongs to exactly one tile. We associate to K the planar graph (V K , E K ) whose vertex set V K and edge set E K are formed by the vertices and edges occurring in tiles. Each vertex is (a point identified with) a subset of [n]. And each edge is a line segment viewed as a parallel transfer of either ξ i or ǫ ij for some i < j. In the former case, it is called an edge of type or color i, or an i-edge, and in the latter case, an edge of type ij, or an ij-edge. An i-edge (ij-edge) is directed according to the direction of ξ i (resp. ǫ ij ), and (V K , E K ) is the corresponding directed graph. In particular, the left boundary of K (and of Z) is the directed path v 0 v 1 . . . 
v n in which each vertex v i represents the interval [i] (and the edge from v i−1 to v i has color i). And the right boundary is the directed path In what follows, for disjoint subsets A and {a, . . . , b} of [n], we will use the abbreviated notation Aa . . . b for A ∪ {a, . . . , b}, and write A − c for A − {c} when c ∈ A. There are three sorts of tiles in a combi K: ∆-tiles, ∇-tiles, and lenses. C=Bj-i II. In a lens λ, the boundary is formed by two directed paths U λ and L λ , with at least two edges in each, having the same beginning vertex ℓ λ and the same end vertex r λ ; see the right fragment of the above picture. The upper boundary U λ = (v 0 , e 1 , v 1 , . . . , e p , v p ) is such that v 0 = ℓ λ , v p = r λ , and v k = Xi k for k = 0, . . . , p, where p ≥ 2, X ⊂ [n] and i 0 < i 1 < · · · < i p (so k-th edge e k is of type i k−1 i k ). And the lower boundary L λ = (u 0 , e ′ 1 , u 1 , . . . , e ′ q , u q ) is such that u 0 = ℓ λ , u q = r λ , and u m = Y − j m for m = 0, . . . , q, where q ≥ 2, Y ⊆ [n] and j 0 > j 1 > · · · > j q (so m-th edge e ′ m is of type j m j m−1 ). Then Y = Xi 0 j 0 = Xi p j q , implying i 0 = j q and i p = j 0 . Note that X as well as Y need not be a vertex in K. Due to the concavity condition (2.4)(i), λ is a convex polygon of which vertices are exactly the vertices of U λ ∪ L λ . A. A quasi-combi K differs from a combi by the condition that in each lens λ, either the upper boundary U λ or the lower boundary L λ (not both) can consist of only one edge; we refer to λ as a lower semi-lens in the former case, and as an upper semi-lens in the latter case. If, in addition, no two upper semi-lenses can share an edge, and similarly for the lower semi-lenses, then we say that a quasi-combi K is fine. Typically, a fine quasi-combi is produced from a combi by subdividing each lens λ of the latter into two semi-lenses of different types (lower and upper ones) along the segment [ℓ λ , r λ ]. See the left fragment of the picture. The second derivative is of most use in this paper. It was introduced in [4] under the name of a fully triangulated quasi-combi, that we will abbreviate as an ftq-combi. In an ftq-combi K, all lenses are semi-lenses and, moreover, they are triangles. So an upper semi-lens is a triangle U = U(ABC) formed by three vertices A, B, C and three directed edges of types ij, jk, ik such that i < j < k. The vertices are expressed as A = Xi, B = Xj and C = Xk for some X ⊂ [n], called the root of U (which is not necessarily a vertex of K). And a lower semi-lens is a triangle L = L(A ′ B ′ C ′ ) formed by vertices A ′ , B ′ , C ′ and directed edges of types , called the root of L. See the middle and right fragments of the above picture. Typically, an ftq-combi is produced from a combi by subdividing each lens into one lower and one upper semi-lenses (forming a fine quasi-combi as in A) and then subdividing the former into lower triangles, and the latter into upper ones. Conversely, starting from an ftq-combi, if we choose, step by step, a pair of semi-lenses that share an edge and have the same type and replace them by their union, then we eventually obtain a fine quasi-combi (which preserves the set of vertices and does not depend on the choice of pairs in the process). The picture below illustrates fragments of a combi (left) and an ftq-combi (right); here lenses (λ and λ ′ ) and triangular semi-lenses are drawn bold. In the definition of a combi given in [3], the generators ξ i are assumed to have equal euclidean lengths. 
However, taking generators subject to (2.3) does not affect, in essence, the structure of combies, as well as results on them, and we may vary generators, with a due care, when needed. To simplify visualizations, it is convenient to think of edges of type i as "almost vertical", while of those of type ij as "almost horizontal" (since the values δ i in (2.3) are small). Note that any rhombus tiling turns into a combi without lenses in a natural way: each rhombus is subdivided into two "semi-rhombi" ∆ and ∇ by drawing the "almost horizontal" diagonal in it. Note that from axioms (2.3),(2.4)(i) it follows that (2.5) all vectors ξ 1 , . . . , ξ n and ǫ ij , 1 ≤ i < j ≤ n, are different. In particular, this implies that if some ∆-tile and ∇-tile share an "almost horizontal" edge (i.e. they are of the form ∆(A|BC) and ∇(A ′ |B ′ C ′ ) with BC = B ′ C ′ ) then their union is a parallelogram. As a consequence, any ftq-combi without semi-lenses is equivalent to a rhombus tiling (and vice versa). The central result on combies shown in [3] (which in turn relies on the purity of W n shown in [2]) is that there is a one-to-one correspondence between the set of combies K on Z(n, 2) and the set W n of maximal w-collections W in 2 [n] ; it is given by K → V K =: W. As a consequence, (2.6) for any ftq-combi K on Z(n, 2), the set V K of vertices (regarded as subsets of [n]) forms a maximal w-collection in 2 [n] , and conversely, any w-collection in 2 [n] is representable by the vertex set V K of some ftq-combi K. Next, in order to handle symmetric ws-collections, it is convenient to assume that the set Ξ of generating vectors ξ i = (x i , y i ) is symmetric, in the sense that: cf. (2.3). Note that in this case we may assume that conditions (2.4)(i),(ii) continue to hold. (To provide this, we first assign Z 2 -independent numbers x i for i = 1, . . . , m = ⌊n/2⌋ so that x 1 < · · · < x m < 0, and accordingly define x m+1 , . . . , x n by symmetry (where x m+1 = 0 if n is odd). Then assign symmetric y 1 , . . . , y n (with y i = y i • ) so as to satisfy the concavity condition (2.4)(i). Then slightly perturbing the values y i , if needed, we ensure that all 0,1,2-combinations of the numbers y 1 , . . . , y m are different. One can see that the resulting vectors ξ 1 , . . . , ξ n are Z 2 -independent, yielding (2.4)(ii).) When n is even, (2.7) implies that the zonogon Z := Z(Ξ) admits the reflection with respect to the horizontal line M := {(x, y) ∈ Z : y = y 1 + · · · + y n/2 }, (2.8) called the middle line of Z. Moreover, we observe that (2.9) for any A ⊆ [n], the sets A and A * are symmetric w.r.t. M, or M-symmetric for short, which means that their corresponding points ( In other words, M contains merely self-symmetric sets A = A * and all these. Also the middle line M enables us to define an important class of ftq-combies (extending the notion of M-symmetry to subsets of points in Z in a natural way). Definition. Let n be even. An ftq-combi K on Z is called symmetric if for any tile of K, its M-symmetric tile belongs to K as well. In particular, V K is symmetric. We shall see in Sect. 3 that such ftq-combi do exist, and moreover, they just give rise to all maximal symmetric w-collections in 2 [n] . On the other hand, no "symmetric ftq-combi" can be devised when n is odd, as we explain in Sect. 4. (Note that for our purposes, for a vector v = (a, b, c) ∈ R 3 , it is more convenient to interpret b as the vertical coordinate (height), a as the left-to-right coordinate, and c as the depth of v. 
So all vectors in Θ have the unit height.) An example with n = 5 is illustrated in the picture (where The zonotope Z(Θ) generated by Θ is the Minkowski sum of line segments [0, θ i ], i = 1, . . . , n. Then a fine zonotopal tiling, or a cubillage, in terminology of [8], is (the polyhedral complex determined by) a subdivision Q of Z(Θ) into 3-dimensional parallelotopes such that: any two intersecting ones share a common face, and each face of the boundary of Z(Θ) is entirely contained in some of these parallelotopes. For brevity, we refer to these parallelotopes as cubes, and to Q as a cubillage. Note that the choice of one or another cyclic configuration Ξ (subject to (2.11)) is not important to us in essence, and we usually write Z(n, 3) rather than Z(Θ), referring to it as the (cyclic 3-dimensional) zonotope with n colors. Like the case of zonogons and combies, each vertex v of a cubillage Q (i.e., a vertex of some cube in it) is viewed as i∈X θ i for some subset X ⊆ [n], and we identify such v and X. The set of vertices of Q (as subsets of [n]) is called the spectrum of Q and denoted as V Q . One shows that |V Q | is equal to c n as in (1.3), and an important result due to Galashin establishes a relation of cubillages to chord separation. For a closed subset U of points in Z = Z(n, 3), the front (rear ) side of U, denoted as U fr (resp. U rear ), is defined to be the set of points i.e., consisting of the points of U with locally minimal (resp. maximal) depths. In particular, Z fr (Z rear ) denotes the front (rear) side of the entire zonotope Z; it is well-known that the vertices occurring in Z fr (Z rear ) are exactly the intervals (resp. co-intervals) in [n]. Next, to handle symmetric c-collections, we will deal with a symmetric set Θ of generating vectors θ i = (t i , 1, φ(t i ))), which means that and consider the corresponding symmetric zonotope Z(Θ). It turns out that, in contrast to the situation when symmetric (ftq)-combies exist only for n even, symmetric cubillages on a symmetric cyclic zonotope do exist in both even and odd cases, as we shall see in Sects. 5 and 6. Maximal symmetric w-collections: even case In this section, we throughout assume that n is even. Our goal is to prove Theorem 1.2 in this case. We consider a symmetric zonogon Z = Z(Ξ) ≃ Z(n, 2), and an important role is played by the middle line M in Z (defined in (2.8)). We know (cf. (2.10)) that all points A ⊂ [n] lying on M are self-symmetric, have size n/2, and admit only ordinary pairs. Consider two distinct points A, B in M. Since |A| = |B|, the symmetric difference A△B (= (A − B) ∪ (B − A)) has size at least 2. This is strengthened as follows (this will be used in the next section): contains exactly one elements of {j, j • }). But then |A△B| ≥ 3, a contradiction. Next we prove the theorem (with n even) as follows. Let C be a maximal by inclusion symmetric w-collection in 2 [n] and suppose, for a contradiction, that |C| < s n . Extend C to a maximal (non-symmetric) w-collection W ⊂ 2 [n] . Then |W| = s n , and in view of (2.6), there exists an ftq-combi K on Z whose vertex set V K is exactly W. Let R = (R 0 , R 1 , . . . , R q ) be the sequence of vertices of K occurring in M and ordered from left to right. Note that R is nonempty, since it contains the vertex [n/2] of the left boundary of Z, which is just R 0 (and the vertex [(n/2 + 1)..n] of the right boundary of Z, which is R q ). Consider two possible cases. 
(It should be noted that an idea of our analysis in items II and III below is borrowed from Karpman's work [9].) Case 1 : Assume that the middle line M of Z is covered by edges of K. Then (by the planarity and the construction of ftq-combies) for each p = 1, . . . , q, the pair e p = (R p−1 , R p ) forms an edge of K. Moreover, M separates the tiles of K into two subsets T and T ′ , where the former (latter) consists of the tiles lying in the half of Z below (resp. above) M. The intersection of these halves is just M, and they are M-symmetric to each other. Now for each tile τ ∈ T , take the M-symmetric triangle τ * . Then the set T ′′ of such tiles gives a subdivision of the half of Z above M, and combining T and T ′′ (which have the same set of edges within M), we obtain a symmetric ftq-combi K on Z. Note also that if A ∈ C is a vertex in T ′ , then A is a vertex of T ′′ as well (since the symmetric set A * must be a vertex in T ). Thus, V K is a symmetric w-collection of size s n including C, contradicting the maximality of C. Case 2 : Now assume that for some 1 ≤ p ≤ q, the segment σ of M between the points R p−1 and R p is not an edge of K. Then there is a tile τ of K with vertices A, B, C such that one vertex, A say, is R p−1 , and σ meets the edge connecting B and C at an interior point. Let for definiteness the point B lies above M, and C below M. Our aim is to show that this is not the case. A priori, τ can be one of the following shapes: ∆-tile, ∇-tile, upper semi-lens, or lower semi-lens (defined in Sect. 2.1). For reasons of symmetry, it suffices to consider the cases when τ is either a ∇-tile or an upper semi-lens. In the former case τ is viewed as ∇(C|AB), while the latter case falls into two subcases depending on the location of the vertex A; namely, τ is either U(ABC) or U(CAB). So we have to consider three situations; they are illustrated in the picture (from left to right) and described in items I, II, III below. In order to analyze these situations, we use two auxiliary assertions. I: II: III: is such that each of Π 0 (S) and Π 2 (S) consists of at most one pair, then S and S * are weakly separated. 2)); so the assertion is immediate when some of Π 0 (S) and Π 2 (S) is empty. And if |Π 0 (S)| = |Π 2 (S)| = 1, then |S − S * | = |S * − S| = 2, whence S and S * have the same size and one of them surrounds the other, again yielding the assertion. and from the set S * , then S * is weakly separated from C as well, implying that C ∪ {S, S * } is a symmetric w-collection. I. We first consider the case τ = ∇(C|AB). Then |A| = |B| = m and |C| = m − 1, where m = n/2, and A, B are expressed as A = Ca and B = Cb for some a, b ∈ [n]. Obviously, a < b. Moreover, since A lies on M, while B above M, we have y a = y a • < y b = y b • . Then, by the the concavity condition (2.4)(i), the pair {a, a • } surrounds {b, b • }, and therefore Note that the relations A = Ca, B = Cb and Π 0 (A) = Π 2 (A) = ∅ give ). It follows that C −B * contains the element b • , whereas B * −C contains the elements a, a • surrounding b • (by (3.5)). This together with |B * | = m > |C| implies that B * and C are not weakly separated. Therefore, B * is not in W. It follows that B * ∩ D = {a, a • , c • }. Then C − B * contains b • , c and B * − C contains a, a • , implying that B * and C are not weakly separated since a < b • < a • < c (cf. (3.6)). On the other hand, one can see that Π 0 (B) = {{a, a • }} and Π 2 (B) = {{b, b • }}; therefore, B and B * are weakly separated, by (3.2). 
As in the previous case, we obtain that C ∪ {B, B * } is symmetric and weakly separated, yielding B, B * ∈ C (by the maximality of C), contrary to B * / ∈ W. III. Finally, consider the case τ = U(CAB). Then A = Xa, B = Xb and C = Xc for some X ⊂ [n] and elements c < a < b. Arguing as above, we observe that Then C − B * contains b • , c and B * − C contains a, a • , whence B * and C are not weakly separated, in view of c < a < b • < a • (cf. (3.7)). On the other hand, C ∪{B, B * } is symmetric and weakly separated, yielding B, B * ∈ C. This completes the proof of Theorem 1.2 when n is even. Remark 3. Assertion (1.5) on the purity of symmetric strongly separated collections in 2 [n] with n even is proved in a similar way (and even simpler, using an observation that if A ⊆ [n] is strongly separated from A * , then at least one of Π 0 (A) and Π 2 (A) must be empty). On this way, given a maximal by inclusion symmetric s-collection A ⊂ 2 [n] , we extend it to a maximal s-collection S and take the combi K without lenses (equivalent to a rhombus tiling) with V K = S. The situation when the middle line M is not fully covered by edges of K is again impossible (now an analysis of the only case τ = ∇(C|AB) is sufficient, repeating part I of the above proof). And when M is covered by edges of K, we replace the subcombi of K above M in a due way (as described in Case 1 of the proof), obtaining a symmetric combi without lenses (viz. rhombus tiling) whose vertex set includes A. This gives (1.5) in the even case: (3.8) when n is even, all inclusion-wise maximal symmetric s-collections in 2 [n] have the same cardinality, which is equal to s n . We finish this section with one more assertion that will be used in the next section: (3.9) for n even, if an ftq-combi K on the symmetric zonogon Z(n, 2) has a path P covering the middle line M, then this path contains exactly n/2 edges. Indeed, let R 0 , R 1 , . . . , R q be the sequence of vertices of P , and let e p denote the edge from R p−1 to R p . Then |R p−1 △R p | = 2, and by (3.1), e p is congruent to the vector ǫ ii • = ξ i • − ξ i for some i ∈ [n/2]. The sum of these vectors over M is equal to the difference of R q and R 0 (regarded as vectors), namely, n i=n/2+1 ξ i − n/2 i=1 ξ i . This is just equal to (ǫ ii • : i ∈ [n/2]), whence the result easily follows. Maximal symmetric w-collections: odd case In this section we prove Theorem 1.2 when n is odd, n = 2m + 1. It is convenient to deal with the set of colors in the symmetrized form, using notation For convenience, we will denote the K-involution on sets in [−m..m] − with symbol ♮ (to differ from the K-involution * for [−m..m]). We observe that (4.2) the collection D is ♮-symmetric (i.e., symmetric w.r.t. ♮) and weakly separated. Indeed, C is partitioned into symmetric pairs {A, A * }, where A ∈ C ′ and A * ∈ C ′′ . Then A ∈ D ′ and A * − 0 ∈ D ′′ . One can see that A * − 0 is just A ♮ . Therefore, D is ♮-symmetric. Next, let A, B ∈ D. Obviously, A, B are weakly separated if both are either in D ′ or in D ′′ . So assume that A ∈ D ′ and B ∈ D ′′ . Then A ∈ C ′ and B0 ∈ C ′′ . Since C is a w-collection and |A| ≤ m < |B0| (by (4.1)), either A and B0 are strongly separated, or they are weakly separated and A surrounds B0. In the former case, A and B are strongly separated, while in the latter case, the inequality |A| ≤ |B| ensures that A and B are weakly separated. We call D the contraction of C, and call the operation of getting rid of color 0 as above the contraction operation on C. 
We call E := E ′ ∪ E ′′ the expansion of D using color 0. The following properties are valid: (4.3) E is symmetric and weakly separated; .m] is a symmetric w-collection, D is the contraction of C, and E is the expansion of D using color 0, then E = C. We check these properties and simultaneously finish the proof of the theorem by using the geometric construction from the previous section. More precisely, we extend the contraction D of C to a maximal ♮-symmetric wcollection W in 2 [−m..m] − and take a ♮-symmetric ftq-combi K on the zonogon Z = Z(Ξ) such that V K = W. Let D ′ , D ′′ be defined as above. Then D ′ represents a subset of vertices of K in the lower half Z low of Z (up to M), and D ′′ is the set M-symmetric to D ′ , which lies in the upper half Z up of Z. Let K low and K up be the parts (subcomplexes) of K contained in Z low and Z up , respectively. Then K low ∩ K up gives a directed path P on M consisting of m + 1 vertices and m edges, say, P = R 0 R 1 · · · R m (cf. (3.9)). Moreover, by (3.1), each edge Now consider the larger zonogon Z ′ = Z(Ξ ′ ), where Ξ ′ is obtained by adding to Ξ the vertical vector ξ 0 = (0, y 0 ). Equivalently, Z ′ is formed by splitting Z along M, keeping the part Z low of Z, moving the part Z up by y 0 units in the vertical direction, and filling the gap between Z low and Z up + ξ 0 by the rectangle F congruent to M × ξ 0 . Accordingly, the part K up of K is moved by ξ 0 , thus transforming each vertex A of K up into A0, and we subdivide F into the sequence of rectangles F 1 , . . . , F m , where F p is congruent to e p × ξ 0 . This results in a "pseudo-combi" K ′ , with the natural involution on the vertices which brings each A ∈ V K ′ to its M ′ -symmetric vertex A ′ , where M ′ is the updated middle line M ′ with y M ′ = y M + y 0 /2. Clearly A ′ = A * , and the converse transformation Z ′ → Z returns the ftq-combi K. Finally, we can transform K ′ into a correct, though not symmetric, ftq-combi K ′′ on Z ′ , by subdividing each rectangle F p into four triangles. Namely, using the fact that the edge e p of P has type ii • for some −m ≤ i ≤ −1, the devised triangles are viewed as (using notation from Sect. 2.1) Remark 4. The above method can be applied (in a simpler form) to strongly separated collections for n odd. Namely, for a symmetric s-collection C ⊂ 2 [−m..m] , we form the collections C ′ , C ′′ , D ′ , D ′′ , E ′ , E ′′ as described above. (Note that properties (4.1) and (4.2) easily follow from the fact that Π 2 (A) = ∅ for each A ∈ C ′ , which is provided by the strong separation of C.) Extending the contraction D = D ′ ∪ D ′′ of C to a maximal symmetric s-collection S in 2 [−m..m] − , we take the corresponding symmetric rhombus tiling T on Z with V T = S. Acting as above, we move the "upper" parts of Z and T (lying above M) by the vector ξ 0 and for each vertex R i on M, add the vertical edge u i connecting R i and R i + ξ 0 . The difference with the above construction for ftq-combies is that, instead of replicating the horizontal edges (R i−1 , R i ) lying on M, we now simply split each symmetric rhombus between R i−1 and R i into two triangles and move the upper one (of ∆ type) by ξ 0 . This together with the vertical edges u i−1 and u i produces a symmetric hexagon. As a result, we obtain a "pseudo" rhombus tiling T ′ in which the middle tiles are formed by symmetric hexagons, not rhombi. The transformation T → T ′ is illustrated in the picture where m = 2. 
Note that T ′ can be transformed into a rhombus tiling (which is not symmetric) by subdividing each middle hexagon into three rhombi (by one of two possible ways); such a subdivision is shown by dotted lines in the picture. Note that the subdivision adds m new vertices. Maximal symmetric c-collections: even case In this section we prove Theorem 1.3 when the number n of colors is even, n = 2m. Our method of proof uses a reduction to symmetric weakly separated collections and combies. We will deal with the set of colors given in the symmetrized form, namely, [−m. More precisely, we consider the zonotope Z = Z(Θ) generated by the set Θ of vectors θ i = (t i , 1, φ(t i )), i ∈ [−m..m] − , subject to (2.11). Recall that the second coordinate of a point in R 3 is thought of as the height of this point. From the symmetry conditions on Θ (namely, i • = −i, t i • = −t i and φ(t i • ) = φ(t i )) it follows that: (a) Z is centrally symmetric w.r.t. the point ζ Z that is the half sum of vectors in Θ, called the center of Z (i.e., ζ Z = (0, m, φ(t 1 ) + · · · + φ(t m ))); more precisely, v = (a, b, c Let L be the line segment in Z going through the center ζ Z orthogonal to the plane as in (b). Then L connects the vertices of Z representing the sets (intervals) [−m.. One more object important to us is the section Ω of Z by the horizontal plane H m := {(a, b, c) ∈ R 3 : b = m}; it contains the center ζ Z and axis L and we call it the plate in Z. This Ω divides Z into two halves Z low and Z up , which lie below and above Ω, respectively, and intersect by Ω. The symmetry µ swaps Z low and Z up . In its turn, the axis L divides the plate (disk) Ω into two halves, symmetric to each other by µ, and the boundary of Ω is partitioned into two piece-wise linear paths Ω fr and Ω rear connecting the vertices [−m.. − 1] and [1..m], where the former lies in Z fr , and the latter in Z rear . In fact, we wish to transform the cubillage Q as above into a symmetric cubillage Q ′ keeping the collection C in its spectrum, where we call a cubillage on Z symmetric if it is stable under µ. Then V Q ′ is symmetric, whence V Q ′ = C, and we are done. The task of constructing the desired Q ′ is reduced to handling certain weakly separated collections and ftq-combies. This relies on a method involving weak membranes in fragmentized cubillages developed in [4,Sect. 6]. We now interrupt our description for a while to briefly review the notions and constructions needed to us. Fragmentation and weak membranes. For an arbitrary n, let Q be a cubillage on the zonotope Z ≃ Z(n, 3) generated by vectors θ 1 , . . . , θ n as in (2.11). A cube C in Q may be denoted as (X|T ), where X ⊂ [n] is the lowest vertex, and T the triple of edge colors, i < j < k say, in C. By the fragmentation of Q we mean the complex Q ≡ obtained by cutting Q by the horizontal planes H ℓ := {(a, b, c) ∈ R 3 : b = ℓ} for ℓ = 1, . . . , n − 1. These planes subdivide each cube C = (X|T ) into 3 pieces C ≡ 1 , C ≡ 2 , C ≡ 3 , where C ≡ h = C h is the portion of C between H |X|+h−1 and H |X|+h . So C ≡ 1 is a simplex with the bottom vertex X, C ≡ 3 is a simplex with the top vertex X ∪T , and C ≡ 2 is an octahedron; see the picture (where the objects are slightly slanted). This is a triangle of one of two sorts: either a horizontal triangle (section) S h (C) := C ∩ H |X|+h , h = 1, 2, or a half of the parallelogram forming a face of C, that we conditionally call a "vertical" triangle. 
Note that the triple of vertices of a horizontal facet F are of the form either Xi, Xj, Xk or Y − i, Y − j, Y − k for some X, Y ⊆ [n] and i < j < k, and i, j, k are just the colors of the cube containing F as a section. This implies that (5.2) any horizontal facet F in Q ≡ determines both fragments sharing F , as well as the cube separated by F (among all cubillages whose fragmentation contains F ). Definition. A 2-dimensional subcomplex ("surface") N in Q ≡ is called a weak membrane, or a w-membrane for short, of Q if π ρ bijectively projects N onto Z ′ . When N has only vertical triangles (but no horizontal ones), N is called a strong membrane. Additional explanations: Let N contain a facet F of a fragment C ≡ h of a cube C = (X | T = (i < j < k)). Then: (a) if F is the section S 1 (C) (resp. S 2 (C)), then π ρ (F ) is an upper (resp. lower) semi-lens with edges of types ij, jk, ik; and (b) if F is the lower (upper) half of a facet ("rhombus") of C, then π ρ (F ) is a ∇-tile (resp. ∆-tile) with the same edge colors as those of F . As is explained in [4,Sects. 6.3,7], w-membranes are closely related to ftq-combies: (5.4) for a cubillage Q on Z and a w-membrane N in Q ≡ , the projection π ρ (N) (regarded as a complex) forms an ftq-combi on the zonogon Z ′ , and conversely, for any ftq-combi K on Z ′ , there exist a cubillage Q on Z and a w-membrane N in Q ≡ such that π ρ (N) = K. Particular cases of membranes are the front side Z fr and the rear side Z rear of Z (see Sect. 2.1), regarded as complexes in which each facet ("rhombus") is subdivided (fragmented) into two halves; both membranes are strong. The picture below illustrates the simplest case when the cubillage consists of only one cube C, i.e., n = 3 and Z = C; here there are four w-membranes N 1 ≃ C fr , N 2 , N 3 , N 4 ≃ C rear , and the horizontal facets in N 2 and N 3 are dark. Next we return to the initial cubillage Q with C ⊂ V Q on the symmetric zonotope Z = Z(Θ) with n = 2m colors. In its fragmentation Q ≡ , we distinguish the particular weak membrane N ♦ as follows: where Q Ω is formed by the facets of Q ≡ lying on the plate Ω; Z fr ↑ is the part of the front side Z fr of Z contained in the upper half Z up ; and Z rear ↓ is the part of the rear side Z rear contained in the lower half Z low . This N ♦ is indeed a w-membrane (taking into account that the plate Ω becomes "fully seen" under π ρ , whence the projection π ρ is injective on N ♦ ; the fact that π ρ (N ♦ ) = Z ′ is easy). Also one can see that Z fr ↑ is symmetric to Z rear ↓ . The pieces (disks) Z fr ↑ and Q Ω share the path Ω fr (connecting the vertices [−m.. − 1] and [1. .m] and lying on Z fr ), while Q Ω and Z rear ↓ share the rear boundary path Ω rear of Ω. Then K := π ρ (N ♦ ) is an ftq-combi on Z ′ ≃ Z(2m, 2) in which the part π ρ (Z fr ↑ ) is symmetric to π ρ (Z rear ↓ ), and the image by π ρ of the axis L of Z is just the middle line M of Z ′ . In the rest of the proof we rely on the simple fact analogous to Indeed, one can see that A * is chord separated from C (using the symmetry of C). Then C ∪ {A, A * } is chord separated, and the maximality of C implies A, A * ∈ C. Next we proceed as follows. By reasonings in Sect. 3, the middle line M in K = π ρ (N ♦ ) is covered by edges of K, and combining the part K low of K below M with its M-symmetric complex (K low ) * (where * stands for the symmetry in Z ′ ), we obtain a correct symmetric ftq-combi K ′ on Z ′ . 
(Note that the part π ρ (Z fr ↑ ) of K low is already symmetric to the part π ρ (Z rear ↓ ) of K up ; so K ′ and K coincide within these parts.) Each set A ∈ V K low belongs to the spectrum V Q , and therefore it is chord separated from C. Also the sets A, A * are chord separated (since both belong to the same ftq-combi K ′ , whence they are even weakly separated). Then A, A * ∈ C, by (5.6). It follows that (5.7) the set of vertices of Q lying on the plate Ω, i.e., V Q Ω , is self-symmetric (w.r.t. L) and belongs to C, implying that the vertex set of K is symmetric. Remark 5. Note that a priori (5.7) does not guarantee that the subcomplex Q Ω itself (or the set of triangles it it) is self-symmetric. (The reason is that in a quasicombi, a semi-lens with more than three vertices can be triangulated in several different ways.) If this were so, then the proof could be finished as follows. Take the part Q ≡ ↓ of Q ≡ below Ω (which subdivides Z low ) and replace the part of Q ≡ above Ω by the complex symmetric to Q ≡ ↓ . These two complexes coincide within the plate Ω (due to the symmetry of Q Ω ), and we can conclude that their union gives the correct fragmentation of some symmetric cubillage Q ′ on Z. (Here we rely on the fact, due to (5.2), that any horizontal triangle in Q Ω uniquely determines the cube having this triangle as a section.) Then V Q ′ ⊇ C would imply the result. We argue in a different way, using a trick of perturbing the plate Ω described below. To slightly simplify our description, we identify the w-membrane N ♦ and its image by π ρ , the ftq-combi K = π ρ (N ♦ ). Let Γ = (V, E) be the graph whose vertices are the semi-lenses (viz. horizontal triangles) on Ω and where two semi-lenses are connected by edge in Γ if they share an edge and are of the same type, i.e., both are lower or both are upper ones. Let Φ be a connected component of Γ formed by upper semi-lenses. Considering semi-lenses sharing edges, we can conclude that all semi-lenses in Φ have the same root X ⊂ [−m..m] − , i.e., each vertex occurring in a semi-lens there is of the form Xi with i ∈ [−m..m] − (see Sect. 2.1). Each triangle τ ∈ Φ, having edges of types ij, jk, ik for i < j < k say, is the section S h (C) at level h = 1 of some cube C = C(τ ) in Q; more precisely, C is of the form (X | T = {i, j, k}), i.e., C has the lowest point at the vertex X in Q, and edges of colors i, j, k. Then τ is the horizontal facet of the fragment C ≡ 1 in Q ≡ with the bottom vertex X of height |X| = m − 1 in Z. The union of triangles τ ∈ Φ, denoted by U Φ , forms a (possibly big) upper semi-lens. This is nothing else than an upper semi-lens in the corresponding fine quasi-combi K related to K. In its turn, the union of fragments C ≡ 1 over the cubes C = C(τ ) for τ ∈ Φ forms the convex truncated cone D Φ with the bottom vertex X and the upper (horizontal) side U Φ . Symmetrically, for each connected component Ψ formed by lower semi-lenses in Γ, all semi-lenses have the same "upper" root Y (i.e., their vertices are expressed as Y −i), and each triangle σ ∈ Ψ is the section S 2 (C) at level 2 of some cube C = C(σ) with the top vertex Y (which is of height |Y | = m + 1 in Z). Then the union L Ψ := ∪(σ ∈ Ψ) is a lower semi-lens as well (which is a tile in the fine quasi-combi K). And the union F Ψ of the upper fragments (simplexes) C ≡ 3 (σ) over σ ∈ Ψ forms the truncated cone with the top vertex Y and the lower (horizontal) side L Ψ . 
Note that the fine quasi-combi K related to the ftq-combi K is symmetric (since it is determined by the vertex set V K , which is symmetric, by (5.7)). This implies that the involution µ on Ω sends each upper semi-lens of K to a lower one (and vice versa), which in turn implies that the corresponding lower and upper cones are symmetric to each other: F Ψ = µ(D Φ ). The picture below illustrates such cones when |Φ| = 2. Using the above constructions, we perturb the plate Ω as follows. For each component Φ of upper type in Γ, the interior of the big semi-lens U Φ is replaced by the side surface of the cone D Φ (as though squeezing out a "pit" in place of this semi-lens). Similarly, for each component Ψ of lower type, the interior of L Ψ is replaced by the side surface of the cone F Ψ (making a "peak"). Let Ω denote the resulting surface (which has the same boundary as in Ω but consists of a gathering of pits and peaks, without horizontal pieces at all). Then the above-mentioned correspondence between the lower and upper cones implies that the perturbed Ω is symmetric w.r.t. the axis L. Now take the sub-cubillage Q ′ of Q formed by all cubes whose bottom vertex is of height at least m − 1. Then Ω is exactly the lower boundary of Q ′ . The symmetry of Ω implies that the cubillage Q ′′ symmetric to Q ′ has Ω as the upper boundary. As a result, Q ′ ∪ Q ′′ is a correct symmetric cubillage on Z containing C. This contradicts the assumption that C is maximal and |C| < c 2m , completing the proof of Theorem 1.3 for n even. As a consequence of the above proof, any maximal c-collection C in 2 [−m..m] − is representable as the spectrum V Q of a symmetric cubillage Q on Z(2m, 3) (which is determined by C, due to Theorem 2.1). In particular, this implies that (5.8) the subcomplex (facet structure) Q Ω of Q ≡ on the plate Ω is symmetric. We finish this section with one more observation. As is shown in [4] (and quoted in (5.4)), for any maximal w-collection W ⊆ 2 [n] , there exist a cubillage Q and a w-membrane N in its fragmentation Q ≡ such that V N = W (moreover, one shows there how to construct such Q and N efficiently). In particular, V Q gives a maximal c-collection including W. A symmetric counterpart of such a relation between w-and c-collections is as follows. Theorem 5.1 For n even, any maximal symmetric w-collection W ⊆ 2 [n] can be represented as the spectrum (vertex set) of a symmetric w-membrane N in the fragmentation of some symmetric cubillage Q on Z(n, 3) (and such Q and N can be constructed efficiently). In particular, V Q is a maximal symmetric c-collection including W. Moreover, if K is an arbitrary symmetric ftq-combi with V K = W, then Q and N can be chosen so that K = π ρ (N). Proof For a maximal symmetric w-collection W ⊆ 2 [n] with n even, take a symmetric ftq-combi K with V K = W (which exists and can be constructed as described in Case 1 of the proof of Theorem 1.2 in Sect. 3). This K is the projection by π ρ of a w-membrane N in some cubillage Q ′ on Z = Z(n, 3); see (5.4). The symmetry of K implies that N (regarded as a complex) is stable under the involution µ on Z. This and the fact that the front side Z fr and the rear side Z rear of Z are symmetric to each other (by µ) imply that the part Z ′ of Z between Z fr and N is symmetric to the part Z ′′ between N and Z rear . Now take the sub-complex B of the fragmentation Q ′ ≡ lying in Z ′ . Then µ(B) gives a subdivision of Z ′′ respecting N. 
In view of (5.2), any pair of fragments of B and µ(B) sharing a horizontal facet in N must belong to the same cube. Using this, one can conclude that the union of B and µ(B) constitutes a well-defined fragmentation of a symmetric cubillage Q on Z containing N, whence the theorem follows. In what follows, we will use symbol * for the K-involution in 2 [−m..m] , and ♮ for that in 2 [−m..m] − , and similarly for the geometric versions of these involutions. The symmetry of D implies that of C. Also it is not difficult to see that (6.1) C is chord separated; we leave this, as well as verifications of assertions (6.3) and (6.5) below, to the reader as an exercise. As is shown in the proof of Theorem 1.3 for the even case in the previous section, C is extendable to the spectrum V Q of a symmetric cubillage Q on the zonotope Z − := Z(Θ) ≃ Z(2m, 3) (where, as before, Θ is a symmetric set of generators θ i , i ∈ [−m..m] − , as in (2.11),(2.12)). We are going to expand Q to a cubillage Q on the zonotope Z ≃ Z(2m + 1, 3) generated by Θ plus the vector θ 0 which, w.l.o.g., can be defined as the vertical vector (0, 1, 0). Based on properties of cubillages (see [4,Sect. 3.3]), an expansion of a cubillage to a larger one with an additional color is performed by use of a certain strong membrane. More precisely, in our case, by a strong membrane related to color 0, or a 0membrane, we mean a (closed two-dimensional) subcomplex N of Q such that the projection π 0 : R 3 → R 2 parallel to the vector θ 0 is injective on N (regarded as a set of points), and π 0 (N) = π 0 (Z − ). It divides Z − into two halves Z low − (N) and Z up − (N) consisting of the points lying below and above N (including N itself), respectively. Two additional conditions on N are needed to us, namely: such an N is called feasible. The expansion operation on Q using N acts as follows: it splits Q into two closed parts (subcomplexes) Q ′ and Q ′′ lying in Z low − (N) and Z up − (N), respectively; the part Q ′ is preserved, while Q ′′ moves by θ 0 , and the gap between Q ′ and Q ′′ + θ 0 is filled by new cubes, each being the Minkowski sum of a facet ("rhombus") of Q contained in N and the segment [0, θ 0 ]. The resulting complex is called the expansion of Q by color 0 using N. One easily shows that (6.3) if Q is symmetric and N is feasible, then the expansion of Q by color 0 using N is a symmetric cubillage on Z(2m + 1, 3) whose spectrum includes D. In light of (6.3), we will attempt to show that a feasible membrane in an appropriate Q does exist (whence the result will immediately follow). One can check that for a cube C = F + [0, θ 0 ′ ] in P ′ , where F is a facet in N ′ , its symmetric cube C • in R is viewed as (F 0 ′ ) • + [0, θ 0 ′′ ], and therefore, (F 0 ′ ) • belongs to N ′′ , and C • to P ′′ . And similarly for cubes in P ′′ . This and the identity ( imply that (6.6) the pies P ′ and P ′′ are symmetric to each other, the 0 ′ -membrane N ′ is symmetric to the 0 ′′ -membrane N ′′ + θ 0 ′′ , and similarly, N ′′ is symmetric to N ′ + θ 0 ′ . Now consider the collections E ′ and E ′′ as in (6.4). Since the sets in E ′ (in E ′′ ) do not contain colors 0 ′ , 0 ′′ (resp. contain both 0 ′ , 0 ′′ ), the vertices of R representing these sets are located (non-strictly) below (resp. above) P ′ ∪ P ′′ . We further need to use the contraction operations on pies of cubillages, which are converse, in a sense, to the expansion operations on membranes; for details, see [4,Sect. 3.3]. 
The contraction operation applied to P ′ removes from R the cubes lying between the membranes N ′ and N ′ + θ 0 ′ and moves the part of R above N ′ + θ 0 ′ by −θ 0 ′ , so that the images of these membranes become glued together. The contraction operation applied to P ′′ (or to the image of P ′′ after contracting P ′ ) acts similarly. These two operations commute, and applying both (in any order), we obtain a symmetric cubillage Q on Z − and two membranes N ′ and N ′′ in it, which are the images of N ′ , N ′′ and can be simultaneously thought of as 0-membranes since the vectors θ 0 , θ 0 ′ , θ 0 ′′ are close to each other. Moreover, since the vertices in E ′ contain none of colors 0 ′ , 0 ′′ (and therefore lie below the pies P ′ , P ′′ ), whereas those in E ′′ contain both 0 ′ , 0 ′′ (and lie above these pies), we observe, by comparing C ′ with E ′ , and C ′′ with E ′′ , that (6.7) C ′ lies in the region Z low − ( N ′ ) ∩ Z low − ( N ′′ ) (i.e., below both N ′ , N ′′ ), while C ′′ lies in Z up − ( N ′ ) ∩ Z up − ( N ′′ ) (i.e., above both N ′ , N ′′ ). Next, for a closed subset U of points in Z − , define U fr 0 (resp. U rear 0 ) to be the set of points u ∈ U "seen from below" (resp."seen from above"), i.e., such that U has no point of the form u − δθ 0 (resp. u + δθ 0 ) with δ > 0; we call this the 0-front (resp. 0-rear ) side of U. It is not difficult to show (cf. [4,Sect. 3.4]) that any set S of cubes in a cubillage on Z − has a minimal (maximal) element C, in the sense that there is no C ′ ∈ S such that C fr 0 and C ′ rear One can see that H ′ , H ′′ are 0-membranes in Z − , H ′′ is symmetric to H ′ and lies (non-strictly) above H ′ , and in view of (6.7), the collection C ′ lies below H ′ , while C ′′ does above H ′′ . Now the result is provided by the following Lemma 6.1 There exists a symmetric 0-membrane H lying between H ′ and H ′′ . Proof If H ′ = H ′′ , we are done. So assume that the set S of cubes of Q filling the gap ω between H ′ and H ′′ is nonempty. Let C be a maximal cube in S (in the sense explained above). Then the 0-rear side C rear 0 of C is entirely contained in H ′′ (for if a facet F of C rear 0 is not in H ′′ , then there is a cube C ′ with F ⊂ C ′ fr 0 , and this C ′ lies in ω and is greater than C). It follows that replacing in H ′′ the side (disk) C rear 0 by C fr 0 , we obtain a correct 0-membrane H ′′ lying in ω. By the symmetry of ω, the cube C ♮ symmetric to C lies in ω as well, and its 0-front side is entirely contained in H ′ . Let H ′ be the 0-membrane obtained by replacing in H ′ the 0-front side of C ♮ by its 0-rear side. An important fact is that the cubes C and C ♮ are different. This is so because if C has type {i, j, k} (the edge colors in C), then, by symmetry, C ♮ has type {i • , j • , k • }, H''Ŝ o the new 0-membranes H ′ and H ′′ are symmetric to each other, the former lies (non-strictly) below the latter, and the gap between them is smaller than ω. Repeating the procedure, step by step, we eventually obtain a pair for which the gap vanishes, yielding the desired H, as required. This completes the proof of Theorem 1.3 for the case of odd colors. Remark 6. For an integer n > 0, let us say that J ⊆ [0..n] is symmetric if k ∈ J implies n − k ∈ J. Let Λ n (J) be the collection of sets [n] k over k ∈ J. From Theorem 1.3 one can obtain a slightly sharper purity result, namely: Indeed, consider such a collection C and extend it to a maximal symmetric ccollection S in 2 [n] . 
From the equality |S| = c n (by Theorem 1.3) in follows that for each k ∈ [0..n], the number of sets A ∈ S with |A| = k is equal to exactly k(n − k) + 1. Also the symmetry of J implies that for each k ∈ J, the subcollection C ∩ [n] k coincides with S ∩ [n] k , and the result follows. In conclusion of this paper, we give an analog of Theorem 5.1 for n odd. Recall that in this case the maximal size of a symmetric w-collection W in 2 [n] is (n−1)/2 units less each other and their union forms the octahedral fragment of a symmetric cube). This implies the theorem. Remark 7. A similar result is valid for the strong separation, namely: for n odd, any maximal symmetric s-collection S ⊆ 2 [n] can be represented as the spectrum of a symmetric extraordinary strong membrane N in the sparse fragmentation of some symmetric cubillage Q on Z(n, 3). Here the sparse fragmentation concerns transformations of merely self-symmetric cubes C = (X | T ), i.e., such that |X| = m − 1 (where n = 2m + 1) and T = (i < 0 < i • ) for i ∈ [−m.. − 1], under which C is subdivided into two parts (symmetric to each other) sharing the vertical section (rectangle) S(C) spanning the vertices Xi, Xi • , Xi0, X0i • . Accordingly, an extraordinary strong membrane differs from a usual strong membrane by allowing to use as facets such sections S(C) (and the other facets are halves (vertical triangles) of facets of cubes, as usual). To obtain the assertion, we take the symmetric non-standard tiling T with V T = S on Z(n, 2) as described in Remark 4. In the construction of the desired Q and N, each central hexagon H in T should be the projection (by π ρ ) of the union of three facets in the sparse fragmentation of a self-symmetric cube C in Q (of which one is the rectangular section S(C)). We leave details to the reader.
Transfer Learning Capabilities of Untrained Neural Networks for MIMO CSI Recreation Machine learning (ML) applications for wireless communications have gained momentum on the standardization discussions for 5G advanced and beyond. One of the biggest challenges for real world ML deployment is the need for labeled signals and big measurement campaigns. To overcome those problems, we propose the use of untrained neural networks (UNNs) for MIMO channel recreation/estimation and low overhead reporting. The UNNs learn the propagation environment by fitting a few channel measurements and we exploit their learned prior to provide higher channel estimation gains. Moreover, we present a UNN for simultaneous channel recreation for multiple users, or multiple user equipment (UE) positions, in which we have a trade-off between the estimated channel gain and the number of parameters. Our results show that transfer learning techniques are effective in accessing the learned prior on the environment structure as they provide higher channel gain for neighbouring users. Moreover, we indicate how the under-parameterization of UNNs can further enable low-overhead channel state information (CSI) reporting. I. INTRODUCTION Standardization discussions for the next generation of wireless communications have started and artificial intelligence and machine learning (AI/ML) solutions are being considered, especially for next generation radio access network (NG-RAN) and air interfaces, i.e., 3GPP release 18 workshop [1] may lead to a study item on AI/ML applications for the PHY layer. Moreover, reducing the overall power consumption of the system is a target for 5G advanced and 6G. In this context, we assume the use of a digital twin environment [2] where learning of the AI/ML solutions takes place, and can later aid planning, deployment and management of wireless networks. In order to leverage the potential of a digital twin for the environment, full knowledge of the channel state information (CSI) is needed such that most of the real propagation effects can be represented. Here, we propose a ML solution based on untrained neural networks (UNNs) for channel recreation/estimation at the initial operation phase (day zero) where not many CSI measurements are available. For instance, we can design a UNN to learn the environment characteristics based on a single time snapshot channel measurement. However, it is beneficial for the UNN to collect some few channel measurements over time. Our method is an enabler for PHY functionalities and other AI/ML methods which need CSI as labels, e.g., fingerprinting, CSI-prediction, and multi-modal data-aided networks (lidar, radar, environment images, etc.) [3]. Moreover, our proposed ML solution can further leverage full CSI reporting between the user equipment (UE) and the base station (BS). Untrained neural networks (UNNs) were first proposed in [4] where the term untrained relates to the fact that a huge data collection for training is not needed. Instead, the neural network can be fitted to a single data sample. Therefore, the updates of gradient descent move from the training phase to the inference phase. The solution of inverse problems, such as denoising, is feasible because the structure of deep convolutional networks act as a prior to an image-like signal [4]. The work in [5] goes further and proposes a underparameterized deep decoder architecture for UNNs. 
For wireless systems, this means we can fit a UNN to directly estimate the multidimensional wireless channel based on a small measurement campaign, i.e., a few time snapshots, without the need of 'true' labels. The application of UNNs for real inverse problems is quite recent. For instance, the work in [6] proposes to use a convolutional deep decoder UNN to accelerate magnetic resonance imaging which has superior performance than total variation norm minimization. In [7], a MIMO channel estimator using UNN is proposed to overcome pilot contamination. However, the authors evaluate the performance using the LTE-EPA channel model which does not reveal one of the main advantages of UNN channel estimators: prior knowledge on the propagation environment which is stored in the UNN structure. The storage of prior knowledge, the short datacollection phase and the small number of parameters of UNNs have motivated us to further investigate their use for CSI recreation. We refer to channel recreation instead of channel estimation to emphasize that channel estimation gain is not the main goal of our UNN architecture design. The UNN is optimized to estimate the wireless channel with at least the same signal to noise ratio (SNR) as the corresponding channel measurement. Therefore, the UNN learns prior knowledge about the propagation environment. In this work, we propose to take advantage of the learned prior knowledge by means of transfer learning [8]. In addition, we propose to expand the UNN structure to be able to recreate simultaneously the channel of multiple neighboring UEs. Moreover, we indicate how the UNN structure can be exploited for low-overhead full CSI reporting. In contrast to prior art, we evaluate the performance of our UNN channel estimators on geometrically modeled channels which better represent environment specific fading characteristics. In this paper, Section II presents the wireless propagation environment, Section III presents the UNN based single user CSI estimator and the transfer learning approach. After that, Section IV presents the UNN based simultaneous CSI estimator for multiple UEs. Finally, Section V present our simulation results and Section VI concludes our paper. Regarding the notation, a, a, A and A represents, respectively, scalars, column vectors, matrices and D- II. SYSTEM AND CHANNEL MODELS For the problem of channel recreation and transfer learning with UNNs, we consider an urban environ- where N ∈ C N sub ×Nsp×Nant is a zero mean circularly symmetric complex Gaussian noise process. The H C mes are further used to recreate the CSIs. After we derive the optimum weights for a UNN to recreate the CSIs of N sp different locations, we can send the UNN weights together with its structural details (such as random input rule, and number of upsampling operations) from UE to BS. Hence, the BS would be able to recreate the same CSI as the reporting UE. Since UNNs are under-parameterized, we achieve compression of the full-CSI (H C mes ). In this work, we present how to achieve CSI compression by different UNN structures that take advantage of the channel correlation between neighboring UEs. Nonetheless, as we discuss in Section III-D, it is possible to apply other compression schemes on top of the UNN weights to achieve an even higher compression rate. The use of UNN structures for CSI reporting is an alternative to variational auto-encoder (VAE) solutions, which are trained to generate codewords that represent the channels. 
The motivation for our method is to inherently also learn the environment, i.e., to generate a ML based digital twin or mirror world of the environment. LEARNING Due to the claim in [4] that the network structure stores the prior-information, we aim to access this prior by means of transfer learning. In wireless channel estimation, access to prior information can provide a channel estimation gain if the correlation between the channels is considered. Physically, neighboring UEs are favorable candidates as they experience similar propagation effects in an environment. Hence, a UNN is learning the propagation environment while fitting the channel measurements, without any direct knowledge of the environment map. In this section, we present the UNN channel estimator based on the deep decoder architecture [5] for a single UE. We introduce the data pre-processing, the UNN architecture, and how gradient descent is used to update the weights. Moreover, we propose to use transfer learning to take advantage of the stored prior in the UNN weights. A. Data pre-processing for UNN The input signal to a UNN is a random noise seed k 0 is the depth of the random seed which is a hyperparameter, and L is the number of layers. The input ten- defined on the interval [−a, +a] and kept fixed during the iterations to update the gradient descent. The measured channel H C mes is preprocessed as • Each time snapshot within H C mes is normalized by its Frobenius norm, and then multiplied by a scaling factor to ease convergence. • H C mes ∈ C N sub ×Nsp×Nant is rearranged by concatenating Re{H C mes } and Im{H C mes } in the dimension corresponding to the antenna elements. After those operations, H mes ∈ R N sub ×Nsp×2Nant is directly used to compute the cost function. B. UNN architecture for single UE CSI estimation The UNN deep decoder architecture is a composition of L layers which are of three types: (L−2) inner layers, one pre-output layer (L − 1) and one output layer (L). where j = [1, 2, . . . , k l ], mean and variance (var) are computed among the batch samples [10] which for UNN is one. The trainable parameters of the BatchNorm For instance, the output of the first inner layer Z 1 can be written as where [W 1 ] (4) is the 4-mode unfolding of the convolutional filters operating at the antenna elements dimension. The pre-output layer differs from the inner layers because it does not apply upsampling. Hence, it can be written as After that, the output layer is used to adjust the range of values as well as the size expected in the output k L = 2N ant , such that where W L ∈ R 1×1×k l−1 ×2Nant , and TanH is the hyperbolic tangent activation function. Since the upsamplig operations are pre-defined, the trainable param-eters reduce to the convolutional filters W l and the regularization parameters R l of the batch normalization operation. Therefore, K l = {W l , R l } is the set of trainable parameters of the l th layer, and K refers to all the trainable parameters of the L layers. C. Updating the weights of a UNN The UNN is a model P : ant is its total number of parameters used to map the input tensor Z 0 to the output tensor Z L as Z L = P (K, Z 0 ). The loss function of the UNN is the mean squared error (MSE) which is computed as The trainable parameters K are updated by I gradient descent iterations such that K * = arg min K L(K), and H est = P (K * , Z 0 ). (7) Therefore, H est is the channel estimated for a single UE by the UNN P when optimized for a specific H mes . 
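A minimal, self-contained sketch of this single-UE fitting loop is given below. It uses a simplified one-dimensional deep-decoder-style stand-in rather than the exact architecture above, and all sizes, the optimizer, and the iteration count are assumptions made only for the example.

```python
# Sketch only: a simplified deep-decoder-style UNN fitted to one pre-processed
# channel tensor by gradient descent; not the exact architecture of this paper.
import torch
import torch.nn as nn

N_sub, N_ant = 64, 16                               # assumed subcarriers / antennas
H_mes = torch.tanh(torch.randn(1, 2 * N_ant, N_sub))  # stands in for the normalized Re/Im-stacked channel

k, L = 32, 4                                        # channels per layer and number of layers (assumed)
layers = []
for _ in range(L - 2):                              # inner layers: 1x1 conv + upsampling + ReLU + norm
    layers += [nn.Conv1d(k, k, 1), nn.Upsample(scale_factor=2), nn.ReLU(), nn.BatchNorm1d(k)]
layers += [nn.Conv1d(k, k, 1), nn.ReLU(), nn.BatchNorm1d(k)]   # pre-output layer: no upsampling
layers += [nn.Conv1d(k, 2 * N_ant, 1), nn.Tanh()]              # output layer with TanH
unn = nn.Sequential(*layers)

Z0 = 0.1 * torch.rand(1, k, N_sub // 2 ** (L - 2))  # fixed random seed tensor, kept constant
opt = torch.optim.Adam(unn.parameters(), lr=1e-3)
for _ in range(500):                                # the "training" happens at inference time
    opt.zero_grad()
    loss = nn.functional.mse_loss(unn(Z0), H_mes)
    loss.backward()
    opt.step()
H_est = unn(Z0).detach()                            # recreated channel for this single UE
```

The only data the loop ever sees is the single measurement, so the fitted weights play the role of K* in Equation (7).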
Since there is no big data collection phase, UNNs do not have generalization capabilities. D. Transfer Learning for UNNs Since the input tensor Z 0 is fixed, the mapping weights K * are only able to recover the considered H mes during the I gradient iterations. In addition, a change in the construction of the random seed makes the estimation task impossible. For instance, if we generate Z 0 from a different seed number compared to Z 0 , H est = P (K * , Z 0 ) the output of the UNN P is something different from H est . Nonetheless, according to [4], the priors are stored in K * . Therefore, we propose to apply transfer learning in order to take advantage of the prior-knowledge for wireless channel estimation. and H 1,est = P (K * 1,0 , Z 0 ). Here, we assume K * 1,0 is the set of projection tensors which operates sequentially over Z 0 to reconstruct H 1,est . Next, we propose to estimate the channel of UE 2 H 2,est as where the weights are not initialized from random values K 2,0 , but from the weights of its neighbor, UE 1 in this case. Hence, H 2,est = P (K * 2,1 , Z 0 ) and the number of gradient iterations are the same for UE 1 and UE 2. This implies that we are constraining the gradient descent to search for a sub-space of solutions for UE 2 close to the sub-space of UE 1 since Moreover, if H 1,mes and H 2,mes are correlated, the channel gain obtained for H 2,est (K * 2,1 ) = P (K * 2,1 , Z 0 ) is higher than the gain of H 2,est (K * 2,0 ) = P (K * 2,0 , Z 0 ) as K * 1,0 is a prior to K * 2,1 . This proposal is aligned with transfer learning [8] since we derive knowledge for a 1 st task (estimate H 1,est ) and use it to solve a 2 nd task (estimate H 2,est ). Since neighboring wireless channels are likely to be correlated due to their propagation environment, the transfer learning is very suitable. Moreover, even if a channel estimation gain is not achieved, the distance between the weights are reduced which can be further leveraged by compression schemes when reporting the UNN-estimator parameters. USERS In this section, we propose to extend the UNNestimator to simultaneously estimate the channels of M multiple neighboring UEs. Since neighboring UEs tend to have correlated channels, a UNN with three dimensional convolutional kernels can be optimized to find the weights that best fit the channel measurements. Despite the expansion of dimensions, the number of trainable parameters would not explode if there is enough correlation between the UEs considered. This is different from transfer learning, as here we start from a random initialization and output channels for M UEs simultaneously. In the following subsections, we present the construction of the signals at the input and the output, as well as the architecture and its weights optimization. Moreover, the upsampling operation of the inner layers is defined by three one dimensional linear upsampling matrices: A l ∈ R 2 l b×2 l−1 b , C l ∈ R 2 l c×2 l−1 c , and D l ∈ R 2 l d×2 l−1 d . The output of the first inner layer Here, we point out that depending on the design choice of N sp , N sub and M , we can disable the upsampling matrices accordingly. For instance, we should not consider A l if just one time snapshot is available on the channel measurement. Next, the pre-output layer is and the output layer is computed as C. Optimization of the weights Let us define the UNN mapping function as V. 
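The warm-start itself is essentially a one-line change: instead of drawing K 2,0 at random, the optimization for UE 2 starts from a copy of UE 1's optimized weights. The snippet below is a self-contained sketch with toy tensors standing in for two correlated neighbouring channels; the helper, the shapes, and the hyper-parameters are assumptions of the example, not the paper's code.

```python
# Sketch only: warm-starting a neighbouring UE's UNN from UE 1's optimized weights.
import copy
import torch
import torch.nn as nn

def fit_unn(unn, Z0, H_mes, iters=1000, lr=1e-3):
    opt = torch.optim.Adam(unn.parameters(), lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        nn.functional.mse_loss(unn(Z0), H_mes).backward()
        opt.step()
    return unn

Z0 = 0.1 * torch.rand(1, 8, 32)                            # fixed random seed tensor (assumed shape)
unn = nn.Sequential(nn.Conv1d(8, 8, 1), nn.ReLU(), nn.Conv1d(8, 4, 1), nn.Tanh())
H1_mes = torch.tanh(torch.randn(1, 4, 32))                 # toy channel of UE 1
H2_mes = torch.tanh(H1_mes + 0.1 * torch.randn(1, 4, 32))  # correlated toy channel of UE 2

unn_ue1 = fit_unn(copy.deepcopy(unn), Z0, H1_mes)      # task 1: derive K*_{1,0} from random init
unn_ue2 = fit_unn(copy.deepcopy(unn_ue1), Z0, H2_mes)  # task 2: initialize K_{2,1} from K*_{1,0}
H2_est = unn_ue2(Z0).detach()
```

Because both fits use the same fixed seed tensor and the same number of iterations, any gain for UE 2 can be attributed to the transferred prior rather than to additional optimization.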
SIMULATIONS AND RESULTS As presented in Section II, we use IlmProp to simulate a street canyon scenario as in Figure 1 where UE 1 is the closest to the building, and UE 7 is the most distant. Table I presents for single UE CSI estimation at measurement SNR = 20 dB. For reference, we plot in blue the results for each UE with random initialization. As UE 3 has the best estimation gain, we take its weights as starting point for optimizing the weights K * 2,3 and K * 4,3 for UE 2 and UE 4, respectively. After that, we estimate UE 1 initializing from K * 2,3 and UE 5 initializing from K * 4,3 , and so on. This propagation of transfer learning from UE 3 is plotted in red. We can see that the transfer learning approach was successful to find a solution K * 5,4−3 that provides CSI reconstruction within our design requirements. As the channel gain for UE 6 estimated with transfer learning from UE 3 is smaller than the estimation gain of UE 6 starting from random initialization, we take UE 6 as a second basis for transfer learning. The transfer learning results for UEs 5 and 7, and then UE 4 are plotted in pink. Hence, UE 6 is a better transfer learning basis for UEs 5 and 7. However, it does not provide advantage for UE 4. In Figure 5 we plot the Frobenius norm of the difference between the filters W l in each layer when initialized from random values (no TL) and when using initialization from the neighbor's weights. Those results indicates that equation 10 is correct, the derived sub-spaces (K * 2,3 , K * 4,3 , and K * 7,6 ) where constrained to be close to the initialization sub-spaces (K * 3,0 and K * 6,0 ). For reporting the optimal weights, the worst case requires transmission of all parameters per UE, which is only 17.45% of the full channel coefficients. However, if the filters are closer to each other, differential compression schemes can be applied to further reduce the number of parameters to be reported. Based on the previous results, we set two candidate UE-groups for simultaneous CSI recreation. We derive a UNN architecture for estimating simultaneously UEs Figure 4 using a green line. There is about 4 dB difference between the multiple-CSI estimator and the transfer learning estimator. Compared with the multiple-CSI recreation for UEs 2, 3 and 4, we need more parameters (k) and more iterations. Nonetheless, the architecture for multiple-CSI recreation of UEs 5, 6 and 7 needs only 6.77% of the number of coefficients in H C 3 mes . We change the number of iterations I = 150000 for the multiple-CSI recreation of UEs 5, 6 and 7. The result is plotted in light blue on Figure 4. There is a further 1 dB gain, but the convergence is very slow (50k iterations to improve just 1 dB). This indicates that the simultaneous channel estimation for UEs 5, 6 and 7 is more challenging. VI. CONCLUSION In this paper we propose to use transfer learning to take advantage of the prior knowledge stored in the UNN structure. Moreover, we present a UNN architecture for simultaneous CSI estimation for multiple UEs which can further reduce the number of trainable parameters. In addition, we show the compression benefits of UNN structures which can further leverage low-overhead CSI reporting. Our results show that the UNN structure is able to inherently learn the environment characteristics when fitting the measured channels. By transfer learning, we are able to access this prior knowledge and have a higher channel estimation gain. 
Due to the channel correlation between neighboring UEs, we can simultaneously estimate the CSI of multiple UEs with a single UNN that uses three-dimensional convolutional kernels.
2021-11-16T02:16:21.205Z
2021-11-15T00:00:00.000
{ "year": 2021, "sha1": "faa9c01c8bdbe434519cf7cebcf4e8c51a60afd8", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "faa9c01c8bdbe434519cf7cebcf4e8c51a60afd8", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Engineering" ] }
208878712
pes2o/s2orc
v3-fos-license
New Satellite Selection Approach for GPS / BDS / GLONASS Kinematic Precise Point Positioning : With the development of global satellite navigation systems, kinematic Precise Point Positioning (PPP) is facing the increasing computational load of instantaneous (single-epoch) processing due to more and more visible satellites. At this time, the satellite selection algorithm that can e ff ectively reduce the computational complexity causes us to consider its application in GPS / BDS / GLONASS kinematic PPP. Considering the characteristics of di ff erent systems and satellite selection algorithms, we proposed a new satellite selection approach (NSS model) which includes three di ff erent satellite selection algorithms (maximum volume algorithm, fast-rotating partition satellite selection algorithm, and elevation partition satellite selection algorithm). Additionally, the inheritance of ambiguity was also proposed to solve the situation of constantly re-estimated integer ambiguity when the satellite selection algorithm is used in PPP. The results show that the NSS model had a centimeter-level positioning accuracy when the original PPP and optimal dilution of precision (DOP) algorithm solution were compared in kinematic PPP based on the data at five multi-GNSS Experiment (MGEX) stations. It can also reduce a huge amount of computation at the same time. Thus, the application of the NSS model is an e ff ective method to reduce the computational complexity and guarantee the final positioning accuracy in GPS / BDS / GLONASS kinematic PPP. Introduction Precise Point Positioning (PPP) is one of the recent techniques to make positioning efficient and cost effective by reducing labor and equipment costs for surveying [1]. It does not require any additional data from a reference station and can provide a solution with a centimeter to decimeter level of positional accuracy both in static and kinematic modes [2,3]. Thus, PPP can be widely used in many fields such as vehicle navigation, mining survey, crustal motion, and coastal studies. With the development of global navigation satellite system (GNSS), PPP has attracted the attention of many scholars. Wang et al. [4] applied the real-time multi-GNSS orbit and clock corrections of the CLK93 product released by Centre National d'Etudes Spatiales (CNES) for real-time multi-GNSS PPP processing, and its orbit and clock qualities were investigated. Jiao et al. [5] focused on the assessment of PPP in different systems. Yasyukevich et al. [6] analyzed the impact of the solar flares on the GNSS-based navigation. Jacobsen and Andalsvik [7] studied the impact of the disturbances on the network RTK (real-time kinematic positioning) and PPP techniques. Currently, there are four constellations available, which include two full operational systems: Global Positioning System, GPS, and Globalnaya Navigazionnaya Sputnikovaya Sistema, GLONASS, and two systems still under construction: Galileo and BeiDou Navigation Satellite System, BDS. In detail, the GPS constellation comprises 31 operational Medium Earth Orbit (MEO) satellites flying in six orbital planes at an altitude of approximately 20,200 km as of 9 January 2019. GLONASS has recovered its full constellation since October 2011, with 24 satellites in three equally spaced orbital planes at an altitude of about 19,100 km [8]. The fully deployed Galileo system will consist of 30 MEO satellites (24 operational + six active spares) in three orbital planes at an altitude of 23,222 km [9]. 
The BDS will complete the constellation which will eventually have five Geosynchronous Earth Orbit (GEO), three Inclined Geosynchronous Satellite Orbit (IGSO), and 27 MEO satellites by 2020 [10,11]. Now, there are 33 satellites in operation for BDS, more than GPS. Thus, compared with using only single GPS constellation, GNSS users can observe more visible satellites and obtain better reliability at the same time with the formation and improvement of the satellite constellations [12]. However, the improvement in positioning accuracy will become insignificant once the total number of satellites in view reaches a certain level. Tracking such a large number of visible satellites and processing the signals from all those satellites in a real-time kinematic process will put a heavy computational load on the receiver systems, especially for low-cost receivers [13]. In addition, it also causes excessive redundant information, which affects the real-time positioning performance of kinematic PPP services. Therefore, to quickly achieve the position, guarantee reliable positioning accuracy, and reduce the computing load of equipment, an appropriate data selection method is needed in the process of kinematic PPP. As a method that can effectively reduce the computational complexity and guarantee similar positioning accuracy with an original solution, the satellite selection algorithm is often used in Single Point Positioning (SPP), which causes us to consider its application in kinematic PPP. There are several mainstream satellite selection methods: the maximum volume algorithm [14], the optimal dilution of precision (DOP) algorithm [13,[15][16][17], the highest elevation angle satellite selection algorithm [18], and the fast-rotating partition satellite selection algorithm [19]. Recently, there have also been some studies devoted to satellite selection. Gao et al. [20] introduced condition number of the design matrix in the reference satellite selection method to improve the structure of the normal equation. Huang et al. [21] proposed an end-to-end deep learning network for satellite selection based on the PointNet and VoxelNet networks. However, compared with SPP that uses only pseudorange observations, the situation of constantly re-estimated integer ambiguity should be considered when the satellite selection algorithm is applied in kinematic PPP. The content of these mainstream satellite selection methods should also be understood, which is described in the next paragraph. In the maximum volume algorithm, if the volume of a tetrahedron composed of four satellites is maximal, the four satellites are selected. In general, more than four satellites are desirable in order to degrade the effect of the measurement errors and increase the robustness of estimating navigation solutions [18,22]. Additionally, this method is time consuming due to a large number of about 24 visible satellites in multi-GNSS. For the optimal DOP algorithm, it often selects a set of satellites whose position dilution of precision (PDOP) or geometric position dilution of precision (GDOP) is minimal. However, it also has a larger computation load to a GNSS receiver. For instance, when 12 satellites are selected from 24 satellites, 2,704,156 (C 12 24 ) GDOPs or PDOPs need to be computed, which causes a massive and time-consuming computation when other satellite selection methods are compared. The highest elevation angle satellite selection algorithm is the simplest and the least time consuming, but it often has poor geometries. 
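The scale of the brute-force search mentioned above can be reproduced directly; selecting 12 of 24 visible satellites already requires evaluating millions of candidate subsets, and the second count below corresponds to 25 visible satellites.

```python
# Number of DOP evaluations a brute-force (optimal DOP) selection would need.
from math import comb

print(comb(24, 12))  # 2,704,156 subsets when choosing 12 of 24 visible satellites
print(comb(25, 12))  # 5,200,300 subsets when 25 satellites are visible
```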
Compared with the optimal DOP algorithm, the fast-rotating partition satellite selection algorithm based on the equal distribution of sky can greatly reduce the computation time and has a similar performance in positioning. The accuracy of different systems is also different in the process of positioning [23]. Therefore, it may be inappropriate to apply the same algorithm to the satellites of different constellations in the satellite selection solution. In this paper, we proposed a new satellite selection approach (NSS model) to reduce the excessive redundant information in GPS/BDS/GLONASS kinematic PPP with undifferenced and uncombined observations. The problem of constantly re-estimated ambiguity encountered in the application of satellite selection algorithm is considered. In addition, the positioning accuracy of the NSS model in GPS/BDS/GLONASS kinematic PPP is also compared with that of the original PPP and the optimal DOP algorithm PPP solution. The structure of the article is as follows. Firstly, the introduction is described. Secondly, the PPP mathematical model and satellite selection algorithms are introduced. Additionally, these selection algorithms include a new elevation partition satellite selection algorithm which is inspired by the structure of the fast-rotating partition satellite selection algorithm. Thirdly, in consideration of the accuracy and characteristic in the three systems, the NSS model that applies three different algorithms for GPS, BDS, and GLONASS is proposed. It can meet the performance of kinematic positioning and reduce the amount of computation when the NSS model is used in undifferenced and uncombined kinematic PPP. The inheritance of integer ambiguity is also described to solve the problem of constantly re-estimated ambiguity in kinematic PPP. At last, the time complexity of the optimal DOP algorithm and the NSS model is compared to measure the complexity and the amount of computation. In addition, the data from five multi-GNSS Experiment (MGEX) stations on day of year (DOY) 239 in 2017 and the measured data were used to verify the model presented in this study [24]. Through this satellite selection model, the computational load of the GNSS equipment was able to be effectively reduced and the positioning accuracy and reliability of kinematic PPP were maintained at the same time, which provides some reference for the application of satellite selection algorithms in kinematic PPP. Dual-Frequency Multi-GNSS PPP As another alternative to traditional PPP which uses the ionosphere-free combination model, undifferenced and uncombined PPP has attracted a lot of attention in the GNSS field [25][26][27]. This solution is more flexible in processing multi-frequency GNSS observations, avoids noise amplification caused by traditional linear combination, and can extract ionospheric delays [28][29][30]. In the dual-frequency undifferenced and uncombined PPP model, the receiver uncalibrated code delay (UCD) absorbed by both receiver clock offset and Line-of-Sight (LOS) ionospheric delay parameters should be considered. Additionally, the GPS and BDS are both based on the code division multiple access while GLONASS is based on frequency division multiple access. Hence, the inter-frequency bias should be considered. 
The linearized equations of pseudorange and carrier phase observations are written as follows [25]: where superscript r, m, T denote receiver, satellite, and satellite system, respectively; p 1,T r,1 and l 1,T r,1 denote observed minus computed (OMC) values of pseudorange and carrier phase observables on the carrier frequency f 1 , respectively; u r is the unit vector of the component from the receiver to the satellite; 1 is a vector of 2 × m rows and one column, of which each element is one, corresponding to the receiver clock parameter dt T r ; M W is the wet mapping function; x is the vector of the receiver position increments relative to the a priori position; dt is the receiver clock offsets; Z W is the zenith wet delay; I s,T r,1 is the LOS ionospheric delay on the frequency f m,T 1 ; γ T j is the frequency-dependent multiplier factor (γ T j = ( f s,T , which is independent of the satellite pseudorandom noise (PRN) code; d s,T j is the frequency-dependent satellite UCD; d s,T r,j is the frequency-dependent receiver UCD with respect to satellite s; λ s,T j is the carrier wavelength on the frequency band j; N s,T r,j is the integer phase ambiguity; b s,T r,j and b s,T j are the frequency-dependent receiver and satellite uncalibrated phase delays (UPDs); ε s,T r,j and ξ s,T r,j are the sum of measurement noise and multipath error for pseudorange and carrier phase observations; in matrix K, the element for the corresponding p 1,T r,1 is 1, while the element for l 1,T r,1 is −1, corresponding to the ionospheric parameter I T r,1 ; R 1 and R 2 is the matrix corresponding to the ambiguity parameters N T r,1 and N T r,2 , respectively, the element for the corresponding p 1,T r,1 is 0, while for l 1,T r,1 is 1; Q L denotes the stochastic model of OMC observables. Note that all the variables in Equations (1) and (2) are expressed in meters except the ambiguity and UPDs in cycles. Optimal DOP Algorithm This conventional algorithm is a brute-force algorithm that (a) computes GDOPs or PDOPs of all possible solutions by combining all the n satellites selected from all the visible satellites (N) and (b) selects a set of satellites with the minimum GDOP or PDOP. Here, the GDOP, which is used for formal errors (theoretical impact of the observational geometry), is adopted to select satellites in this paper. Assuming only GPS satellites are observed, GDOP can be described simply as follows. x n y n z n −1 However, the huge amount of computation of the optimal DOP algorithm is also prominent when the number of the selected satellites and systems increases. This method guarantees the optimal solution, but it requires large computation time. The computation time is mostly spent on matrix multiplication and inverse operation for all possible combinations [18]. To show this situation in detail, the number of all possible combinations based on daily observations (day of year: 244, in 2017) at the JFNG station is shown in Table 1. It can be seen that if 12 satellites are selected in GPS/BDS/GLONASS PPP, 5,200,300 combinations should be computed and compared, which takes up a lot of computation time. The Maximum Volume Algorithm With the certification that GDOP is inversely proportional to the volume of the tetrahedron formed by unit vectors pointing from user's position to four selected satellites, Kihara and Okada [14] introduced a heuristic algorithm, that is, the maximum volume algorithm. 
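For reference, the brute-force optimal DOP selection that the following heuristics are compared against can be sketched as below; the GDOP is taken as the square root of the trace of (AᵀA)⁻¹ for the design matrix A built from the user-to-satellite unit vectors, and the satellite geometry in the example is randomly generated rather than taken from real observations.

```python
# Sketch of the brute-force optimal DOP selection: evaluate the GDOP of every
# candidate subset and keep the subset with the smallest value.
from itertools import combinations
import numpy as np

def gdop(unit_vectors):
    """GDOP from user-to-satellite unit vectors (one row per satellite)."""
    A = np.hstack([unit_vectors, -np.ones((unit_vectors.shape[0], 1))])
    return float(np.sqrt(np.trace(np.linalg.inv(A.T @ A))))

def optimal_dop_select(unit_vectors, n_select):
    best_idx, best_gdop = None, np.inf
    for idx in combinations(range(unit_vectors.shape[0]), n_select):
        g = gdop(unit_vectors[list(idx)])
        if g < best_gdop:
            best_idx, best_gdop = idx, g
    return best_idx, best_gdop

# toy geometry: 14 random satellite directions above a 10-degree elevation mask
rng = np.random.default_rng(0)
az = rng.uniform(0.0, 2.0 * np.pi, 14)
el = rng.uniform(np.deg2rad(10.0), np.deg2rad(80.0), 14)
u = np.column_stack([np.cos(el) * np.sin(az), np.cos(el) * np.cos(az), np.sin(el)])
print(optimal_dop_select(u, 8))
```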
Compared with the optimal DOP algorithm, the maximum volume algorithm has some advantages in computation and complexity. In this study, we made a few modifications to the maximum volume method. This algorithm can be described in the following four steps. Step 1: According to the elevation mask angle and Space Vehicle (SV) Health, select the healthy satellites. Step 2: Form all combinations of four satellites chosen from the healthy n visible satellites, C 4 n in total, and sort the satellites by azimuth in descending order within each combination. Step 3: Compute the volume V of the tetrahedron formed by the unit vectors → a , → b , → c , and → d pointing from the user's position to the four satellites S1, S2, S3, and S4 of each combination. Step 4: Select the satellites of the combination that forms the largest tetrahedral volume. The Fast-Rotating Partition Satellite Selection Algorithm The fast-rotating partition satellite selection algorithm meets the GDOP requirements of positioning and can greatly reduce the computation time when compared with the traditional optimal DOP satellite selection algorithm. The computation time is almost unaffected by the number of satellites selected and only slightly affected by the number of satellites in the field of view [19]. However, the description of the original algorithm is complex and difficult to implement in programs. In this study, we provide a concise mathematical function model of this algorithm and have made a few adjustments to the steps. The algorithm consists of the following four steps. Step 1: According to the elevation mask angle and SV Health, select the healthy satellites. Step 2: As shown in Figure 1, divide the satellite sky into n partitions according to the number (n) of visible satellites. Step 3: Select the satellite that is closest to the partition midline in each partition, calculate the absolute value of the azimuth difference between that satellite and the partition midline, and take the average of these absolute values over all partitions; this gives one candidate scheme. Step 4: Rotate the midlines of all partitions simultaneously by a certain angle (j) to form a new division of the sky and repeat Steps 1-3. The scheme with the minimum average absolute value is chosen as the final satellite selection scheme. In the corresponding mathematical model, n is the number of visible satellites, i is the partition index, α s i is the azimuth angle of satellite S i , S i is the satellite nearest to the midline in partition i, and j denotes the rotation angle.
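A minimal sketch of this rotating azimuth-partition selection is given below; here the sky is divided into as many sectors as satellites to be selected, which is an assumption of this sketch, and the elevation partition algorithm described next works analogously with elevation angles on [0°, 90°] instead of azimuths.

```python
# Sketch of the fast-rotating azimuth-partition selection: split the sky into
# equal azimuth sectors, pick the satellite nearest each sector midline, and
# keep the rotation of the sector boundaries with the smallest average offset.
import numpy as np

def rotating_partition_select(azimuths_deg, n_select, step_deg=1.0):
    az = np.asarray(azimuths_deg, dtype=float) % 360.0
    width = 360.0 / n_select
    best_idx, best_cost = None, np.inf
    for rot in np.arange(0.0, width, step_deg):             # rotate all midlines together
        midlines = rot + width * (np.arange(n_select) + 0.5)
        picked, offsets = [], []
        for m in midlines:
            d = np.abs((az - m + 180.0) % 360.0 - 180.0)    # circular azimuth distance
            in_sector = d <= width / 2.0
            if not np.any(in_sector):
                continue                                    # empty sector: nothing to pick
            i = int(np.flatnonzero(in_sector)[np.argmin(d[in_sector])])
            picked.append(i)
            offsets.append(d[i])
        cost = np.mean(offsets) if offsets else np.inf
        if cost < best_cost:
            best_idx, best_cost = picked, cost
    return best_idx

print(rotating_partition_select([10, 55, 95, 170, 200, 260, 300, 350], n_select=4))
```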
The Elevation Partition Satellite Selection Algorithm Inspired by the idea of the distribution of satellite elevations and the fast-rotating partition satellite selection algorithm, we proposed a new satellite selection algorithm based on an equal distribution of elevation. The principle of this satellite selection method is similar to that of the fast-rotating partition satellite selection algorithm; it can also greatly reduce the computation time and is almost unaffected by the number of visible satellites selected. The steps of this algorithm and the mathematical function model are given as follows. Step 1: According to the elevation mask angle and SV Health, select the healthy satellites. Step 2: As shown in Figure 2, divide the elevation angle range into n partitions according to the number (n) of visible satellites. Step 3: Select the satellite that is closest to the partition midline in each partition, calculate the absolute value of the elevation angle difference between that satellite and the partition midline, and take the average of these absolute values over all partitions; this gives one candidate scheme. Step 4: Move the midlines of all partitions simultaneously by a certain angle (j) to form a new division of the elevation range and repeat the steps above. The scheme with the minimum average absolute value is chosen as the final satellite selection scheme. In the corresponding mathematical model, n is the number of visible satellites, i is the partition index, h i is the elevation angle of satellite S i , S i is the satellite nearest to the midline in partition i, and j denotes the moving angle.
The Combination of Three Different Satellite Selection Algorithms To reduce the excessive redundant information and guarantee positioning accuracy similar to the original solution in GPS/BDS/GLONASS kinematic PPP, an appropriate satellite selection model suited to the PPP technique should be proposed. However, although the GNSS constellations have similar capabilities, the satellite types, geometric spatial structure, and communication mode of each constellation are not necessarily the same. For instance, GPS and GLONASS are composed of MEO satellites, while BDS includes GEO, MEO, and IGSO satellites. Due to these different characteristics, a single satellite selection algorithm is sometimes not suitable for all GPS/BDS/GLONASS satellites. Thus, to reduce the computational load of the GNSS equipment and obtain appropriate accuracy and reliability in GPS/BDS/GLONASS kinematic PPP, we propose the NSS model, which combines three different satellite selection algorithms: the maximum volume algorithm, the fast-rotating partition satellite selection algorithm, and the elevation partition satellite selection algorithm. In the process of establishing the NSS model, the accuracy and characteristics of the different constellations and satellite selection algorithms were all considered. As the first fully operational and best-known GNSS, GPS has better positioning accuracy and reliability than BDS and GLONASS. Thus, to ensure the positioning accuracy of the NSS model, the maximum volume algorithm is applied to the GPS satellites, because GDOP is inversely proportional to the tetrahedron volume and a better GDOP partly reflects the theoretical impact of the observational geometry. The BDS and GLONASS satellites both have rather good coverage almost worldwide; thus, we used two satellite selection algorithms (based on azimuth and elevation, respectively) for GLONASS and BDS to avoid the situation in which excessive satellites are selected at some epochs when a single algorithm is used. In this study, the fast-rotating partition satellite selection algorithm based on an equal distribution of the sky was adopted for the GLONASS satellites, which greatly reduces the computation time compared with the optimal DOP algorithm. Like the fast-rotating partition satellite selection algorithm, the elevation partition satellite selection algorithm is based on an equal distribution of elevation, and we applied it to the BDS satellites. These three selection algorithms form the NSS model. It is succinct and fast because a huge amount of computation is avoided in the process of satellite selection, and the consideration of the multi-GNSS characteristics provides some assurance for the positioning accuracy. The flowchart of satellite selection is shown in Figure 3.
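The per-constellation dispatch of the NSS model can be sketched as below; the three selector functions stand for the algorithms described above (maximum volume for GPS, elevation partition for BDS, azimuth partition for GLONASS), and the placeholder lambdas are used only so that the dispatch itself is runnable.

```python
# Sketch of the NSS per-system dispatch; the real selectors are the algorithms
# described above, and the lambdas below are placeholders for illustration only.
def nss_select(sats, selectors):
    """sats: list of dicts with keys 'sys', 'prn', 'az', 'el' (degrees)."""
    selected = []
    for system, select in selectors.items():
        group = [s for s in sats if s["sys"] == system]
        if group:
            selected += select(group)
    return selected

selectors = {
    "GPS": lambda g: g,   # placeholder for the maximum volume algorithm
    "BDS": lambda g: g,   # placeholder for the elevation partition algorithm
    "GLO": lambda g: g,   # placeholder for the fast-rotating azimuth partition algorithm
}
sats = [{"sys": "GPS", "prn": 1, "az": 40.0, "el": 55.0},
        {"sys": "BDS", "prn": 8, "az": 210.0, "el": 30.0},
        {"sys": "GLO", "prn": 3, "az": 120.0, "el": 70.0}]
print([s["sys"] + str(s["prn"]) for s in nss_select(sats, selectors)])
```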
The Inheritance of Ambiguity With the satellite selection algorithm, the possible problems of its application in kinematic PPP should also be considered. Due to satellite movement and the application of the satellite selection algorithm, it is common that the satellites selected at different epochs are not all the same: a satellite selected at the previous epoch may not be selected at the next epoch. This leads to a situation in which the positioning process is based on discontinuous observations although the satellite is continuously observed. If the satellite selection is applied in SPP, this causes no problem in the data processing, because SPP only uses pseudorange observations for positioning. However, in the kinematic PPP solution, the positioning is affected by the constantly re-estimated ambiguities because phase observations are used, and the satellite selection algorithm would then degrade the accuracy and reliability of kinematic PPP. Thus, a method in which the useful ambiguity information is inherited in the data processing is proposed. In this method, only the ambiguity of each satellite at the previous and the next epoch is kept. It is a simple storage operation and its computational cost can be ignored compared with the computation saved by the satellite selection algorithm. The main contents are as follows, where X m−1 is the best estimation of the state vector at epoch m − 1, Q m−1 is the best estimation of the variance vector at epoch m − 1, and σ is the standard deviation of the estimated parameter.
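The bookkeeping behind the ambiguity inheritance can be sketched as follows; the variance used for a newly initialized ambiguity and the data layout are assumptions of the sketch, not the values used in the software.

```python
# Sketch of the ambiguity-inheritance bookkeeping: a satellite that re-enters the
# selected set without a cycle slip is initialized from its stored float ambiguity
# and variance of the previous epoch instead of being re-estimated from scratch.
INIT_AMB_VAR = 1.0e4   # assumed large a priori variance for a new ambiguity

def init_ambiguities(selected, stored, cycle_slip):
    """selected: satellite ids at epoch m; stored: {sat: (amb, var)} kept from epoch m-1."""
    state = {}
    for sat in selected:
        if sat in stored and not cycle_slip.get(sat, False):
            state[sat] = stored[sat]             # inherit X_{m-1}, Q_{m-1}
        else:
            state[sat] = (0.0, INIT_AMB_VAR)     # (re)initialize the ambiguity state
    return state

stored = {"G05": (12.3, 0.02), "C08": (-4.1, 0.05)}   # kept from the previous epoch
print(init_ambiguities(["G05", "C08", "R03"], stored, {"C08": True}))
```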
Time Complexity of the NSS Model As our aim was to reduce the computational complexity while guaranteeing positioning accuracy similar to the original solution in GPS/BDS/GLONASS kinematic PPP, the performance of the NSS model and other satellite selection algorithms should be compared. The time complexity is an important indicator of the complexity and the amount of computation of an algorithm. Although the time complexity of algorithms can be evaluated from the running time of the source program, this is not appropriate here because of the huge workload and the influence of the programming itself. In this study, we used asymptotic algorithm analysis to calculate the time complexity of the different satellite selection algorithms. Additionally, considering the final positioning accuracy of kinematic PPP after the application of the satellite selection algorithm, we chose the optimal DOP algorithm for comparison; the positioning accuracy is compared in the next part. The number of basic operations in the two algorithms is assumed to be the same and is represented as n. The number of cycles is the deciding factor for the time complexity. As mentioned earlier, the number of cycles in the optimal DOP algorithm, the maximum volume algorithm, the fast-rotating partition satellite selection algorithm, and the elevation partition satellite selection algorithm is C(N, n1), C(N, 4), 360/N, and 90/N, respectively. Thus, the time complexity of the optimal DOP algorithm and the NSS model can be described by Equation (12), where O(n) is the asymptotic time complexity, N denotes the number of visible satellites, and G denotes the number of GPS satellites. From Equation (12), the NSS satellite selection model has a time complexity similar to that of the maximum volume algorithm and, if the same number of satellites is selected, a better time complexity than that of the optimal DOP algorithm. Positioning Accuracy of the NSS Model In order to test and validate the proposed NSS model, the performance of dual-frequency GPS/BDS/GLONASS undifferenced and uncombined kinematic PPP using this satellite selection model was evaluated based on MGEX daily observations and measured GNSS data.
The MGEX data collected on 27 August 2017 at five MGEX stations (DARW, GMSD, KARR, MRO1, and XMIS) were used to simulate the kinematic environment in this study. The measured GNSS data was collected on 23 January 2018 using a Trimble netR9 receiver in Wuhan University of Technology of Wuhan, Hubei province. The sampling frequency of the static data is 1 Hz and continued for 7 h. To evaluate the positioning accuracy of kinematic PPP with the NSS model, the result of the original PPP without the satellite selection algorithm and kinematic PPP with the optimal DOP algorithm (number of selected satellites: 12) were also used for comparison. For the processing strategy, all the PPP experiments were performed based on the opensource GAMP software which can be freely accessed through the GPS Toolbox webpage (i.e., https://www.ngs.noaa.gov/gps-toolbox/GAMP.htm) [31]. The precise products from Wuhan University were used and we estimated a receiver clock offset for one system and the inter-system bias parameter [32]. The carrier phase ambiguities were kept floating and estimated as constant for each continuous satellite arc. Additionally, the inheritance of ambiguity method was adopted. The weighing between the three systems was 1:1:1 in the software. Due to the worse orbit and clock quality of BDS GEO satellites [33,34], the weighing between the BDS GEO satellite and the other BDS satellite was 1:10 [35]. The related GDOP calculation also agrees with this weighing strategy. The MGEX Data In this study, the daily undifferenced and uncombined observations from five MGEX stations on DOY 239 in 2017 were processed in the kinematic PPP solution to simulate a kinematic environment. The IGS weekly solutions in Solution Independent Exchange (SINEX) format were adopted as the external reference coordinates. As an important indicator of theoretical impact of the observational geometry, the GDOP value of three PPP solutions should be analyzed. For brevity, only the results from the DARW station are presented. Figure 4 is the visible satellites using the original PPP (Left), NSS model (middle), and Optimal DOP algorithm (Right) at the last second. The daily satellite numbers of the NSS model and Optimal DOP algorithm for GPS, GLONASS, and BDS are shown in Figures 5-7, respectively. Due to different satellite selection strategies of the NSS model (Volume, Azimuth, Elevation mask angle) and Optimal DOP algorithm (GODP), the daily selected satellites are not same. Figure 8 shows the time-dependent change of satellite number and GDOP. In Figure 8, the average satellite number of the original PPP model, NSS model, and optimal model is 26, 12, and 12, respectively. If the satellite selection algorithm is not used in the process of PPP, the GDOP less than 1.5 accounts for 100% and the average GDOP value is 1.07. If the proposed NSS model is used to select satellites, the GDOP less than 2.5 accounts for 98.7%, the GDOP less than 2.0 accounts for 88.4%, and the GDOP less than 1.5 accounts for 19.1%. The average GDOP value is 1.72. Moreover, the GDOP of the optimal DOP algorithm is statistically analyzed. The GDOP value less than 2.0 accounts for 99.7%, and the GDOP less than 1.5 accounts for 75.0%. The average GDOP value is 1.43. From the above analysis of the results, it can be seen that the original PPP has the best GDOP average value and the most visible satellites. 
The number of satellites in the NSS model solution is greatly reduced when the average values of satellite number and GDOP are compared with the other solutions. Figure 9 shows the position residuals of GPS/BDS/GLONASS kinematic PPP with undifferenced and uncombined observations in the East (E), North (N), and Up (U) directions. In Figure 9, it can be seen that the position residuals of the NSS model and the optimal DOP algorithm are similar, while the original PPP has a better accuracy at most epochs, as the original PPP solution uses all visible satellites.
To obtain the accurate position accuracy of kinematic PPP, we calculated the RMS of the position residuals based on the results after 10 min of convergence. The RMS of the position residuals of the original PPP was 2.8 cm in the E direction, 1.1 cm in the N direction, and 1.9 cm in the U direction. With the NSS model, the RMS of the position residuals was 2.9 cm in the E direction, 2.1 cm in the N direction, and 6.9 cm in the U direction. For the optimal DOP algorithm, the RMS of the position residuals was 6.3 cm in the E direction, 5.6 cm in the N direction, and 2.3 cm in the U direction. Thus, the RMS of the position error ( √(e² + n² + u²) ) is 3.6, 7.8, and 8.6 cm for the original solution, the NSS model, and the optimal DOP algorithm, respectively. It is clear that GPS/BDS/GLONASS kinematic PPP with the NSS model has centimeter-level accuracy when the simulated kinematic data at the MGEX stations are used. Table 2 shows the average percentages of the number of selected satellites with respect to all visible satellites at the five MGEX stations on DOY 239, 2017: about 45.5% and 46.6% of the visible satellites are selected in the NSS model and the optimal DOP method, respectively. Table 3 lists the RMSs of the position errors at the five MGEX stations. The position accuracy of the original solution, the NSS model, and the optimal DOP algorithm is 3.0, 7.8, and 7.3 cm, respectively. Compared with the original solution, more than half of the satellites are removed in the NSS model, and the NSS model has a position accuracy similar to that of the optimal DOP method.
Table 4 shows the computing time of the daily data processing using the original PPP and the NSS model, respectively. The mean computing time of the original PPP and the NSS model is 107.637 and 58.272 s, respectively, an improvement of about 45.9% in computing time. Thus, the centimeter-level positioning error can be accepted for GPS/BDS/GLONASS kinematic PPP, because the NSS model solution greatly reduces the computational complexity and the excessive redundant information at the same time. In order to prove the reliability of the proposed algorithm in more detail, GPS/BDS/GLONASS undifferenced and uncombined kinematic PPP was also realized with the measured data. In this experiment, the results of GPS PPP with ambiguity resolution in static mode were used as the reference truth. Figure 10 shows the satellite number and GDOP of the different satellite selection solutions. In Figure 10, the average satellite number of the original PPP model, the NSS model, and the optimal model is 26, 12, and 12, respectively. The average percentages of the number of selected satellites with respect to all visible satellites for the NSS model and the optimal model are 46.2% and 46.8%, which means that fewer than half of the visible satellites are selected and the NSS model has the same average satellite number as the optimal model. For GDOP, the average values of the original PPP, the NSS model, and the optimal DOP algorithm are 1.16, 1.74, and 1.47, respectively. The NSS model fully meets the requirement for general kinematic positioning that the GDOP of the position should be less than five [19]. To obtain the accurate position accuracy of kinematic PPP, we also calculated the RMS of the position residuals based on the results after 10 min of convergence in Figure 11. The RMS of the position residuals of the original PPP was 1.6 cm in the E direction, 1.7 cm in the N direction, and 4.2 cm in the U direction. With the NSS model, the RMS of the position residuals was 8.1 cm in the E direction, 2.3 cm in the N direction, and 6.5 cm in the U direction. For the optimal DOP algorithm, the RMS of the position residuals was 4.5 cm in the E direction, 4.3 cm in the N direction, and 7.6 cm in the U direction. Thus, the RMS of the position error is 4.8, 10.6, and 9.8 cm for the original solution, the NSS model, and the optimal DOP algorithm, respectively. Due to the influence of multipath and cycle slips, the positioning accuracy of the measured data is not as good as that of the MGEX data. However, GPS/BDS/GLONASS kinematic PPP with the NSS model has an accuracy similar to that of the optimal DOP algorithm solution when it is used in practical applications.
Additionally, the computing time of the data processing using the original PPP and the NSS model solution is 873.007 and 509.938 s, respectively. Thus, the NSS model is beneficial for the application of PPP in low-cost GNSS receivers because it has better performance regarding computational complexity and redundant information. The above results mean that the computational load can be significantly reduced in undifferenced and uncombined GPS/BDS/GLONASS kinematic PPP with the use of the proposed NSS model, while the positioning accuracy of the NSS model PPP solution is still within the requirement and similar to that of the optimal DOP algorithm solution. This study can provide a reference for the application of satellite selection algorithms in real-time kinematic PPP in low-cost GNSS receivers. Conclusions PPP can be widely used in many fields to provide spatial and temporal information due to its high accuracy and reliability. However, with the development of the GNSS constellations, the increasing number of visible satellites also brings a huge computation load to the GNSS equipment in a kinematic environment. Thus, computational complexity and excessive redundant information should be considered in real-time kinematic PPP.
At this time, the satellite selection algorithm that can The above results mean that the computational load could be significantly reduced in undifferenced and uncombined GPS/BDS/GLONASS kinematic PPP with the use of the proposed NSS model. Additionally, the positioning accuracy of the NSS model PPP solution is still within the requirement and similar to the optimal DOP algorithm solution. This study can provide a reference for the application of satellite selection algorithms in real-time kinematic PPP in low-cost GNSS receivers. Conclusions PPP can be widely used in many fields to provide spatial and temporal information due to its high accuracy and reliability. However, with the development of GNSS constellations, the increasing visible satellites also bring a huge computation load to the GNSS equipment in a kinematic environment. Thus, computational complexity and excessive redundant information should be considered in real-time kinematic PPP. At this time, the satellite selection algorithm that can effectively reduce the computational complexity and guarantee centimeter-level positioning accuracy with original solution causes us to consider its application in kinematic PPP. Based on the characteristics of different satellite selection algorithms and GNSS constellations, we proposed a new satellite selection approach for GPS/BDS/GLONASS kinematic PPP with undifferenced and uncombined observations. The constantly re-estimated ambiguity encountered in the application of satellite selection algorithm was also considered with the method that is the inheritance of ambiguity. This NSS model includes three different algorithms (maximum volume algorithm, fast-rotating partition satellite selection algorithm, and the elevation partition satellite selection algorithm), and these algorithms were applied in GPS, GLONASS, and BDS, respectively. The principle of this classification is mainly decided by the accuracy and spatial distribution of these three constellations. The characteristics of different satellite selection algorithms was also considered. Compared with the optimal DOP algorithm, a huge amount of computation is reduced in the NSS model due to the conciseness of satellite selection algorithms and the structure of system classification. In order to verify the positioning accuracy of the NSS model, the original PPP with all visible satellites and optimal DOP algorithm was compared based on the MGEX data and the measured data. Results show that the number of visible satellites was reduced greatly, and a centimeter-level positioning accuracy can be obtained with the NSS model in GPS/BDS/GLONASS kinematic PPP while the original PPP and optimal DOP algorithm PPP solutions were compared. In general, this study provided a new satellite selection approach to reduce the computational complexity and excessive redundant information in GPS/BDS/GLONASS kinematic PPP. In addition, it proposed a method to solve the situation of constantly re-estimated integer ambiguity in PPP when the satellite selection algorithm is used. In the future, for the reduction of the amount of computation and better real-time performance, the application of the satellite selection algorithm that considers satellite quality for kinematic PPP with fixed integer ambiguity may attract more attention.
Kinetic Study of Application of ZnO as a Photocatalyst in Heterogeneous Medium The photocatalytic degradation of 2,4-dinitrophenol over ZnO was carried out in the presence of light. Control experiments were carried out. The photocatalytic degradation of 2,4-dinitrophenol was observed spectrophotmerically. The various parameters like concentrations of substrate, pH, amounts and band gaps of semiconductor, impact of light intensity, sensitizers and radical quenchers affected the kinetics of the degradation process. A probable mechanism for this process has been proposed. Introduction The photocatalytic reactions are carried out in the presence of light and semiconductor.These reactions have been classified into two categories depending upon the nature of reactants and semiconductors, homogeneous and heterogeneous.Homogeneous photocatalysis and the generation of active species in situ by light is potentially interesting 1 .The attention has been mainly devoted to the chemistry originated when light observed by a photo redox process has grown enormously in recent years 2 and several physical methods are being adopted for the characterization of the catalyst films, particles and for the study of their reactions 3 .Heterogeneous photo catalysis by semiconductor through particulate systems has become an exciting and rapidly growing area of research in the last few years [4][5] .Lan Z. et al. reported the photoinduced hydrogen elimination reaction in phenol via the conical intersections of the dissociative 1 πσ * state with the 1 π π * state and the electronic ground state has been investigated by time-dependent quantum wave-packet calculations and described by the 1 πσ * photochemistry of phenol 6 .Want P. et al. studied quinone methide intermediates and have been investigated in organic photochemistry 7 .Shin C.T. et al. discovered the quantum efficiency of riboflavin in the presence of phenols that decreased and determined a linear relationship between the Hammett's sigma values and the rate of photodecomposition on the photochemistry of riboflavin 8 .Monti S. et al. investigated absorption and induced circular dichroism (ICD) spectra as well as photophysical (fluorescence quantum yield, fluorescence lifetime and triplet-triplet absorption) and photochemical (hydrated electron formation) properties have been measured in aqueous solutions of phenol, p-cresol, 2,6-dimethylphenol 3,5-dimethylphenol, 2,4,6-trimethylphenol, and 3,4,5-trimethylphenol in the presence of 8cyclodextrin and compared to their behaviors in pure aqueous and ethanolic solutions 9 Photocatalytic degradation of 2,4-dinitrophenol was studied by taking 10 mL solution (2.5x10 -4 M) in 100 mL beaker and 200 mg of photocatalyst (ZnO, 60 Mesh powder) was added to it.Then this solution was exposed to a 500W halogen lamp from the topside of a closed beaker.The tungsten halogen lamps develop a larger amount of ultraviolet radiation than the other general service lamps 13 . The progress of photocatalytic reaction was observed by measuring absorbance (ABS) at 360nm using spectrophotometer (spectronic-20D + ) in a glass cuvette with path length 1 cm.Graphs of 2+log ABS versus exposure time were drawn and their slopes were determined.These graphs were plotted according to the linear least squares method 14 . 
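The kinetic analysis described here treats the degradation as first order: 2+log ABS is plotted against exposure time, the slope is obtained by linear least squares, and the rate constant follows as k = 2.303 × slope (as stated in the results below). The Python sketch that follows only illustrates that fitting procedure; the absorbance readings are hypothetical placeholders, not the measured data of Table 1.

```python
import numpy as np

# Hypothetical absorbance readings versus exposure time (min);
# the actual values are those reported in Table 1 of the paper.
t = np.array([0, 10, 20, 30, 40, 50], dtype=float)          # exposure time, min
abs_readings = np.array([0.80, 0.66, 0.55, 0.45, 0.37, 0.31])

y = 2.0 + np.log10(abs_readings)           # ordinate used in the paper
slope, intercept = np.polyfit(t, y, 1)     # linear least-squares fit

k = 2.303 * abs(slope)                     # first-order rate constant, min^-1
print(f"slope = {slope:.4f}, k = {k:.4f} min^-1")
```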
Results and Discussion

The photocatalytic degradation of 2,4-dinitrophenol was observed at 360 nm. The results of a typical run are shown in Table 1 and Figure 1 (Typical run: degradation of 2,4-dinitrophenol). From the graph of 2+log ABS versus exposure time, the slope was determined and, using the expression k = 2.303 × slope, the rate constant was found. The photocatalytic degradation of 2,4-dinitrophenol was found to be of first order.

Effect of 2,4-dinitrophenol concentration. The effect of 2,4-dinitrophenol concentration on the rate of its photocatalytic degradation was observed by taking different concentrations of 2,4-dinitrophenol. The results are tabulated in Table 2. It was observed that as the concentration of 2,4-dinitrophenol was increased, the value of the rate constant (k) increased. Within the lower concentration range the reaction rate is proportional to the concentration, which is a normal feature of a first-order reaction.

Effect of pH. The effect of pH on the rate of photocatalytic degradation of 2,4-dinitrophenol was investigated in the pH range 2 to 8. The results are tabulated in Table 3 (Effect of pH). In the acidic region, the rate constant increased as the pH was raised and reached its highest value at pH 6.00; on further increasing the pH into the alkaline region, the k value decreased. It seems that neutral species play an important role in the degradation process.

Effect of amount of photocatalyst. The effect of the amount of photocatalyst on the rate of photocatalytic degradation of 2,4-dinitrophenol was observed by taking different amounts of the semiconductor while keeping all other factors identical. The results are reported in Table 4 (Effect of amount of photocatalyst). As indicated by the data, the photocatalytic degradation of 2,4-dinitrophenol increases with the amount of semiconductor. Usually an amount of 200 mg covers the whole exposed surface area, but an additional quantity is likely to increase the number of floating photocatalyst particles, hence the observed increase in reaction rate.

Effect of light intensity. The effect of light intensity on the rate of photocatalytic degradation of 2,4-dinitrophenol was observed by varying the distance between the light source and the exposed surface of the semiconductor. The results are given in Table 5 (Effect of intensity of light). As the intensity of light was increased, more photons were available for excitation at the semiconductor surface and, in turn, more electron-hole pairs were generated. The value of k was found to increase with the increase in light intensity, a typical characteristic of a photocatalytic reaction.

Effect of band gap of semiconductor. The excited semiconductor has separated electron-hole pairs that induce the photocatalytic reactions, and hence the band gap energy has an important role to play [15]. The effect of the band gap on the photocatalytic degradation was studied in the presence of different semiconductors with different band gap values. The rate of photocatalytic degradation (k) increased as the band gap increased up to that of ZnO (3.20 eV), but for a larger band gap (ZnS, 3.80 eV) the value of k decreased. Results are shown in Table 6.

Effect of radical quencher. The presence of free radical quenchers had a marginal effect on the photocatalytic degradation of 2,4-dinitrophenol. Alcohols are known to quench free radicals. When the photocatalytic reaction was carried out in the presence of free radical quenchers like methanol, ethanol, etc., the rate of reaction was found to decrease to a marginal level, indicating the active participation of free radicals in the degradation. The results are tabulated in Table 7 (Effect of radical quencher; conditions: zinc oxide = 200 mg; pH = 6.00; [2,4-dinitrophenol] = 7.0×10⁻⁵ M; temperature = 309 K; light intensity = 3.40 mW cm⁻²; λmax = 360 nm). Out of the three quenchers selected, the quenching efficiency of isopropanol was found to be the highest. The free radical quenching efficiency of alcohols follows the order 3° > 2° > 1°, which is what was observed in these reacting systems.

Effect of sensitizer. Certain dyes or complex compounds show a tendency to increase the rate of degradation by sensitization. In the present investigation, three compounds were taken for study, i.e., methyl orange, crystal violet (methyl violet), and K₃[Fe(CN)₆]. The results are tabulated in Table 8 (Effect of sensitizer). The experimental results show that these compounds were unable to sensitize the reaction.

The hole generated is capable of oxidizing the substrate and the conduction-band electron is capable of reducing it. Furthermore, the solution contains species such as •OH, H⁺, •HO₂, H₂O₂ and O₂, which result from the semiconductor-light-water-oxygen interactions; these species are also capable of carrying out redox reactions, and they lead to the generation of the superoxide radical anion •O₂⁻ and the •OH radical. The subscript "ads" refers to species adsorbed on the surface of the semiconductor. It was observed that the products of the photocatalytic degradation of 2,4-dinitrophenol in the presence of ZnO were colourless gases, with virtually no solid residue left in the solution after almost complete degradation. On this basis, a probable reaction mechanism was proposed.

Related literature: Joshi J. D. et al. studied the degradation of o-nitrophenol in the presence of a semiconducting oxide [10]; Ameta S. C. et al. carried out a photoelectrochemical study of picric acid using ZnO as an 'n'-type semiconductor [11]. 2,4-Dinitrophenol is volatile with steam and sublimes when carefully heated; it is a toxic compound, readily absorbed through intact skin, that causes sweating, nausea, vomiting and collapse, and may cause death [12].

Conclusion

The photocatalytic degradation of 2,4-dinitrophenol was observed. The concentration of the substrate, the pH of the solution, the amount of photocatalyst, the light intensity, etc., showed their expected impact on the reaction rate. This method is useful to degrade 2,4-dinitrophenol completely into decomposition products other than solids. Using the kinetic parameters, the rate of reaction can be increased as required.
Estimation of Vertical Jump Height Using Nintendo Wii Remote IR Camera In this article, we proposed to measure the heights of countermovement jumps which are recorded in term of vertical leap by using the Wii Remote infrared camera. According to the physical principles, positions of the movement were detected based on the rules regarding conservation of energy, motion under gravity, and coordinate system. The obtained results were compared with that of the slow-motion measurements. The experiment involved 30 basketball players whose jump results were slightly deviated from the vertical measurement of the coordinate system. Therefore, the results should be calibrated each time the new system is installed. Introduction Sports science has been an integral part of many sports including the aspects of nutritional requirement, sportswear, and playing techniques that make the most capable athlete. Sport science innovations helps to reduce resistance between skin and the water for swimsuits [1], improving technique that helps to achieve higher jumps for basketball [2], volleyball and soccer players. The sport science knowledge is also extended to the field of humanoid development [3]. There are different methods that are used to measure the height of jumping including time taken to calculate the height of a vertical jump by using ground test strips to detect jumping and landing marks. Strain gauge, electrical switch [4][5][6][7][8][9], accelerometer [10][11] and optoelectronic [12] are also applied to above measurements. Training patterns may benefit from the measurements of jumping because athletes can be trained to jump more effectively. To achieve this, collected statistical data of each athlete will be analysed. Height measurements of jumping is related to a countermovement Jump using Wii Remote which is used in many research projects including virtual reality, tracking systems, control systems, and clinical tests [13][14][15][16]. The participants' waist-height is the level of movement which is used in measuring this vertical jump. To conduct experiments, a belt fitted with an infrared lamp at the buckle is attached to the waist position of participants. Then, an infrared camera located inside the Wii remote detected the position of participants once the jump is performed. The detected position was subsequently analysed by relevant physical equations such as the rules of energy conservation vertical movement and coordinate system. All of the analysis were processed by using computer program. System design It is important to eliminate interfering noises during the image processing. Furthermore, any signal or light that may interfere with the measurement system must also be eliminated. Installation stage using the infrared camera in the Wii remote mask should be at the position of 23 degrees of vertical angle [17] as shown in Figure 1. As per the height of the Wii remote above the ground (a.) 1 meter distanced from the Wii remote system to the participants must be maintained (b.) 2 meters with measuring device based on resolution of 1 centimeter. System testing Static testing system, according to the testing system in Figure 1, tests were conducted to evaluate effectiveness of image processing system using infrared cameras. infrared LED were installed at 25 different coordinate points on the backgrounds behind participants as per Figure 2. This experiment required 15 repeated tests. 
The real-time image processing system of the infrared camera was assessed for its efficiency by using the vertical camera unit (a projectile launcher oriented vertically) to launch a cylindrical tube fitted with an infrared LED at its end. The tube weighed 100 grams, with a diameter of 12.7 millimetres and a length of 20 centimetres, as illustrated in Figure 3. The vertical unit is equipped with three levels of adjustable launching power, as per Figure 3.b. Subsequently, a slow-motion camera was used to record video clips in order to determine the positions of the infrared LED at the end of the tube. The results were used as references for the height measurements, which were calibrated against a level indicator with a resolution of 1 centimetre, as demonstrated in Figure 4.

Measurement in accordance with the conservation of energy rules. The conservation of energy rule was used to analyse the height of the jump; the initial velocity was not measured directly, because the quantity recorded was the entire flight time of the jump [18]. Prior to jumping, at the reference position, there is no potential energy, and at the highest point t3 there is no kinetic energy, so that (1/2)·m·vi² = m·g·h, i.e., h = vi²/(2g). According to the linear motion equation, Δt represents the entire flight time of the jump, Δt = t4 − t2, where the timing recorded by the computer software was started at t2 and stopped at t4. Because the take-off and landing velocities are equal in magnitude but opposite in direction, the ascent takes half of the flight time, so the take-off velocity can be written as vi = g·(Δt/2). The obtained vi was then used to calculate the height as h = vi²/(2g) = g·Δt²/8; that is, half of the flight time was used in the calculation.

Measurement by coordinate systems. Height measurement using a coordinate system is a commonly used photographic analysis of an object's size. The technique offers a variety of applications, such as measuring the flower head of a sunflower [19]. Moreover, the technique can be applied to hydrodynamic experiments [20]. The IR camera from the Wii remote can also be applied to physics experimentation [21], such as the calculation of car speed: using a coordinate system that employs the similar-triangles theory to determine the positions of the car's headlights on both sides, the technique can determine how the position of a car changes in relation to the time required for such movements [22]. In this article, the similar-triangles theory was used to determine the position of the jumper. A belt equipped with an infrared light-emitting diode at the buckle position, with a forward current of 100 mA and a wavelength of 940 nm, was worn by the participants in the experiment. Upon jumping from the reference point to the highest point of the jump, the displacement considered as the vertical height of the jump (H) is related to the coordinate position captured by the photosensor array (h). Since the focal length of the Wii remote (a) and the object distance (b) remain constant, the similar triangles give H/h = b/a, so that H is proportional to h. The diagram of the digital image processing measurement using the similar-triangles system is shown in Figure 6. The experiments to determine the vertical change of position were measured in centimetres, while the position on the photosensor array was measured in pixels. This was achieved by arranging the infrared LED at different, adjustable vertical positions; the height of the arrangement was increased every 10 centimetres before the difference in height was read by the Wii remote system.
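As a minimal illustration of the two estimates described above, the Python sketch below computes the jump height from the total flight time via h = g·Δt²/8 and from the vertical pixel displacement via a linear calibration; the calibration slope and intercept are hypothetical placeholders for the coefficients of the linear fit described in the next section, and the example flight time is likewise invented.

```python
G = 9.81  # gravitational acceleration, m/s^2

def height_from_flight_time(flight_time_s: float) -> float:
    """Jump height (m) from the total flight time: h = g*(dt/2)^2 / 2 = g*dt^2/8."""
    return G * flight_time_s ** 2 / 8.0

def height_from_pixels(delta_pixels: float, slope_cm_per_px: float,
                       intercept_cm: float = 0.0) -> float:
    """Jump height (cm) from the vertical pixel displacement of the IR LED,
    using a linear pixel-to-centimetre calibration."""
    return slope_cm_per_px * delta_pixels + intercept_cm

# Example: a 0.55 s flight time corresponds to roughly a 37 cm jump.
print(f"{height_from_flight_time(0.55) * 100:.1f} cm")

# Hypothetical calibration coefficients for the coordinate-system method.
print(f"{height_from_pixels(delta_pixels=120, slope_cm_per_px=0.31):.1f} cm")
```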
The experiments were repeated 10 times.The obtained results were used to plot a graph as per Fig 7. The horizontal axis represents an averaged value of the vertical change of the height while the vertical axis represents the positions of photosensor array that were related to the linear relationship which has R squared value of 0.9997. The relationship was subsequently used to determine the vertical height that was captured by Wii remote infrared camera. Results and discussion System verification is a significant step that must be completed prior to measuring the height of vertical jumping. The system can be tested against an object of known value such as the positions of infrared LED which changed at different scale in both static and dynamic formats. This can be achieved by using the realtime image processing system. The system's precision was determined by measuring the height at the same spot repeatedly. For instance, the static system was measured by using Wii remote that detected the position of infrared LEDs arranged as a 5x5 matrix as per figure 2. The test was done by switching on and off the power source that supply electricity to each infrared LED before reading the measured coordinate position of the infrared LED. The first coordination was 100,100 pixel for the subsequent increase along the x axis was raised to 200 pixel. Subsequently, the coordination on the y axis was raised by 100 pixel until 25 positions were completed. The tests were repeated 15 times which revealed that the highest standard deviation obtained from the measurement was 4.08 pixel as shown in Figure 8. The assessment of real-time image processing system was recorded using veritical infared camera unit againt a cylinder tube that was attached with 5 mm LED which contain the forward current of 100 mA and 940 nm Wavelength at the end of the tube. The tube is 12.7mm in diameter, 20 cm in length and was weighted down to have the total mass of 100 gram as per Figure 3.a. The vertical camera unit was adapted from PASCO's ME-6800 short range projectile launcher that was adjusted to 90degree angle. The power was set at 3 level, short range, medium range and long range as illustrated in Figure 3.b. A slow motion camera was used to record the position of Infrared LED at the end of the tube against the measuring equipment that has upto 1 centimeter precision which was subseequently used as a reference for the measurement. The results of an averaged height on the measuring rod that were tested againt all 3 testing levels were shown as per table 1. Afterward, the unit was used to measure human's Counter movement vertical jump as per Figure 9 which was recorded by a slow motion camera. The results were the difference between a distance measured from the starting point (reference point) to the maximum point which was detected against the infrared LED. At this measured height, the potential energy was at its highest point where as the kinetic energy was the lowest, which was substantiated by the level indicator. The test was done with 30 male participants who were 15 basketball players and another 15 of the general public. Each of the participant repeated the jumps 15 times and the analysis was later conducted by the software-based Conservation of energy, Motion under gravity and Coordinate system as per Figure 10. 
Subsequently, the timing was stopped and the recorded time was used to analyse the height of the vertical jump according to the principles of conservation of energy, motion under gravity, and the coordinate system, with the measured heights displayed simultaneously. The software also displays the details of the measurement system: the name of the participant, the raw vertical coordinate data from the Wii remote, and the height of the countermovement jump obtained from the three principles.

Conclusion

The coordinates obtained by the infrared camera of the Wii remote were used to analyse the vertical countermovement jumps of 30 male basketball players. Software developed for this purpose was used for the analysis, which was divided into three methods: conservation of energy, motion under gravity, and the coordinate system. Results revealed that the coordinate system produced heights closest to the reference values determined by the slow-motion camera. Although the deviation of this measurement was only 2.14 cm, the challenge of using such a system is the equipment setup, which requires careful calibration prior to each use.
Genetic Variation in the TP53 Pathway and Bladder Cancer Risk. A Comprehensive Analysis Introduction Germline variants in TP63 have been consistently associated with several tumors, including bladder cancer, indicating the importance of TP53 pathway in cancer genetic susceptibility. However, variants in other related genes, including TP53 rs1042522 (Arg72Pro), still present controversial results. We carried out an in depth assessment of associations between common germline variants in the TP53 pathway and bladder cancer risk. Material and Methods We investigated 184 tagSNPs from 18 genes in 1,058 cases and 1,138 controls from the Spanish Bladder Cancer/EPICURO Study. Cases were newly-diagnosed bladder cancer patients during 1998–2001. Hospital controls were age-gender, and area matched to cases. SNPs were genotyped in blood DNA using Illumina Golden Gate and TaqMan assays. Cases were subphenotyped according to stage/grade and tumor p53 expression. We applied classical tests to assess individual SNP associations and the Least Absolute Shrinkage and Selection Operator (LASSO)-penalized logistic regression analysis to assess multiple SNPs simultaneously. Results Based on classical analyses, SNPs in BAK1 (1), IGF1R (5), P53AIP1 (1), PMAIP1 (2), SERINPB5 (3), TP63 (3), and TP73 (1) showed significant associations at p-value≤0.05. However, no evidence of association, either with overall risk or with specific disease subtypes, was observed after correction for multiple testing (p-value≥0.8). LASSO selected the SNP rs6567355 in SERPINB5 with 83% of reproducibility. This SNP provided an OR = 1.21, 95%CI 1.05–1.38, p-value = 0.006, and a corrected p-value = 0.5 when controlling for over-estimation. Discussion We found no strong evidence that common variants in the TP53 pathway are associated with bladder cancer susceptibility. Our study suggests that it is unlikely that TP53 Arg72Pro is implicated in the UCB in white Europeans. SERPINB5 and TP63 variation deserve further exploration in extended studies. Introduction In more developed countries, urothelial carcinoma of the bladder (UCB) is the fourth most common cancer in men and the seventeenth in women, the overall male:female ratio being 3:1. This ratio is greater (6:1) in Spain, where the disease presents one of the highest incidence rates among men (51 per 100,000 man-year) [1]. Tobacco smoking and occupational exposure to aromatic amines have been established as the strongest risk factors, among others [2]. While no high-penetrance allele/gene has been identified to date as associated with UCB, there is wellestablished evidence that UCB risk is influenced by common genetic variants [3,4]. Previous studies characterizing UCB are consistent with the existence of, at least, two disease subtypes based on their morphological and genetic features. The first subtype includes low-risk, papillary, non-muscle invasive tumors (NMIT, 60-65% of all UCB) and the second type includes both high-risk NMIT (15-20% of all UCB) and muscle invasive tumors (MIT, 20%-30% of all UCB). Supporting these morphological subtypes, differential genetic pathways were described and were associated with distinct UCB evolution. Somatic mutations in FGFR3 are more frequent in low-risk NMIT, while mutations in TP53 and RB are mainly involved in high-risk NMIT and MIT [5,6]; mutations in PIK3CA and HRAS occur similarly in the two tumor subtypes. 
Interestingly, an exploratory analysis has shown that some germline genetic variants might be differentially associated with the risk of developing distinct UCB subphenotypes defined according to tumor stage (T) and grade (G) [7]. TP53 is the most important human tumor suppressor gene and its implications in UCB have been extensively studied [8]. TP53 is located in17p13, a region that is frequently deleted in human cancers, and it encodes the p53 protein. p53 is a transcription factor controlling cell proliferation, cell cycle, cell survival, and genomic integrity and -therefore -it regulates a large number of genes. Under normal cellular conditions, p53 is rapidly degraded due to the activity of MDM2, a negative p53 regulator that is also a p53 target gene. Upon DNA damage or other stresses, p53 is stabilized and regulates the expression of many genes involved in cell cycle arrest, apoptosis, and DNA repair among others. Somatic alterations in TP53/p53 are one of the most frequent alterations associated with UCB, especially with the more aggressive tumors [9]. Germline TP53 mutations predispose to a wide spectrum of early-onset cancers and cause Li-Fraumeni and related syndromes [10,11]. These mutations are usually single-base substitutions. Over 200 germline single nucleotide polymorphisms (SNPs) in TP53 have been identified at present [12]. SNP rs1042522 (Arg72Pro) has been assessed in association with several cancers, among them UCB. However, the results of these studies are inconsistent [13,14,15,16,17,18]. In contrast, an association between SNP rs710521 in TP63, a TP53 family member, and risk of UCB has been convincingly replicated, pointing to the involvement of TP53 pathway members in UCB susceptibility [4]. The aim of this study was to comprehensively investigate whether germline SNPs in genes involved in the TP53 pathway are associated with risk of UCB. To this end, a total of 184 tagSNPs in 18 key genes were assessed using data from the Spanish Bladder Cancer/EPICURO study. Study Subjects The Spanish Bladder Cancer/EPICURO Study is a casecontrol study carried out in 18 hospitals from five areas in Spain and described elsewhere [2,4,7]. Briefly, cases were patients diagnosed with primary UCB at age 21-80 years between 1998 and 2001. All participants were of self-reported white European ancestry. Diagnostic slides from each patient were reviewed by a panel of expert pathologists to confirm the diagnosis and to ensure that uniform classification criteria were applied based on the 1999 World Health Organization and International Society of Urological Pathology systems [19]. Information on sociodemographics, smoking habits, occupational and environmental exposures, and past medical and familial history of cancer was collected by trained study monitors who conducted a comprehensive computer-assisted personal interview with the study participants during their hospital stay. Of 1,457 eligible cases and 1,465 controls, 1,219 (84%) and 1,271 (87%), were interviewed, respectively. All subjects gave written informed consent to participate in the study, which was approved by the ethics committees of the participating centers. Genotyping A total of 184 tagSNPs from 18 genes participating in the TP53 pathway were selected using the Select Your SNPs (SYSNPs) program [20]. SYSNP used information from dbSNP b25, hg17 and HapMap Release #21. Haploview's Tagger algorithm (v3.32) was applied with default parameter values. 
The tool considers all available information for each SNP and implements algorithms that provide the status of each SNP as a tagSNP, a captured SNP or a non-captured SNP. According to this information tagSNPs were selected. The following groups of genes were considered: 1) TP53 family members (TP53, TP63 and TP73) and 2) genes known to be targets of p53 or regulators of p53 function [BAK1, BAX, BBC3, BIRC5, CDKN1A, FAS, GADD45A, IGF1R, MDM2, PCNA, PMAIP1, SERPINB5, SFN (Stratifin, 14-3-3sigma), TP53AIP1), and 3) c-MYC, a major oncogene involved in a broad range of human cancers that regulates p53 pro-apoptotic activity (See Table S1 in File S1). SNPs were genotyped using Illumina Golden Gate and TaqMan (Applied Biosystems) assays at the Spanish Core Genotyping Facility at the CNIO (CEGEN-CNIO). Genotyping was successful for 1,058 cases and 1,138 controls. We calculated the coverage for each gene using Haploview 4.2 by selecting the SNPs within a gene with a MAF$0.05 from the 1000 genomes project, as reference, and obtained the number of SNPs captured with the SNPs genotyped at r2$0.8 within each gene. Statistical Analysis Departure from Hardy-Weinberg equilibrium was assessed in controls using Pearson's chi-squared test. Missing genotypes were imputed for the multi-SNP model using the BEAGLE 3.0 method [21]. Associations between UCB and the SNPs considered were assessed using two approaches: classical logistic and polytomous regression analyses applied to each SNP individually, and the Least Absolute Shrinkage and Selection Operator (LASSO)penalized logistic regression to assess all SNPs simultaneously. All models were adjusted for age at diagnosis (cases) or interview (controls), gender, region, and smoking status. Smoking status was coded in four categories (never: ,100 cigarettes in their lifetime; occasional: at least one per day for $6 months; former: if they had smoked regularly, but stopped at least 1 year before the study inclusion date; and current: if they had smoked regularly within a year of the inclusion date [2]. With the ''classical'' statistical approaches we assessed SNP main effects for the whole disease and for different subtypes of UCB, as well as SNP*SNP and SNP*smoking interactions. Disease subtypes were defined in two ways. First, according to established criteria based on tumor stage (T) and grade (G) as low-risk NMIT (TaG1 and TaG2), high-risk NMIT (TaG3, T1G2, T1G3, and Tis), and MIT (T2, T3, and T4); and second, according to the tumor expression of p53 determined using DO7 antibody. We applied the histoscore as z~P . We then classified cases as having low or high p53 expression relative to the median histoscore. To assess overall main effects, the four modes of inheritance were considered: co-dominant, dominant, recessive, and additive. The statistical significance of associations was determined using the Likelihood Ratio Test (LRT). We evaluated associations between individual SNPs and subtypes of UCB using polytomous logistic regression. Heterogeneity by disease subtype was tested by a LRT comparing this model to that with the ln(OR) restricted to be equal across subtypes. We also evaluated all two-way interactions between SNPs by a LRT comparing logistic regression models with the two SNPs (additive model) and covariates described above, with and without a single interaction term for multiplicative, per-allele effects. Interactions between each SNP and cigarette use (never vs. ever) were assessed using a similar method. 
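As a side illustration of the Hardy-Weinberg check mentioned above, the following Python sketch computes Pearson's chi-squared test for HWE from genotype counts; the counts are hypothetical and are not taken from the study data.

```python
from scipy.stats import chi2

def hwe_chi_square(n_aa: int, n_ab: int, n_bb: int) -> float:
    """Pearson chi-squared test for Hardy-Weinberg equilibrium (1 df) from
    counts of common homozygotes, heterozygotes and rare homozygotes.
    Returns the p-value."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)           # frequency of the common allele
    q = 1.0 - p
    expected = [n * p * p, 2 * n * p * q, n * q * q]
    observed = [n_aa, n_ab, n_bb]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return chi2.sf(stat, df=1)

# Hypothetical genotype counts in controls for one SNP.
print(f"HWE p-value: {hwe_chi_square(560, 475, 103):.3f}")
```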
Multiple testing was accounted for by applying a permutation test with 1,000 replicates. We applied Quanto (http://hydra.usc.edu/gxe/) to assess statistical power given the available sample size. We also assessed combined SNP effects using LASSO. The method has been described in detail in [22]. Briefly, the log-likelihood function applied in classical logistic regression,

l(b) = Σ_{i=1..n} [ y_i log(π_i) + (1 − y_i) log(1 − π_i) ],    (1)

where n is the number of observations, is reconstructed incorporating a penalty so that

l_λ(b) = l(b) − λ Σ_{j=1..p} |b_j|,    (2)

where p is the number of SNPs and λ is the LASSO penalty. The Newton-Raphson algorithm is applied to equation (2) to estimate the b's in an iterative way. The LASSO method is based on the idea of removing irrelevant predictor variables (b = 0) via the penalty parameter, thereby selecting only the most relevant SNPs as the subset of markers most associated with the disease. The application of the penalty parameter also avoids overfitting due to both high dimensionality and collinearity between covariates. We only considered the additive genetic mode of inheritance. This technique gives biased estimators in order to reduce their variance; because of this, the package implemented in R does not provide p-values for the regression beta coefficients, since standard errors are not meaningful under a biased estimator. We therefore evaluated the results by first applying the LASSO with a 5-fold cross-validation (CV) method [23] to choose the optimal λ as that giving the minimum Akaike information criterion (AIC); we then selected the subset of SNPs that were most informative with that λ. We assessed the robustness of each SNP selected in the optimal model by calculating its reproducibility as the proportion of times the SNP was selected into the multivariate model from 1,000 bootstrap subsamples [24]. To evaluate the association with UCB risk of that subset of SNPs, we tested them by the LRT in a multivariate regression model with all the selected SNPs in comparison to the null model. To correct for the over-estimation due to the pre-selection of the best SNPs, we performed a permutation test with 10,000 replicates. STATA 10 was used to run the classical logistic and multinomial regression analyses. All other statistical analyses were run in R (http://www.R-project.org), using the penalized library [25] for the LASSO-penalized logistic regression.

Results

Table 1 shows the distribution of the study subjects included in the analysis: 1,058 cases and 1,138 controls. Most individuals (87%) were male, and cases were more likely to be current smokers than controls (43% vs. 25%, respectively, p-value < 0.001). No evidence of departure from Hardy-Weinberg equilibrium was observed for any SNP after consideration of multiple testing (unadjusted p-value > 10^-4). Polymorphisms in TP53 were not individually associated with UCB risk, even at a nominal, uncorrected 5% significance level (uncorrected p-value > 0.4). The percentage of reproducibility from the LASSO model using 1,000 bootstrap subsamples was below 50%, indicating poor robustness of these models. Results for the additive and co-dominant models are summarized in Table 2. Using classical logistic regression, SNPs in BAK1 (1), IGF1R (5), P53AIP1 (1), PMAIP1 (2), SERPINB5 (3), TP63 (3), and TP73 (1) showed significant results, at an uncorrected p-value ≤ 0.05, with overall UCB risk (Table 3). However, no evidence of association with risk was observed for any individual SNP after correcting for multiple testing (permutation test p-value ≥ 0.8).
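A rough sketch of the LASSO-penalized logistic regression workflow described above is given below in Python with scikit-learn. Note that the original analysis used the R penalized package with the penalty chosen by AIC and adjusted for covariates, whereas this illustration selects the penalty by cross-validated likelihood and uses entirely hypothetical data and variable names.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
from sklearn.utils import resample

# X: subjects x SNPs matrix of additive genotype codes (0/1/2); y: case/control labels.
# Covariates (age, gender, region, smoking) would be appended in the real analysis.
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(500, 50)).astype(float)   # hypothetical genotypes
y = rng.integers(0, 2, size=500)                        # hypothetical outcomes

# L1-penalized logistic regression, penalty chosen by 5-fold cross-validation.
cv_model = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=20, cv=5).fit(X, y)
selected = np.flatnonzero(cv_model.coef_[0] != 0.0)
print("SNPs selected at the optimal penalty:", selected)

# Bootstrap reproducibility: proportion of resamples in which each SNP is selected.
n_boot, counts = 200, np.zeros(X.shape[1])
for _ in range(n_boot):
    seed = int(rng.integers(2**31 - 1))
    Xb, yb = resample(X, y, random_state=seed)
    m = LogisticRegression(penalty="l1", solver="liblinear", C=cv_model.C_[0]).fit(Xb, yb)
    counts += (m.coef_[0] != 0.0)
print("Reproducibility (%):", np.round(100 * counts / n_boot, 1))
```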
This was also the case for the associations with the established disease subtypes defined according to stage/grade or by p53 expression (Figure 1). Of note, SNPs rs3758483 and rs983751 in FAS were differentially and inversely associated with MIT and high p53 expressing tumors in uncorrected analyses (Tables S2 and S3 in File S1). We also observed no evidence of SNP*SNP interactions or interactions between SNPs and smoking status (data not shown). Discussion We genotyped common variants in genes in the TP53 pathway in 1,058 cases and 1,138 controls of white European ancestry and found no strong evidence of association with risk of UCB overall, or with subtypes of the disease defined by stage and grade or by p53 expression. A key gene in the pathway is TP53, and the most commonly studied variant in this particular gene is Arg72Pro (rs1042522). Its implication in susceptibility to various cancers has been reported in Asian populations, but not in white Europeans. A meta-analysis of 49 cervical cancer studies contributing a total of 7,946 cases and 7,888 controls found that the Arg allele was associated with an increased risk of cervix cancer [14]. However, another metaanalysis of 39 studies (26,041 cases and 29,679 controls) found weak evidence for an association of the same variant with reduced breast cancer risk [18]. Regarding gastric cancer, a combined analysis of 6,859 cases and 9,277 controls from 28 studies found a Table 2. SNPs in TP53 and bladder cancer risk. AA, Aa and aa represent common-homozygotes, heterozygotes and rare-allele homozygotes, respectively. OR, odds ratio; CI, confidence interval; OR(Aa) and OR(aa) were estimated relative to genotype AA. stronger inverse association only among Asians [26]. For lung cancer, a marginally significant increased risk was in a combined analysis of data with 15,647 cases and 14,391 controls from 36 studies, though the association seemed to be also confined to the Asian population [27]. The association between TP53 Arg72Pro and UCB risk has been assessed by two meta-analyses. Overall, no association was observed by Jiang et al. when comparing 1,601 cases and 1,948 controls from 10 studies, although a marginally significant association was seen among Asians (OR = 0.77, 95%CI 0.59-1.00, for ArgArg/ArgPro vs. ProPro) [13]. Discordant results have been recently reported combining data from 14 studies contributing with 2,176 cases and 2,798 controls (OR = 1.268, 95%CI 1.003-1.602, for ArgArg/ArgPro vs. ProPro among the Asian population) [17]. A large number of studies overlap between the two meta-analyses. The lack of information on gene-gene and gene-environment interactions, as well as on the concomitant effect of TP53 somatic mutations may explain the discordant results [28]. The findings from our study confirm the lack of association of Arg72Pro in TP53 with risk of UCB in white Europeans (OR = 0.98, 95%CI 0.77-1.26, for ArgPro vs. ArgArg and OR = 0.91, 95%CI 0.75-1.09, for ProPro vs. ArgArg, pvalue = 0.5 for overall effects) [13,17]. However, we cannot rule out that lack of statistical power may hamper identification of a small effect association: even with its large sample size, the present study sample size could detect an OR$1.3 per-allele for this SNP with 90% statistical power and at a significance level of 5%. Regarding other SNPs in TP53, Lin et al reported an association with rs9895829 and rs1788227 (p-value = 0.003 and 0.027, respectively) in a smaller study with 201 cases and 311 controls in an Asian population [29]. 
We did not genotype these SNPs, though they are in high LD with two SNPs considered here: rs8079544 (LD = 1.0) and rs12951053 (LD = 0.7), respectively. Nonetheless, none of the additional SNPs assessed in TP53 appeared to be associated with UCB risk. The partial coverage of the gene with the assessed SNPs (38%) does not allow us to dismiss the role of TP53 in UCB susceptibility. TP63 is another key member of the studied pathway. One SNP (rs710521) located in this gene has been reported to be associated with risk of UCB by a GWAS (per-allele OR = 1.19, 95%CI 1.12-1.27, p-value = 1.15×10^-7) [30]. This association was convincingly replicated in a combined analysis of data from different studies (allele-specific OR = 1.18, 95%CI 1.12-1.24, p-value = 1.8×10^-10), including ours, for which it was genotyped as part of a separate initiative [4]. Of note, this particular SNP did not show significant results in our study (OR = 0.95, 95%CI 0.83-1.10, p-value = 0.5), a fact that can be explained by the different geographical-location-related exposures of the participating studies, UCB being an environmentally driven disease [31]. The present study assessed 32 SNPs in TP63, providing 24% coverage of the gene. Three of them showed uncorrected significant results in the overall UCB association analysis, with a percentage of reproducibility above 70% from LASSO. These results warrant an extended UCB study on this region. Regarding other SNPs in the selected genes, we did not find any strong evidence of association after correcting for multiple testing (permutation test p-value ≥ 0.8 for overall main effects and p-value ≥ 0.3 for subtype effects). The top (uncorrected) significant SNPs were located in BAK1, IGF1R, P53AIP1, PMAIP1, SERPINB5, and TP73. Common variants in these genes have not previously been reported as associated with UCB risk, though altered expression of BAK1 and IGF1R has been described in bladder tumors. Many complex diseases, such as UCB, are likely due to the combined effects of multiple loci [32], and most traditional association studies assessing main effects for one SNP at a time are underpowered to detect small effects [33]. Therefore, the implication of common genetic variants may be better assessed by a method that both selects a far-reduced set of potentially associated SNPs and tests for association globally. This has been a challenge due to the high dimensionality of, and collinearity between, SNPs. Nevertheless, penalized techniques can deal with these problems and are starting to emerge in genetic association studies. Wu et al. used penalized logistic regression in a genome-wide association study applied to coeliac disease data, and Zhou et al. extended this work to the assessment of association for common and rare variants applied to family cancer registry data [34,35]. In the present study, we applied the LASSO algorithm to account for the combined effects of the SNPs in the TP53 pathway on UCB risk.

Figure 1. Main effect p-values for bladder cancer risk (overall and for each subphenotype) for each tag-SNP under the additive mode of inheritance. A SNP p-value above the red line is considered associated with the phenotype after multiple testing correction by Bonferroni (4.2 for main effects and 3.6 for subtypes). All models are adjusted for age, gender, region and cigarette smoking status. doi:10.1371/journal.pone.0089952.g001

Table 3. Significant SNPs at α = 0.05 in the logistic regression main effect models.
Under the criteria applied, this method selected one SNP (rs6567355) that showed a noncorrected p-value = 0.006 for the additive mode of inheritance with a percentage of reproducibility = 83%. This is a frequent G. A SNP (MAF = 0.29) located in the intron region of SERPINB5. As mentioned before, no evidences of previous association between this SNP and any disease have been reported at present. SERPINB5 is a tumor suppressor (Table S1 in File S1). The expression levels of this gene has been correlated with those of DBC1 (Deleted in bladder cancer 1) in UCB specimens, suggesting its involvement in the urokinase-plasminogen pathway [36]. SERPINB5 would deserve of further exploration in extended studies, as well. A limitation of our study is the incomplete tagging of the selected genes due to the use of an earlier HapMap release to select tag SNPs, prior to the availability of data from the 1000 genomes project. The median coverage of the 18 genes considered in the pathway is, according to the updated HapMap releases, 44%, ranging from 21% to 86%. Therefore, we cannot rule out completely the implication of common variation in these genes in UCB susceptibility. For common SNPs (MAF.0.05), our study is powered (90%) to detect ORs$1.4 at a significance level of 0.05, assuming an additive mode of inheritance. Therefore, the study is not conclusive with OR,1.4. While this study represents one of the largest assessments conducted till present, much larger studies will be required to rule out smaller main effects associated with common variants in the genes of this pathway. This is even more important when subphenotype analyses are considered. We also found no evidence of SNP-SNP interactions (permutation test pvalue$0.3) and SNP-smoking interactions (permutation test pvalue$0.07), although the power was even more limited to detect these. According to the candidate pathway, the studied SNPs were selected as tags; therefore, they were not correlated showing a low LD. This fact, let us overcome a potential limitation affecting the percentage of reproducibility when SNPs are high correlated. Credit should also be given to this study, not only regarding its large sample size, but also for its prospective nature and disease representativeness, for the homogeneous methods applied to collect information and biosamples by the participating centers, for the integration of different type of information (sociodemo-graphics, epidemiological, genetic, clinical and pathological, and molecular), and for the comprehensive and innovative statistical approaches applied to assess UCB susceptibility associated with a highly candidate pathway. In conclusion, using a comprehensive analysis accounting different models and different approaches, we found no strong evidence that common variants in the TP53 pathway are associated with UCB risk. However, specific members of the pathway, TP63 and SERPINB5 deserve of further exploration in extended studies. On the other hand, our study suggests that it is unlikely that TP53 Arg72Pro is implicated in the UCB in white Europeans. While biological sound, candidate pathway analysis have throw limited acknowledge in the genetic susceptibility field of many diseases. 
The reasons for this relatively poor efficiency may include, among others, the still incomplete knowledge of all key components of a given pathway, the introduction of noise by considering many genes/variants that show no association, and the lack of coverage of rare variants not tagged through this approach, in addition to methodological explanations such as impaired statistical power. Scientists should consider whether it is time to move from this approach towards a more comprehensive strategy, such as whole genome/exome sequencing, in dissecting the genetic architecture of complex diseases.

Supporting Information

File S1. Combined Supporting Information file containing: Table S1, Location and function of the selected genes. Table S2, Heterogeneity in single nucleotide polymorphism (SNP) risk estimates among bladder cancer subphenotypes defined according to stage and grade in the Spanish Bladder Cancer Study. Table S3, Heterogeneity in single nucleotide polymorphism (SNP) risk estimates among bladder cancer subphenotypes defined by p53 expression in the Spanish Bladder Cancer Study. (DOCX)
Hard and Soft Tissue Management of a Localized Alveolar Ridge Atrophy with Autogenous Sources and Biomaterials: A Challenging Clinical Case Particularly in the premaxillary area, the stability of hard and soft tissues plays a pivotal role in the success of the rehabilitation from both a functional and aesthetic aspect. The present case report describes the clinical management of a localized alveolar ridge atrophy in the area of the upper right canine associated with a thin gingival biotype with a lack of keratinized tissue. An autogenous bone block harvested from the chin associated with heterologous bone particles was used to replace the missing bone, allowing for a prosthetic driven implant placement. Soft tissues deficiency was corrected by means of a combined epithelialized and subepithelial connective tissue graft. The 3-year clinical and radiological follow-up demonstrated symmetric gingival levels of the upper canines, with physiological peri-implant probing depths and bone loss. Thus, the use of autogenous tissues combined with biomaterials might be considered a reliable technique in case of highly aesthetic demanding cases. Introduction The long-term success of an implant-supported rehabilitation is strictly influenced by both the density and volume of available bone and the quality of soft tissues at the implant site. Particularly in case of ridge atrophies in the premaxillary region, the creation of an optimal bone support to dental implants is mandatory to guarantee an ideal functional and aesthetic outcome. This aim cannot be achieved in a satisfactory way without even considering the quality of the soft tissues surrounding an implant, especially in thin gingival biotypes. Such a sophisticated and multiple-staged approach basically provides for several surgical steps and one or two temporary prostheses before going to the final restoration. Further, an overall treatment time of about 18-24 months might be considered. Alveolar bone reconstruction can be obtained by means of different surgical procedures, including autogenous blocks harvested from intraoral or extraoral donor sites, guided bone regeneration (GBR), ridge splitting or expansion techniques, and distraction osteogenesis [1]. The choice of the technique mainly depends on both size and extension of the defect, on the clinical history of the tooth loss as well as the patient's compliance and expectations. The purpose of the present paper was to present the surgical and prosthetic management of a paradigmatic clinical case characterized by a localized atrophy of the upper jaw subsequent to a failed previous reconstruction due to an impacted canine extraction. An intraoral bone block combined with bone substitutes was used to regenerate the missing bone, in association with soft tissue grafts to manage the soft tissues deficiency. ( Figure 1). The patient was in good general health, with no history of systemic diseases, drug allergies, and smoking habits. Upon clinical examination, the patient was found to have a horizontal bone loss with a minimal vertical component and a coronal ridge width of 2 mm ( Figure 2). With respect to the soft tissues, the site was characterized by a thin gingival biotype with no keratinized mucosa. Further, the patient presented a skeletal class I deep bite with a history of an impacted canine extraction and a bone regeneration procedure carried out two years earlier, followed by infection of the graft itself. 
Radiographic examination, carried out with a Cone Beam Computed Tomography (CBCT) scan, confirmed the clinical evaluation and showed remnants of a non-well-defined graft consisting of few granules of a radiopaque material and a transcortical screw ( Figure 3). The patient feared to be treated with a conventional bridge and was seeking for an implant treatment and an aesthetic solution. Contextually, the patient complained about a dental apicoectomy performed in the lower incisors area, followed by a mucosal fistula occurred one year later. Case Presentation The proposed treatment plan consisted of an orthodontic treatment in order to correct the deep bite and obtain a teeth alignment, followed by a bone and soft tissues reconstruction to place implants in a prosthetically driven position, two provisional crowns, and a gold ceramic crown. The patient refused to undergo the orthodontic treatment; hence only the rehabilitation of the canine area was chosen. The first step consisted in the reconstruction of the bone defect by means of an intraoral corticocancellous block graft harvested from the chin area, taking advantage of the simultaneous treatment of the apicoectomy area in the lower incisors region. Before the surgery, the patient was provided with a full-mouth disinfection session. One day before the appointed surgical session, the patient was instructed to start with an antibiotic therapy consisting of amoxicillin clavulanate (Augmentin5, GlaxoSmithKline S.p.A., Verona, Italy) 1 g twice daily for six days. On the day of the surgery, mepivacaine 2% with epinephrine 1 : 100.000 (Carbocaina, AstraZeneca S.p.A., Milan, Italy) was used to induce local anesthesia, both at the mandible and at the upper left premaxilla. A double layer straight incision was first done below the mucogingival line in between the lower canines, to expose the apexes of the lower incisors as well as the symphysis area. The apexes of the lower incisors were then exposed; a surgical toilette was performed to remove the pathological tissue, followed by the creation of new apical seals obtained using a reinforced zinc-oxide cement (Bosworth5 Super Eba6, Skokie, IL, USA) ( Figure 4). Subsequently, an osteotomy was conducted with rotating instruments in the underlying mandibular symphysis area to harvest a corticocancellous bone block ( Figure 5). The donor site was then filled with a native collagen sponge and a double layer suture was performed, with a 5-0 resorbable suture (Vicryl5, Ethicon Inc., Somerville, NJ, USA) on both the periosteum and the mucosal levels. The recipient site was then prepared with a trapezoidal full thickness flap from the mesial side of the right lateral incisor up to the distal side of the right first bicuspid. The bone was then exposed ( Figure 6) and the cortical plate was perforated with a round bur under copious irrigation with sterile physiological saline solution to promote rapid Case Reports in Dentistry angiogenesis and migration of osteogenic potential cells from the endosteal compartment. The block, previously shaped, was adapted to the recipient site and fixed to the residual ridge with two transcortical screws ( Figure 7). The edges of the block were then smoothened with an oval bur and the graft was covered with a thin layer of anorganic bovine bone granules (Bio-Oss5, Geistlich Pharma AG, Wolhusen, Switzerland) ( Figure 8) and a collagen membrane (Bio-Gide5, Geistlich Pharma AG, Wolhusen, Switzerland) ( Figure 9). 
Flaps were released with sharp dissection to allow tensionfree closure. Horizontal mattress and single stitches were used to seal the surgical wound. The reentry procedure was accomplished after a healing period of four months. The healing proceeded uneventfully and no complications were encountered. After elevation of a mucoperiosteal flap, no signs of graft resorption were observed as from the absence of exposed threads of the transcortical screws (Figure 10). A 4.3 mm diameter per 13 mm length single implant (Camlog Screw-Line, Camlog Biotechnologies, Basel, Switzerland) was therefore placed in a prosthetically guided position (Figure 11). A connective tissue graft (CTG), harvested from the inner part of the palatal mucosa at the surgical site, was placed to increase the thickness of the soft tissues ( Figure 12). After 4 months, being the quality of the soft tissues unsatisfactory (Figure 13), a free deepithelized gingival graft was used to enhance the amount of tissue (Figure 14). A first provisional crown was connected to the implant two months later, but the coronal level of the soft tissues was still aesthetically unacceptable, when compared to the contralateral canine ( Figure 15). Hence, the need to move the gingiva more coronally induced the clinician to detect a technique able to correct the difference in height between the two canines. Being it impossible to perform a coronally repositioned flap, due to the presence of the acrylic crown associated with the absence of enough keratinized tissue, a combination of epithelialized and subepithelial CTG was chosen. A free gingival graft (FGG) was therefore harvested from the premolar-molar region of the palatal vault, prepared so as the apical part was disepithelialized leaving the connective tissue exposed, 4 Case Reports in Dentistry whereas the coronal part corresponding to the portion of the crown to be covered was left epithelized (Figure 16). The recipient site was then prepared with a 64C beaver blade to create a partial thickness envelope around the canine gingival margin. Subsequently, the upper part of the graft consisting of connective tissue was inserted by leaving out the epithelial half-moon coronal portion (Figure 17). A 6-0 nylon suture (Ethilon5, Ethicon Inc., Somerville, NJ, USA) was used to secure the graft in the proper position (Figure 18). A new temporary crown was placed and adapted to the recipient site ( Figure 19). Six months later, impressions were taken and the final gold ceramic crown was placed (Figure 20). At the recall visit three years after the delivery of the final prosthesis, gingival levels of the upper canines appeared almost symmetric and clinically stable, with < 3 mm probing depths and no bleeding on probing circumferentially around ( Figure 21). The radiological examination conducted with a periapical X-ray using the long-cone paralleling technique demonstrated the maintenance of bone levels at the mesial and distal aspect of the implant (Figure 22). Discussion The present case report demonstrated the volume maintenance of hard and soft tissue autografts in a critical anatomical situation characterized by ridge atrophy and thin biotype in an aesthetic zone. Alveolar bone grafting with autogenous mandibular bone has shown excellent survival and success rate of implants and a low tendency to long-term surface resorption, as well as good patients' tolerance with minimal side effects [2]. 
Based on current findings, the surface resorption of ramus and symphyseal blocks grafted in the anterior maxilla is similar; however corticocancellous blocks might be retrieved more effectively and with a higher amount from the chin [3]. Besides these considerations, the need for a second surgical site in the symphyseal region was another factor addressing the clinician to consider the chin as a donor site. To prevent significant graft resorption during the integration phase, anorganic bovine bone granules have been placed over the block and covered with a bioabsorbable collagen membrane. At the reentry surgery, the transcortical screws were entirely surrounded by remodeled bone, meaning that no graft resorption occurred during the healing time. This result favorably comply with those reported by Maiorana et al., indicating that bovine bone particles might prevent excessive bone remodelling due to its osteoconductive properties [4]. The stability of the mesial and distal bone levels has been radiographically documented at the 3-year follow-up examination, demonstrating the short-term reliability of the surgical technique even in a thin biotype patient. Indeed, thin tissue biotype is associated with an increased risk for unfavorable treatment outcomes following surgical interventions, due to increased friability, impaired vascularization, and thinner underlying bone [5]. The present result corroborates the finding that a thin periodontal biotype does not significantly affect the volume integrity of autogenous bone blocks harvested from the mandibular symphysis and transplanted in the premaxilla for implant placement purposes [6]. On the other hand, thin tissue biotype might be associated with a lack of keratinized mucosa and increased risk of mucosal recession together with unsatisfactory aesthetic outcomes [5]. Hence, several techniques have been advocated to increase keratinized tissue surrounding implants, including FGG and CTG. Nevertheless, recent findings pointed out the fact that periodontal plastic surgery procedures around dental implants gave good initial results from the inflammation involved in wound healing, but virtually all cases resulted in some significant recession as healing resolved and the tissue matured [7]. As a matter of fact, in the present report, both CTG and FGG showed a certain tendency to relapse in a short-term period. This might be related to the anatomical differences existing between teeth and implants, including the lack of periodontal ligament and therefore a decreased vascular supply for the graft. Such resorption rate prompted the use of combined epithelialized-subepithelial grafts for augmenting the keratinized mucosa around implants. The rationale is based on an increased vascularization provided by the connective tissue portion, which is sutured inside the partial thickness envelope so as to receive a flow of plasma and ingrowth of capillaries from the surrounding connective tissue. Consequently, as confirmed by the present case report, this technique might be able to decrease the potential partial shrinkage of the graft due to lack of blood supply, reducing the failure and dehiscence rate [8,9]. In conclusion, when autogenous sources and biomaterials are coupled together, their synergistic potential is able to provide a successful result even in more demanding areas such as the aesthetic zone of the premaxilla.
Solar energy potential at the Great St Bernard Pass

The historical site of the Great St Bernard Pass, situated in the Alps at an elevation of 2469 m, records very high electricity consumption. This study aims at evaluating the photovoltaic potential at the pass. The analysis was performed on the envelope surfaces of the building complex, taking into account the topography as well as typical climate data issued from a weather station located on site. It is found that during the months from June to October, each year, 54 MWh of electricity could potentially be generated, which is more than what could be produced by the same system in the climate of Geneva.

Introduction

It is well established that in most cases a photovoltaic (PV) system in Switzerland provides a significant amount of electricity, with a break-even point achieved before the end of the lifespan of the modules [1]. The setting of the Great St Bernard is, however, profoundly different from that of a common installation in the lowland. Located 2469 meters above sea level on the border between Switzerland and Italy, the historic site of the Great St Bernard, centered around its medieval hospice, is situated in a distinctive environment. Studies have already shown the benefits of installing PV systems at this altitude [2]. This paper explores the particular case of the Great St Bernard Pass, with special attention to the climatic, cultural and infrastructural constraints imposed by the nature of the site. First of all, it should be noted that the site as well as the individual buildings are listed in the ISOS inventory (Federal Inventory of Swiss Heritage Sites) as a place of national importance. The energy refurbishment of historical buildings is nowadays a topic of real interest. Overcoming the barriers to the renovation of culturally important buildings represents a great challenge but is nonetheless possible [3]. In the case of the Great St Bernard Pass, particular attention must be paid to the architectural integration of the PV installation in order to limit its visual impact as much as possible. Located just below the limit of eternal snow, the Great St Bernard Pass is subject to a harsh climate. Except for a few months, snow constantly covers the landscape of the pass. This must be analyzed in detail in order to know the conditions under which the solar modules will have to operate. Finally, the electricity consumption of the three main buildings of the pass, namely the hospice, the church and the inn, far from being negligible, is mostly due to lighting and appliances. The high electricity needs result not only from the massive size of the buildings but also from the considerable number of lodgers.

Method

The electricity consumption of the inn, the hospice and the church was calculated by averaging the consumption from 2018 to 2020 issued from the electricity bills provided by the administrator of the site. The solar potential analysis was performed on the envelope surfaces of the building complex, taking into account the topography as well as typical climate data issued from a weather station located on site. The computation was carried out with the software Rhino and its parametric environment Grasshopper using the Ladybug and Honeybee plugins. The 3D models of the buildings and the elevation profiles were issued by Swisstopo [4], made available by the ETH domain. For the analysis of the solar potential, the roofs shown in black in Figure 1 were considered.
It was decided to analyze as many roofs as possible in order to get the most exhaustive look at the solar energy potential. Only the morgue and the restaurant have not been studied. The former is historically too important to allow any form of alteration and the latter has too many dormers on its roof to properly hold a PV system. For the same reason, the middle stretch of the hospice roof was not analyzed. Figure 1 numbers the analyzed buildings, including the old stables (4), the hospice (5), the church (6) and the garage (7). In addition to the analysis of those roofs, the energy that could potentially be produced on three buildings of the Great St Bernard Pass was compared with the energy that would be produced by the same system in the same environment but with Geneva climate conditions. This allows for a comparison between the PV production potential in two typical climates of Switzerland, the Alpine and the lowland climates respectively. In parallel, a study of the snow cover on the Great St Bernard throughout the year gave valuable information on the production curtailment. The typical weather data from the meteorological station located on the pass was obtained with the software Meteonorm [5]. Concerning the region of Geneva, the weather data was retrieved from EnergyPlus™ [6]. Finally, a visibility analysis was carried out in Grasshopper to identify to what extent the buildings are visible from the surrounding roads and hiking trails. This was achieved by using the same 3D models as for the solar potential analysis.

Weather conditions

The weather conditions in the Alps are far from trivial. Snow is present at the pass during more than 280 days a year. This has two consequences for the electricity production of PV systems. First, if the modules are covered with snow, production is greatly reduced or even stopped depending on the thickness of the layer [7]. Secondly, the snow covering the ground in front of the roofs increases the albedo of the surrounding surfaces [8]. Thus, more solar radiation is reflected by the ground and captured by the solar panels. The analysis of the snow cover at the pass demonstrates that the ground is practically devoid of snow in August and September as well as in the second half of July and the first half of October. During the rest of the year, however, there is a layer of snow on the ground which considerably increases the albedo of the surroundings. When the ground is covered with snow, the modules may remain clear. In Figure 2 it can be seen that from June until the end of October it rarely snows at the pass. In addition, the temperature during these same months is generally above 0 °C, thus favoring the melting of the snow. It is therefore reasonable to consider the roofs of the buildings to be free of snow during these months. In practice, it may be necessary to clear the roofs once or twice in October in order to maximize the energy production. During the month of June, along with part of July and October, the modules are free of snow but the ground is covered with it. This means that the diffuse radiation impinging on the solar cells is increased by the albedo of the environment. The amount of snow falling in the period from November to May, in contrast, is too high to allow any significant electricity production, especially on the less pitched roofs. However, the roofs may remain clear of snow for periods of several days. For the calculations, the most pessimistic scenario, which considers all the roofs covered with snow from November to May, is selected.
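The pessimistic snow scenario adopted above can be expressed compactly as a monthly mask on the production estimates, as in the following sketch; the monthly yields used in the example are placeholders, not values from the study.

```python
# Sketch of the pessimistic snow scenario described above: modules are
# treated as fully covered (zero yield) from November to May and snow-free
# from June to October. The monthly figures in the example are placeholders.
SNOW_COVERED = {"Nov", "Dec", "Jan", "Feb", "Mar", "Apr", "May"}

def usable_production(monthly_kwh):
    """Sum monthly PV yields, discarding months lost to snow cover."""
    return sum(kwh for month, kwh in monthly_kwh.items() if month not in SNOW_COVERED)

# Hypothetical monthly yields (kWh) for a single roof, for illustration only.
example = {"Jan": 4000, "Feb": 5000, "Mar": 7000, "Apr": 8000, "May": 9000,
           "Jun": 10000, "Jul": 11000, "Aug": 11000, "Sep": 9000, "Oct": 7000,
           "Nov": 3000, "Dec": 3000}
print(usable_production(example))  # only June to October contributes
```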
Table 1 summarizes the value chosen for technical parameters used for the calculation of the electricity production by a PV system. The diversity of roofs implies a wide range of energy production and production efficiencies. Figure 3 compares the electricity needs of the buildings with the production from their respective roofs. Only the kennel was left out of this plot because it is the least profitable roof considering the cost per unit area of energy, due to its poor orientation and its numerous windows. As can be noted in this Figure, a considerable amount of electricity is produced between the months of March and July. However, due to the snow cover on the solar panels between the beginning of November and the end of May, a significant quantity of solar radiation cannot be harvested. Figure 3. Photovoltaic production from different roofs from June to October in comparison with the annual electricity consumptions of the buildings. The discolored bars represent the electricity production potential lost due to the snow cover. PV production With a 723 m 2 surface equipped with photovoltaic modules distributed on six roofs and by considering a module efficiency of 15.2 %, 54 MWh of electricity would be produced each year. This means that 18.7 % of the annual needs could be covered. This value may be reduced due to the mismatch between instant energy demand and supply. Without batteries, the electricity created by the solar panels must be consumed immediately. Depending on the size of the installation, this would not be a problem. In fact, if the production is low compared to the needs, all energy produced is instantly consumed. Otherwise, the excess production will have to be sold on the Swiss electricity grid. Then, the comparison between the PV potential in the highlands and lowlands of Switzerland highlights some key elements. The potential electricity production of the same system if it were to be installed once in the climate of Geneva and once in the climate of the Great St Bernard Pass is shown in Figure 4. Except during the months of January, November and December, the estimated production at the pass is higher than that of Geneva. This difference in production is especially pronounced during the months from March to June when the energy produced at the Great St Bernard Pass is between 38 % and 16 % higher than in Geneva. It is necessary to note that the energy production at the Great St Bernard does not take into account the effect of snow cover on photovoltaic panels. If we consider that the modules are covered from November to May, the production at the pass would be reduced by 47 %, which results in a much lower production than that of an installation in Geneva where snow is not such an issue. To obtain an energy production at the pass equivalent to that of Geneva, it would be necessary to clear the solar panels from snow from mid-March to June. Provided the panels are free of snow, an installation in the mountains is therefore more energyefficient than in the plains. Be that as it may, it is difficult to plan on clearing the modules after each new snowfall, especially since the volume of fresh snow can be very high at the Great St Bernard Pass. Context As explained previously, the Great St Bernard Pass presents a very specific context compared to the settings in which most solar installations take place. 
The most notable differences are the presence of snow cover in winter and spring and the fact that the buildings are under the protection of the Swiss legislation which minimizes the possible alterations of the site. These do not affect all the roofs to the same extent. The shed and the old stables can totally disappear under the snow during the cold period. This adds a considerable load on the PV system. Moreover, snow will remain on those roofs the longest. On the other hand, with their high steep roofs, the inn, the hospice and the church have less snow accumulating on them. However, they are subject to stronger architectural protection [9]. This means that when selecting roofs to hold a PV system, the potential energy production is not the only criterion that should be taken into account. How snow accumulates on each roof, as well as the visibility of the buildings and the degree of architectural protection are decisive elements to consider. Figure 5 shows the results of the visibility analysis. The least visible buildings are warm in color. On the contrary, the blue buildings are visible on more than half of the surrounding roads and hiking paths. As can be seen in this Figure, the buildings subject to strong architectural protection, namely the inn, the hospice, the church and the morgue, are particularly visible from the surrounding pathways. Therefore, a system fitting perfectly into the landscape is needed on those culturally relevant buildings. Part of the answer could come from colored solar modules [10] or modules shaped like ordinary tiles [11,12,13] which would blend smoothly in the architectural style of the buildings. Architectural integration is not the only field of research where great progress could significantly impact the implementation of PV systems at the pass. Indeed, the clearing of snow on solar panels [14,15] is also an important topic. Following the massive development of solar energy, researchers and promising companies are looking for solutions to face this constraint. Progress in this field of research would be groundbreaking for the Great St Bernard Pass and solar energy in mountain environments in general. Close to half of the production is lost due to the snow cover during the cold season. Having a system that frees the solar modules from the snow would make the regain of this energy possible. Conclusion From this study, it emerges that the Great St Bernard Pass presents a very particular setting for a solar system. Not only is it the subject of strong architectural protection, but it must also face a harsh climate. The pass is covered with snow for more than two-thirds of the year. It is therefore estimated that photovoltaic energy production is only possible during the months from June to October. The rest of the year the solar modules are considered to be covered with snow.
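As a rough cross-check of the figures reported above, the headline numbers (723 m² of modules at 15.2 % efficiency, 54 MWh produced from June to October, 18.7 % coverage of the annual needs, and a 47 % loss to snow cover) can be tied together in a few lines; the implied irradiation below is derived from these stated values rather than taken from the weather file.

```python
# Cross-check of the headline figures, using only numbers stated in the text.
area_m2 = 723.0                 # equipped module area
efficiency = 0.152              # module efficiency
e_jun_oct_kwh = 54_000.0        # production with snow-free modules, June-October
coverage = 0.187                # fraction of annual electricity needs covered
snow_loss = 0.47                # relative production lost to snow cover (Nov-May)

annual_demand_mwh = e_jun_oct_kwh / coverage / 1e3
implied_irradiation = e_jun_oct_kwh / (area_m2 * efficiency)   # kWh/m2, June-Oct
snow_free_annual_mwh = e_jun_oct_kwh / (1.0 - snow_loss) / 1e3

print(f"implied annual demand of the three buildings: {annual_demand_mwh:.0f} MWh")   # ~289 MWh
print(f"implied June-October irradiation on the modules: {implied_irradiation:.0f} kWh/m2")
print(f"hypothetical snow-free annual production: {snow_free_annual_mwh:.0f} MWh")    # ~102 MWh
```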
Impact of thermo-optical effects in coherently-combined multicore fiber amplifiers In this work we analyze the power scaling potential of amplifying multicore fibers (MCFs) used in coherently-combined systems. In particular, in this study we exemplarily consider rod-type MCFs with 2x2 up to 10x10 Ytterbium doped cores arranged in a squared pattern. We will show that, even though increasing the number of active cores will lead to higher output powers, particular attention has to be paid to arising thermal effects, which potentially degrade the performance of these systems. Additionally, we analyze the influence of the core dimensions on the extractable and combinable output power and pulse energy. This includes a detailed study on the thermal effects that influence the propagating transverse modes and, in turn, the amplification efficiency, the combining efficiency, the onset of nonlinear effect, as well as differences in the optical path lengths between the cores. Considering all these effects under rather extreme conditions, the study predicts that average output powers higher than 10 kW from a single 1 m long Ytterbium-doped MCF are feasible and femtosecond pulses with energies higher than 400 mJ can be extracted and efficiently recombined in a filled-aperture scheme. Introduction Fiber lasers have demonstrated an impressive power performance scaling potential over the last decades, which has opened up a wide field of new applications in the industrial and scientific sectors. However, the onset of mode instability (TMI) [1] and the impact of nonlinearities currently restrict the power scalability of these systems. Large Mode Area (LMA) fibers have enabled further power scaling by distributing the intensity over larger core areas [2]. Although increasing the effective mode area in these fibers has been an effective solution over the last decades, the continued scaling of single-mode core dimensions is technologically challenging. Parallelization followed by beam combination enables scaling beyond single emitter limitations. Numerous techniques have been developed with successful demonstrations employing continuous-wave and pulsed, even ultrafast, lasers [3][4][5][6][7][8]. While beam-combining has shown its potential to linearly scale the output power with the number of individual amplifying channels used, the potential of this techniques is limited by practicality. Linear scaling of system performance dictates an equivalent requirement for space, number of components and overall complexity. For this reason, high power demonstrations of beam combining have achieved only about one order of magnitude in scaling compared to single emitters [9]. Significant reduction of the footprint, component count and complexity of beam combined systems can be achieved by integrating the individual amplifying channels into a single fiber, a so-called multicore fiber (MCF). Such fibers have already been successfully implemented in optical communications [10], in high temperature sensing systems [11] or in beam shaping systems [12]. Additionally, recent demonstrations have successfully shown that the amplified output of active MCFs can be efficiently coherently combined, with kW-average power [13] and femtosecond-pulse durations [14,15]. These proof-of-principle experiments showed the potential for active MCFs to be used in next generation coherently combined high-power fiber laser systems. 
In this work, we analyze to which extent the linear power scaling potential of such fibers holds true while paying particular attention to thermal effects that are described in detail in the following sections. As known, quantum defect heating is one of the most significant heat sources in almost all lasers, so in Yb-doped laser systems. In fiber laser systems in particular, due to the elongated geometry of the fibers, the heat generated in the active cores flows predominantly in the radial direction, which induces a radially inhomogeneous temperature profile as described in the literature [16]. To illustrate the impact of thermal load in an active MCF, the left-hand side of Fig. 1 exemplarily shows a simulated temperature profile at the end facet of a 5x5-MCF operated at high average output power. The underlying simulation tool together with made assumptions will be described in section 2. As it has also been shown in [17], it can be clearly seen that the temperature reaches its maximum at the center of the fiber and, due to heat conduction, decreases towards the outer fiber surface. Due to this radial temperature gradient, which affects the index profile through the thermo-optic effect [16], mode shrinking and mode deformation occur at each core of the MCF, as depicted on the right-hand side of Fig. 1. It can be seen that the modes shrink and are shifted towards the fiber center. These effects depend on the position of each core and can be significantly stronger for the outer cores than for the central ones, which in turn impacts the amplification process as well as the beam combination in a detrimental way. Worth to be mentioned, the presented analysis is, to a large extent, valid for arbitrarily arranged cores and beam combination in general. However, we concentrate on the mentioned squared pattern of cores and filled aperture coherent combination as described and experimentally demonstrated in [18]. Coherent combining is a phase-stabilized superposition of the E-fields, which leads to a summation of the power emitted by the individual emitters, in the case of a MCF, the individual cores. However, as shown later in this letter, the thermally induced non-uniform mode shrinking between the different cores (see Fig. 1 right) will affect the spatial overlap, with it the combining efficiency and finally the combined output power. Also, the output power emitted by the individual cores might vary, since the amplification efficiency depends on the spatial overlap of the modes with the doped cores. Moreover, the thermally induced modifications of the different cores will result in different optical path lengths and a different accumulated nonlinear phase, which could be important for the coherent combination in ultrashort pulse laser systems. In this work we will pay particular attention to these effects and, especially, we will evaluate the maximum extractable and combinable output power/energy. In the first section, the simulation tool together with the fiber parameters will be described. Subsequently, the different effects originating from the non-uniform temperature profile will be explained. In the last section, the simulation results will be presented and discussed together with the power and pulse energy scaling prospects. Simulation tool and fiber parameters A simulation tool has been developed in order to describe an MCF amplifier system accounting for all the previously mentioned effects. The working principle of this tool is schematically depicted in Fig. 2. 
It iteratively solves the laser rate equations, from which the heat-load can be calculated, whereby we assume that it is caused exclusively by the quantum defect. This results in a temperature profile that leads to a modification of the refractive index profile and, with it, to the distortion of the guided core modes along the fiber. Since single transverse mode operation in the individual cores of the MCFs is assumed, only the fundamental modes (FM) in each of the cores are taken into account. With the assumption that no optical coupling between the cores occurs (due to the large core-to-core distance, see [19]) the propagating modes can be calculated by solving the scalar wave equation for each core, as described in [20], at every point along the fiber. The distortion of the propagating modes along the fiber influences, in turn, the solution of the rate equations, which must then be recalculated. This cycle is repeated until convergence is reached. As convergence criterion, a maximum deviation of 1 % in the temperature, signal and pump power evolution between two iteration steps has been chosen. The result from the simulation contains the 3-dimensional temperature and refractive index profiles, the power evolution of the signal light in each core, the pump power evolution as well as the electrical field distribution in each core along the fiber. The simulation tool is capable of solving fiber structures with arbitrary core dimensions, core arrangements and index profiles. In this work we consider configurations of MCFs with a quadratic arrangement of step-index cores (as shown in Fig. 3), from 2x2 up to 10x10 cores. The square pattern is compatible with the typical splitting and combining elements presented in [18], enabling an efficient coherent combination of the emitting cores. The simulated ytterbium-doped MCFs are counter-pumped, whereas the performance of the fibers at different pump power levels is analyzed. Fig. 3: Schematics of MCFs with different core and cladding configurations. The pump cladding (orange) with a low index layer (green) is adapted in each simulated case to achieve the same small signal pump absorption of 20 dB/m. The outer fiber diameter is at least 1.5 mm, but when the pump cladding (orange) is larger than this value, the outer fiber diameter has the same dimension as the pump cladding. The fiber length has been chosen to be 1 m. This is a typical length of rod-type fibers which are suitable for high-energy, ultrafast pulse operation. The diameter of the active cores (with an Yb-doping concentration of 5•10 25 ions/m 3 ) in the different designs are 30, 50 and 80 µm respectively. All cores possess a flat step-index profile (in cold operation) that corresponds to a V-parameter of 3. The pump cladding consists of silica glass and its size is appropriately chosen to achieve a small-signal pump absorption of 20 dB/m at 976 nm wavelength in all cases. The green ring indicates a low index layer to obtain pump guidance within the orange area (pump cladding). The fiber diameter has at least the same dimension as the cladding size. However, in the case of the smaller pump claddings (i.e those with a diameter smaller than 1.5 mm), an additional silica over-clad is added to obtain a mechanically stable rod-type fiber of at least 1.5 mm outer diameter (blue ring in Fig. 3). Furthermore, we would like to point out that, as usual for rod-type fibers, no additional polymer coating is considered in these simulations. 
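A minimal sketch of the fixed-point iteration described at the beginning of this section is given below. The individual solver functions are placeholders standing in for the rate-equation, quantum-defect heating, heat-conduction, thermo-optic and scalar mode solvers of the actual tool; only the convergence criterion (less than 1 % change in the temperature and power evolution between iterations) is taken from the text.

```python
# Schematic sketch of the iteration loop outlined above (solver details are
# placeholders, not the authors' implementation).
import numpy as np

def _converged(new, old, tol=0.01):
    # relative change below 1 % everywhere (temperature, signal, pump)
    return np.max(np.abs(new - old) / np.maximum(np.abs(old), 1e-12)) < tol

def simulate_mcf(solve_rate_equations, heat_load, temperature_field,
                 thermo_optic_index, solve_core_modes, initial_modes,
                 max_iter=50):
    modes, previous = initial_modes, None
    for _ in range(max_iter):
        powers = solve_rate_equations(modes)      # signal and pump evolution per core
        q = heat_load(powers)                     # quantum-defect heat deposition
        temperature = temperature_field(q)        # 3-D temperature profile
        index = thermo_optic_index(temperature)   # refractive-index modification
        modes = solve_core_modes(index)           # distorted fundamental modes
        state = np.concatenate([np.ravel(powers), np.ravel(temperature)])
        if previous is not None and _converged(state, previous):
            break
        previous = state
    return powers, temperature, index, modes
```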
The coolant medium at the fiber outer surface is chosen to be water (with a thermal conductivity of 0.58 W/(m·K) and a temperature of 25 °C).

Thermal effects and their consequences

As mentioned before, the typical non-uniform thermal transverse profile in an MCF, shown in Fig. 1 (left), will lead to different propagation constants along the individual cores and, thus, to different optical path lengths. Our simulations reveal that the maximum optical path difference among all herein considered scenarios, encountered in a 1 m long 10x10 80 µm MCF with 300 W of average output power being extracted per core, is smaller than 25 µm, which is low enough to be compensated by phase stabilization systems, as done in [18]. A more serious impact of the thermal profile, shown in Fig. 1 (right), is mode shrinking, which also varies from core to core. An example of the evolution of the effective mode area of the propagating fundamental modes (FM) in the different cores (color-coded) is depicted in Fig. 4 for a 5x5 MCF with 80 µm cores, counter-pumped at 976 nm wavelength with an average output power of 3.4 kW. It can be seen that the propagating FMs in the inner cores of the MCF (red and green curves) show significantly less mode shrinking than the one in the outer core (blue curve). In this example a maximum difference of ~20 % in the mode area at the fiber end facet between the inner and the outer core is obtained. The different distribution of the intensity profiles in the cores at the fiber end facet will influence the spatial combining efficiency $\eta$, which is calculated as

$\eta = \dfrac{P_{\mathrm{comb}}}{\sum_{i} P_i}.$

Here $\eta$ can be understood as the ratio of the combined power $P_{\mathrm{comb}}$ to the sum of the powers $P_i$ emitted by all cores. The combined power is calculated by spatially superimposing the normalized electrical field amplitudes $E_i(x - x_i, y - y_i)$ emitted by the individual cores with the core centers $(x_i, y_i)$ at the fiber end facet (which are part of the solution given by the simulation tool). At this point it is important to note that different output powers emitted by each core might occur. This is due to the fact that the different mode sizes along the fiber will, in principle, influence the amplification behavior in each core (due to the different overlap with the doped core area). This is also considered in the calculation of the combining efficiency. Fig. 5 shows the intensity profiles of a 5x5 MCF with 80 µm cores at different power levels. On the left side (corresponding to 500 W of total output power) almost no mode shrinking occurs, which results in a high spatial combining efficiency of nearly 100 %. However, at high output power levels (3400 W in total, as shown on the right-hand side) significant mode shrinking occurs. Since this mode deformation is not homogeneous across the cores, the combining efficiency drops to ~75 % in this case. Another important parameter that has to be taken into account is the accumulated nonlinear phase of each core. This parameter can play a significant role in coherently combined ultrashort-pulse laser systems (where high peak powers occur, even with stretched pulses), since it might lead to different temporal phases between the single emitters. As described in [21], the so-called B-integral for the i-th core can be calculated as

$B_i = \dfrac{2\pi}{\lambda} \int_0^{L} n_2\, I_{\mathrm{peak},i}(z)\, \mathrm{d}z,$

with $\lambda$ being the signal light wavelength, $n_2$ the nonlinear index and $I_{\mathrm{peak},i}(z)$ the peak intensity of the propagating pulse along core i.
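The two figures of merit defined above can be evaluated numerically as sketched below. The combining-efficiency expression assumes an idealized, lossless filled-aperture combiner in which the combined power is the power of the coherent field sum divided by the number of cores; this 1/N factor is an assumption of the sketch rather than the paper's exact implementation, and the fused-silica nonlinear index used as a default is a typical literature value.

```python
# Illustrative evaluation of the combining efficiency and the B-integral for
# fields sampled on a common (x, y) grid after re-centring each core's mode.
import numpy as np

def combining_efficiency(fields, powers, dx, dy):
    """fields: per-core mode profiles, each normalized so that
    sum(|E|^2) * dx * dy == 1; powers: emitted power per core (W)."""
    n = len(fields)
    superposed = sum(np.sqrt(p) * e for p, e in zip(powers, fields))
    p_comb = np.sum(np.abs(superposed) ** 2) * dx * dy / n   # ideal-combiner assumption
    return p_comb / np.sum(powers)

def b_integral(peak_intensity, z, wavelength=1.03e-6, n2=2.7e-20):
    """B_i = (2*pi/lambda) * integral of n2 * I_peak,i(z) dz, in radians."""
    return 2.0 * np.pi / wavelength * np.trapz(n2 * np.asarray(peak_intensity), z)
```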
The strength of the simulation tool becomes apparent in this calculation, since the mode shrinking along the fiber, and with it the change in peak intensity, is taken into account. As shown in Fig. 4, the mode shrinking is different in each core, leading to different B-integrals, which are also accounted for in the considered coherent combination. A detailed analysis of the impact of B-integral variations among the interference partners on the combining efficiency is given in [22].

Results and discussion

As described in the last section, there are various effects that must be considered when operating an MCF amplifier at high thermal loads. First, prohibitively high temperatures might occur due to the extremely high powers that can be extracted from these waveguides. In Yb-doped fibers, in particular, the optical properties (such as the absorption and emission cross sections, as described in [23]) will drastically change when the fiber exceeds a temperature of 500 °C. In particular, the pump absorption from 900 to 1000 nm and the emission above 1010 nm decrease, while the re-absorption in the signal band increases. Moreover, thermally induced damage can set in at around 1000 °C, as additionally described in [24]. Following these studies, a maximum occurring temperature of 500 °C in the MCF will be considered as an upper temperature limit of the safe working range in our simulations. Another thermal effect that must be taken into account is transverse mode instability (TMI), which occurs at high average output powers (in each core). Previous experiments with single- and multicore fibers have shown that the TMI threshold for ~1 m long fibers can be estimated to occur at ~300 W per core [25,26], while the threshold of each single core is not changed by the number of cores [19]. Moreover, it has been theoretically shown that the TMI threshold does not significantly change with the core size when the V-parameter is kept constant [27,28]. Therefore, in our simulations we define a maximum extractable average power per core of 300 W for all core sizes as another performance limit due to TMI. Taking all these limitations into account leads to the results shown in Fig. 6, where the maximum occurring temperature in the fiber is plotted as a function of the extracted power per core for different MCF designs ranging from 2x2 to 10x10 cores (30 µm core diameter on the left and 80 µm core diameter on the right-hand side). It turns out that the maximum temperature always occurs in one of the cores since the heat (due to the quantum defect) is generated there. It can be seen that this maximum temperature rises almost linearly with the extracted power per core in all cases. For any fixed extracted power per core, the absolute temperature also rises with an increasing number of cores (from 2x2 up to 10x10) since much more total power is extracted from the same fiber length. In most cases the stable operation regime of the fiber (highlighted as a green area in Fig. 6) is limited by the TMI threshold when the extracted power per core reaches 300 W. Consequently, this corresponds to a total average output power (taking all cores into account) ranging from 1.2 kW for the 2x2 MCF to almost 30 kW for the 10x10 MCF in the case of 80 µm cores. However, in a few cases, especially for MCFs with a smaller core diameter of 30 µm and a high number of cores, it can be seen that the thermal limit can be reached before the TMI limit.
Due to their smaller fiber and core dimension, MCFs with 30 µm cores reach a higher maximum temperature at the same extracted power compared to 80 µm cores. Regarding a simple step-index-fiber this behavior was analytically shown and explained in [16] and is analogous for MCFs. Thus just 220 W, 200 W, 170 W and 150 W of average output power per core (instead of 300 W) can be reached with the 7x7, 8x8, 9x9 and 10x10 MCF, respectively, when considering 30 µm core diameter. Another important aspect beside the maximum extractable output power per core is the combining efficiency, as described in the previous section. This parameter has to be taken into account for coherently combined systems since it limits the combinable output power emitted by an MCF. To stay in an acceptable working regime we additionally define a self-imposed performance limit of a minimum combining efficiency of 70 %. The combining efficiency as a function of the extracted power per coreconsidering MCFs with 30 µm (left) and 80 µm (right) core diameter -is shown in Fig. 7. It can be seen that the combining efficiency decreases with more extracted power per core for both core diameters. This is due to the non-uniform mode shrinking/deformation shown in Fig. 5. Moreover, the degradation of the combining efficiency is worse for larger cores since their propagating modes (for a constant V-parameter) are more sensitive to thermally induced changes. Additionally, it can be seen that using more cores per fiber will also lead to a decrease in the combining efficiency at the same extracted power per core. This is because more cores generate more total heat in the fiber, which will lead to a higher distortion of the modes at the fiber end. It can be seen that, e.g. with a 6x6 MCF with 30 µm cores a combining efficiency of ~ 93 % is reached at an extracted power of 300 W per core, which corresponds to a total extracted and combined output power of ~ 10 kW. When combining the results from Fig. 7 with those of Fig. 6, it can be seen that average power scaling of MCFs with smaller core dimensions (and high number of cores) is limited by the maximum occurring temperature, whereas MCFs with large core dimensions are instead restricted by the lower limit of the combining efficiency of 70 % (see Fig. 7 right). Taking all the limitations simultaneously into account (i.e. the maximum occurring temperature of 500 °C, the maximum extractable output power per core of 300 W due to TMI and a minimum combining efficiency of 70 %) will lead to a maximum combinable output power for all the different MCF configurations with different core sizes, as shown in Fig. 8. Thus, the maximum combinable output power for 30, 50 and 80 µm MCFs is depicted by the red, green and blue curves, respectively. The red shaded area shows the operating regime above our theoretical maximum (i.e. considering an extracted output power per core higher than 300 W together with a combining efficiency of 100 %). With more cores, the maximum combinable output power starts rising linearly for the different core sizes (from 30 µm to 80 µm). However, large core MCFs show a worse performance in terms of the combinable output power since the combining efficiency decreases drastically, as shown in Fig. 7 (right). Additionally, it can be seen that the linear rise of the combinable output power for the 30 µm MCF flattens for 7x7 cores and more. 
At this point a maximum temperature of 500 °C is observed in the fiber before the maximum extracted power per core of 300 W is reached, as depicted in Fig. 6 (left). The same flattening-effect can be observed with the 50 µm MCF with more than 9x9 cores which leads to the same performance in terms of average power as the 30 µm MCFs. In spite of this, it can be seen that more than 10 kW (with at least a 6x6 30 µm MCF) of combinable output power can be realized with such a 1m long MCF arrangement. Please note that this presented study is rather an extreme scenario and that the combinable average power can be further scaled by using longer fibers since the thermal load is distributed over a longer fiber length. The results shown above are valid for CW-operation in MCF systems. With pulsed operation (and, especially, ultrafast pulse operation) the accumulated nonlinear phase -described in equation (2) -must be accounted for since it will impose a further performance limit. Thus, instead of using the CW-power along the fiber the evolution of the pulse peak power must be considered. In order to do this, it will be assumed that the peak power of the pulses follows the same amplification profile along the fiber as the CW power calculated by the simulation tool. In this case Gaussian pulses with a stretched duration of 10 ns are assumed (this stretched pulse duration, although large, is something that will be implemented in the next generation of stateof-the-art, high-performance ultrafast fiber laser systems). This information, together with the repetition rate (that will be varied in this study) allows the calculation of a corresponding pulse energy and B-integral. The nonlinear accumulated phase will in most cases distort the temporal profile of the recompressed pulses. Using a spectral shaping device is a well-known technique to reduce the impact of the temporal Kerr nonlinearity, the B-integral [29,30]. Even though this approach has achieved good results in the past, compensating for high B-integrals is technologically challenging [5,31]. Hence, we chose a maximum B-integral of 10 rad as the performance limit in the following consideration. Taking this limitation into account (in addition to the others already considered in the CW study) will lead to a maximum combinable and extractable pulse energy for the different MCFs at different repetition rates, as shown in Fig. 9. Hereby it can be seen that, with decreasing repetition rate (at the same combined output power), the pulse energy rises since it is only limited by the maximum combinable output power, as seen in Fig. 8. However, at a certain point a maximum B-integral of 10 rad or more (red shaded area) is reached which limits any further scaling of the pulse energy. It can be seen that for 30 µm MCFs (left) the maximum Bintegral is reached already at 100 kHz whereas 80 µm MCFs (right) allow for repetition rates as low as 10 kHz. The larger mode area in 80 µm cores consequently leads to significantly higher pulse energies. At this point it is important to note that the different B-integrals for all cores have been taken into account to calculate the impact on the combining efficiency, as demonstrated in [22]. It turns out that, even at the highest pulse energies, the combining efficiency is just affected by a very few percent and, therefore, the impact of the B-integral on this parameter is neglected in the presented considerations. 
The maximum extractable and combinable pulse energy for MCFs with different core dimensions (30, 50, 80 µm) at their optimum repetition rate (i.e. that corresponding to a maximum occurring B-integral of 10 rad, as shown in Fig. 9) is depicted in Fig. 10. It can be seen that using a 10x10 MCF with 30 µm core dimensions (left figure) will allow for a maximum pulse energy of ~ 130 mJ at 100 kHz repetition rate. On the other hand, using a 10x10 MCF with 80 µm core dimensions will result in a maximum extractable and combinable pulse energy of ~ 450 mJ when using a repetition rate of 10 kHz. Conclusion Simulation results for coherently combined ytterbium-doped multicore fiber amplifier systems have been presented. In order to do this, a tool has been developed that numerically solves and analyzes the amplification behavior in MCFs together with thermal considerations. Using this model allows predicting the expected thermal load and temperature-dependent effects such as mode-shrinking as a function of MCF design parameters. It turns out that the combining efficiency is particularly strongly influenced by the thermally-induced mode shrinking and deformation that differs from core to coredepending on its position. Due to the limitations imposed by thermal effects, the best strategy for scaling the average power appears to be increasing the number of cores and using relatively small core diameters. By doing so, our simulations suggest that it should be possible to extract and combine around 13.5 kW from a just 1 m long 10x10 MCF with 30 µm core dimensions. The use of larger core diameters, on the other hand, will help to maximize the extractable and combinable pulse energy in these systems in ultrashort pulse operation. Assuming a stretched pulse duration of 10 ns and a maximum B-integral of 10 rad, it was possible to show that femtosecond pulses with up to 450 mJ of pulse energy in a 10x10 MCF with 80 µm core dimensions together with an average power of 4.5 kW can be achieved. At this point we want to emphasize that our study does not account for other effects that might reduce the combining efficiency in MCFs, such as multimode operation, tolerances in core sizes/positions or polarization changes. According to already existing coherently combined systems, an additional reduction of power/energy performance of 10-20 % (due to a plethora of spurious effects, such as multimode operation of the cores, imperfections in the optical elements, etc.) has to be taken additionally into account for a realistic estimation. According to our simulations, considering the aforementioned power and energy scaling prospects together with these additional limitations, a combined average power of more than 10 kW in CW operation and more than 400 mJ (4.0 kW) in pulsed operation seem feasible with MCF systems. Future related studies will focus on experimental investigations of the thermal effects that have been revealed and presented in this work. Moreover, strategies have to be worked out to mitigate the impact of thermal effects in MCFs so that the combined average output power and pulse energy can be further increased. This will be an important step towards compact ultrashort pulse Joule-class MCF laser systems in the future.
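The power and energy bookkeeping behind these figures can be reproduced in a few lines; the sketch below simply combines the quoted per-core limit, the spatial combining efficiency and the repetition rate, and is not the authors' simulation.

```python
# Bookkeeping sketch for the quoted power and energy figures: total
# combinable average power = (number of cores) x (per-core limit) x
# (combining efficiency); combinable pulse energy = average power / repetition
# rate. The 10 ns stretched duration is the value quoted in the text.

def combinable_power_kw(cores_per_side, p_per_core_w, eta_comb):
    return cores_per_side**2 * p_per_core_w * eta_comb / 1e3

def pulse_energy_mj(p_avg_kw, rep_rate_khz):
    return p_avg_kw / rep_rate_khz * 1e3   # kW / kHz gives J per pulse; x1e3 -> mJ

def stretched_peak_power_mw(energy_mj, duration_ns=10.0):
    return energy_mj * 1e-3 / (duration_ns * 1e-9) / 1e6

print(combinable_power_kw(6, 300, 0.93))    # 6x6, 30 um cores  -> ~10 kW
print(combinable_power_kw(10, 300, 1.00))   # 10x10 upper bound -> 30 kW
print(pulse_energy_mj(13.5, 100))           # 10x10, 30 um, 100 kHz -> 135 mJ (~130 mJ quoted)
print(pulse_energy_mj(4.5, 10))             # 10x10, 80 um, 10 kHz  -> 450 mJ
print(stretched_peak_power_mw(450))         # ~45 MW peak power in the stretched pulse
```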
Untargeted proteomics-based approach to investigate unintended changes in genetically modified maize used for food and feed purposes Profiling technologies, such as proteomics, allow the simultaneous measurement and comparison of thousands of plant components without prior knowledge of their identity. The combination of these non-targeted methods facilitates a more comprehensive approach than targeted methods and thus provides additional opportunities to identify genotypic changes resulting from genetic modification, including new allergens or toxins. The purpose of this study was to investigate unintended changes in GM Bt maize grown in South Africa. In the present study, we used bi-dimensional gel electrophoresis based on fluorescence staining, coupled with mass spectrometry in order to compare the proteome of the field-grown transgenic hybrid (MON810) and its near-isogenic counterpart. Proteomic data showed that energy metabolism and redox homeostasis were unequally modulated in GM Bt and non-GM maize variety samples. In addition, a potential allergenic protein – pathogenesis related protein -1 has been identified in our sample set. These finding highlight the suitability of unbiased profiling approaches to complement current GMO risk assessment practices worldwide. Introduction Genetically modified organisms (GMOs) have been extensively grown and consumed in a number of countries since 1998. Twenty-years after the first cultivation, the accumulated genetically modified (GM) crop area surged to a record of 191.7 million hectares in 26 countries around the world (ISAAA, 2018). Despite the widespread use of GMOs, the need for biosafety science remains a concern and it is mandated in the domestic legislation of many countries as well as in international treaties (Davison 2010;Eckerstorfer et al. 2019). Confidence in the safety and reliability of GMO food products depends significantly on the genetic integrity of the organism; however, the frequency of transformation-induced mutations which could result in altered metabolism, novel fusion proteins, or other pleiotropic effects leading to adverse effects are poorly understood (Zolla et al. 2008;Kohli et al. 2010;Brandão et al. 2010;Agapito-Tenfen et al. 2018). In fact, the transgene insertion site cannot be predetermined and for this reason transgenes may be inserted in functional genomic regions thus disrupting the structure and/or altering the regulation patterns of genes from the plant host genome as previously observed for some commercialized GM crops (Holck et al. 2002;Hernandez et al. 2003;Rosati et al. 2008;Morisset et al. 2009;La Paz et al. 2010). Other secondary unintended effects of genetic modification can also arise during conventional breeding as the result of hybridization or spontaneous mutations, processes that are integral to breeding programs (Van Gelder et al. 1991;Conner andJacobs 1999 andFAO 2002). Other documented effect is related to the application of supporting technologies used in the GMO agroecosystem, such as the use of combined herbicides (Bøhn and Millstone, 2019). Profiling technologies, such as proteomics, allow the simultaneous measurement and comparison of thousands of plant components without prior knowledge of their identity. The combination of these non-targeted methods facilitates a more comprehensive approach than targeted methods and thus provides additional opportunities to identify genotypic changes resulting from genetic modification, including new allergens or toxins (Ruebelt et al. 
2006;Agapito-Tenfen et al. 2013). The identification of such changes in the GMO that could cause adverse effects on the conservation and sustainable use of biological diversity, taking also into account risks to human health, is a first step in the GMO risk assessment process (UNEP 2016). Two-dimensional electrophoresis (2-DE) gel-based proteomic approaches have been widely used to investigate the protein-level metabolism of transgenic maize, soybean, cotton, rapeseed and rice in contrast to their non-transgenic counterpart in the past decade (Ren et al. 2009;Coll et al. 2011;Barbosa et al. 2012;Xue et al. 2012;Liu et al. 2015;Wang et al. 2015;Benevenuto et al. 2017;Galazzi et al. 2019). However, these studies do not report consistent results, which may be explained by their use of a variety of organism's genetic backgrounds and/or different growth conditions, as well as variations in the technologies applied (Ricroch et al. 2011). These inconsistencies highlight the importance of building a "database" of knowledge around genetic variability in GM crops, as well as the need for harmonization of analytical methods that could be addressed through continuous multi-laboratory tasks (Batista et al. 2010;Zanatta et al. 2020). Among the different omics platforms investigating the proteome, 2-DE gel-based approaches enable the identification of protein isoforms that would not be possible by means of high throughput omics systems. In the present study, we used bi-dimensional gel electrophoresis based on fluorescence staining, coupled with mass spectrometry in order to compare the proteome of the field-grown transgenic hybrid (MON810) and its near-isogenic counterpart commercially available in South Africa. Protein profiles were generated and compared between the two plant varieties to assess differences in protein expression. Differentially expressed proteins were successfully identified and their molecular function and cellular components were analyzed. We observed imbalanced redox metabolism and a potential allergenic protein in GM maize expressing Bt toxin which have been grown in field conditions mimicking real world agricultural scenarios. Plant material and growing conditions The cultivation of GM maize MON810 event (unique identifier MON-ØØ81Ø-6, Monsanto Company), also known as Bt-maize, has been approved in South Africa in 1997 (CERA 2012). MON810 was genetically modified by particle bombardment to genomic insert the modified cry1Ab gene from Bacillus thuringiensis. The expression product of this gene is the insecticide protein (Bt toxin) Cry1Ab. White maize variety PAN 6Q-321B containing MON810 event (Pannar Seed Ltda., South Africa) and its non-GM near isogenic variety PAN 6Q-121 (Pannar Seed Ltda., South Africa) were planted in November 2009. These are single-cross hybrid seeds which are the progeny derived from the cross of a maternal endogamous line "A" with the paternal endogamous line "B". This seed population is, therefore, highly genetically similar (all genotype should be AB). After the confirmation of MON810 event in GM seeds and the absence in its near isogenic non-transgenic (non-GM) counterpart (data not shown), plants were grown side by side in 2.4 Ha blocks (density of 20 000 plants/ Ha) in the same field located at the University of Free State Research Farm, Bloemfontein, South Africa. Plots were managed following standard agricultural practices in the region, without the application of herbicides. No fungicide or insecticide was either applied. 
Six plants were randomly sampled per maize hybrid from each plot inner rows, in order to avoid border effects. Maize leaves were collected at R1 stage (approximately 90 days after sowing). Sampling was performed during early morning in which around 5 g of material was collected from the third upper leaf, consisting of a 10 cm long tissue piece located in the mid portion. Plant samples were carefully checked for the absence of herbivory and disease symptoms, as well as necrotic tissue areas. The leaves were cut, placed in 15 ml tubes before immersion in liquid nitrogen and transported to the lab. The samples were kept at -80ºC until used. Protein extraction and sample labeling for 2-D DIGE gel electrophoresis Each sample was separately ground-up in a mortar with liquid nitrogen and protein extraction was subsequently carried out according to Carpentier et al. (2005) with some modification. Phenol extraction and subsequent methanol/ammonium acetate precipitation was performed and PMSF was used as protease inhibitor. Pellets were resuspended in an urea/thiourea buffer compatible to DIGE (4% w/v CHAPS, 5 mM PMSF, 7 M urea, 2 M thiourea and 30 mM Tris base; all reagents were purchased from Sigma-Aldrich Corporation, St. Louis, USA). Protein quantification was determined by means of the copper-based method 2-D Quant Kit (GE Healthcare Bio-Sciences AB, Uppsala, Sweden). A pool of 60 ug of protein samples per variety (consisting of equal amounts of each of the six plants assessed per treatment) were labeled with 400 ρmol/ul of CyDye DIGE fluors (GE Healthcare Bio-Sciences AB, Uppsala, Sweden), according to the manufacturer's instructions. Each pool was first separately labeled with a different fluor. After protein-fluor hybridization, samples were treated with lysine (10 mM) to stop the reaction and then mixed together for 2-D DIGE gel electrophoresis separation. 2-D DIGE gel electrophoresis conditions In order to determine the biological variance among our samples, a preliminary test has been carried out to established baseline variation information on samples collected for this study (Coll et al. 2011). The pre-test consisted of 450 ug of each of the six unlabeled samples from each variety which were then separated by 2-D gels using Immobiline™ DryStrip gels of 13 cm and a linear pH range of 4-7 (GE Healthcare) coomassie brilliant blue G-250 colloidal stained gels (Candiano et al. 2004). 2-D gel electrophoresis conditions were performed as described by Weiss and Görg (2008). Once determined that variability within samples were minimal and felt within the optimal range for proteomic analysis, the extracted proteins were separated by two-dimensional gel electrophoresis (Weiss and Görg 2008). In the isoelectric focusing (IEF) step, strip gels of 24 cm and a linear pH range of 4-7 (GE Healthcare) were used. Strips were initially rehydrated with labeled protein samples and a rehydration solution (7 M urea, 2 M thiourea, 2% w/v CHAPS, 0.5% v/v IPG buffer (GE Healthcare), 0.002% w/v bromophenol blue). Strips were then processed using an Ettan IPGPhor IEF system (GE Healthcare) in a total of 35000 Volts.h -1 and subsequently reduced and alkylated for 30 min under slow agitation in a Tris-HCl solution (75 mM), pH 8.8, containing 2% w/v SDS, 29.3% v/v glycerol, 6 M urea, 1% w/v dtt and 2.5% w/v iodocetamide. Strips were placed on top of SDS-PAGE gels (12%, homogeneous) and used in the second dimension run with a Hoefer DALT system (GE Healthcare). 
Gels were immediately scanned with the FLA-9000 modular image scanner (Fujifilm Lifescience, Dusseldorf, Germany). Preparative gels for each treatment were also run in order to excise spots showing statistically significant differential expression between varieties. These were performed with a 700 μg load of the total protein pool from each treatment, separately, in 24 cm gels, and stained with Coomassie (MS-compatible) (Candiano et al. 2004).

Gel analysis

For the purpose of addressing plant-to-plant variability within our GM and non-GM varieties, the pre-test experiment consisted of twelve gels, six from each variety. These were analyzed together with the Image Master 2D Platinum software, version 7.0 (GE Healthcare). Gels were compared, and the matched spot volumes of each gel were used to determine the biological variation by Principal Component Analysis (PCA) using Euclidean distance for quantitative analysis. PCA was first applied to determine the proportion of the total proteomic variation that originates from differences between biological repetitions. PCA was performed by examining similarities of correlations between the observed measures. The analysis was carried out using the covariance matrix with the Multibase PCA PLS Cluster-Analysis Excel Add-in Program (Numerical Dynamics Co.). The 2-D DIGE experiment consisted of four technical replicate gels, each containing a loading pool of six biological replicates per variety. Cross-comparisons among the different samples were performed using the Image Master 2D Platinum software, version 7.0. Hierarchical matching of gels was organized in such a way that technical replicate gels were compared first and exclusive spots were removed from subsequent analysis. To analyze gel similarities or experimental variations, such as disparities in stain intensity or sample loading, scatter plots based on a linear dependence between the spot values of one gel and the corresponding values in the reference gel were produced. Spots within each variety with a high coefficient of variation (>20%) were excluded from the analysis. Therefore, only consistent spots for each variety were used in the comparative analysis. Statistical analyses were performed with the Student's t test (95% confidence interval).

In-gel digestion and protein identification by MS/MS

Gel spots were excised and subjected to in-gel reduction, alkylation, and tryptic digestion using 2-10 ng/μl trypsin (V511A; Promega) (Shevchenko et al. 2006). These were analyzed by the Proteomics Platform at the Arctic University of Norway (UiT). Peptide mixtures containing 0.1% formic acid were loaded onto a nanoACQUITY Ultra Performance LC System (Waters, Massachusetts, USA), containing a 5-μm Symmetry C18 trap column (180 μm × 20 mm; Waters) in front of a 1.7-μm BEH130 C18 analytical column (100 μm × 100 mm; Waters). Peptides were separated with a gradient of 5-95% acetonitrile, 0.1% formic acid, with a flow of 0.4 μl/min, eluted to a Q-TOF Ultima mass spectrometer (Micromass; Waters). The samples were run in data-dependent tandem MS mode. Peak lists were generated from MS/MS spectra by the Protein Lynx Global Server software (version 2.2; Waters). The resulting pkl files were searched against the NCBInr 2011120 protein sequence database using Mascot MS/MS ion search (Matrix Sciences; http://matrixscience.com). The taxonomy used was Viridiplantae (Green Plants), and 'all entries' was used for contamination verification.
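To make the spot-screening workflow described above concrete, the following minimal Python sketch applies the two filters used in the gel analysis, the coefficient-of-variation cut-off and the Student's t test, to a hypothetical matrix of matched spot volumes. The array shapes, the simulated volumes and the variable names are illustrative only; they stand in for the normalized spot volumes exported from the image analysis software.

```python
import numpy as np
from scipy import stats

# Hypothetical spot-volume matrices: rows = replicate gels, columns = matched spots.
# The lognormal draws merely stand in for normalized volumes from the gel software.
gm_volumes = np.random.lognormal(mean=2.0, sigma=0.1, size=(4, 500))      # 4 technical replicates
non_gm_volumes = np.random.lognormal(mean=2.0, sigma=0.1, size=(4, 500))

def consistent_spots(volumes, max_cv=0.20):
    """Keep spots whose coefficient of variation across replicate gels is <= 20%."""
    cv = volumes.std(axis=0, ddof=1) / volumes.mean(axis=0)
    return cv <= max_cv

# Only spots that are consistent within both varieties enter the comparison.
keep = consistent_spots(gm_volumes) & consistent_spots(non_gm_volumes)

# Student's t test (95% confidence level) on the consistent spots, one test per spot.
t_stat, p_val = stats.ttest_ind(gm_volumes[:, keep], non_gm_volumes[:, keep], axis=0)
differential = np.where(p_val < 0.05)[0]
print(f"{keep.sum()} consistent spots, {differential.size} nominally differential spots")
```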
The following parameters were adopted for the database searches: complete carbamidomethylation of cysteines and partial oxidation of methionines; peptide mass tolerance ± 100 ppm; fragment mass tolerance ± 0.1 Da; one missed cleavage; and significance threshold level (P < 0.05) for Mascot scores (-10 Log(P)). Even when high Mascot scores with significant values were obtained, a combination of automated database searches and manual interpretation of peptide fragmentation spectra was used to validate protein assignments. Molecular functions and cellular components of proteins were compared against the ExPASy Bioinformatics Resource Portal (Swiss Institute for Bioinformatics; http://expasy.org) and the Gene Ontology Consortium (http://geneontology.org). The genome location of each protein was searched against the Maize Genome Sequencing Project (http://www.maizesequence.org/index.html) by using the protein name. A database search for allergenic epitopes was performed at the Allergen Database for Food Safety (ADFS; Division of Biochemistry and Immunochemistry of the National Institute of Health Sciences; http://allergen.nihs.go.jp/ADFS/). A graphical abstract is presented in Figure 1.

Figure 1: Graphical abstract and methodological pipeline for this study. Proteomic profiling analysis was performed for GM versus non-GM maize samples expressing the Cry1Ab cassette. Plants were field grown in South Africa and subjected to a phenol-based protein extraction. 2-D fluorescent gels were analyzed and statistically significant spots (P < 0.05) were sequenced by MS/MS analysis. Identified proteins were then searched against public databases for their annotations.

Suitability and reproducibility of 2-D gel-based DIGE experiments

Profiling techniques are broadly accepted as being capable of delivering sound descriptions of their target class of molecules in a range of diverse fields, from molecular medicine to food safety and plant physiology studies (Karahalil 2016; Mehta et al. 2019; Argueso et al. 2019; Carrera et al. 2020). A number of molecular profiling studies have already indicated unintended effects of genetic modification (Coll et al. 2011; Wang et al. 2015; Peng et al. 2019; Zanatta et al. 2020). These studies revealed, inter alia, that in addition to effects of the genetic modification itself, there are also effects on the plant's physiology arising from (i) the genetic background, (ii) environmental conditions during growth, (iii) sampling procedures and (iv) plant-to-plant variability. Even the growth conditions of the previous generation (such as during the production of seeds) are known to cause epigenetic effects (Zolla et al. 2008). Gel-free high-throughput mass spectrometry (MS) approaches have been applied in the past years to identify proteins on a larger scale and with higher sensitivity compared to traditional compositional analysis, and to reveal new aspects of the protein-level regulatory metabolism of GM crops (Garcia-Canas et al. 2011; Valdivel 2015). However, 2-DE technology is irreplaceable because it yields visualization maps of protein profiles, which provide information on the abundance of proteins and reliable evidence for existing protein isoforms (Benesova et al. 2012; Tan et al. 2017; Fonseca et al. 2012). Therefore, two-dimensional gel electrophoresis (2-DE) is still one of the most important techniques, mostly due to its high performance regarding the separation of complex mixtures of full-length proteins.
Ultimately, gel-free and gel-based approaches are both of great value to a proteomic study and often provide complementary information for an overall richer analysis (Abdallah et al. 2012). Comparative proteomic analysis requires reliable methods for the investigation of differential protein expression. An important source of variation is derived from technical artifacts or heterogeneities (i.e. differences between sample collection, IEF runs and gel runs). Blocking enables an effective comparison between observed conditions with little dependence on technical heterogeneities, thus improving the precision of the statistical analyses (Valledor and Jorin 2011). This approach has been successfully applied to reduce bias related to protein labeling in 2-D DIGE experiments. In the present investigation, we have chosen the nearest isogenic counterpart as the appropriate comparator. This is in agreement with several internationally adopted guidelines for GMO safety analyses (Codex 2003; AHTEG 2011; EFSA 2011). In order to avoid environmental variation, we performed a field experiment that consisted of several seed lines, out of which individual plants were randomly selected from the inner rows in order to minimize possible field effects derived from border effects or heterogeneities in the field area. For the purpose of addressing plant-to-plant variability within our treatment and control plants, we performed PCA, which demonstrated similarities in the protein quantity between different gels. The PCA analytical report shows that the first three components explained 46% of the variation. The PCA plot showed a clear separation between the GM and non-GM plants in the first component, which explained 33.7% of the total variation (Figure 2). There was also biological variation in the analysis, explaining 12.6%. These results show that the plant-to-plant variability falls in the range of what is usually accepted for proteomic analysis.

Proteomic profile of white Bt-maize (MON810) and its non-GM counterpart

In this study, 2-D DIGE combined with mass spectrometry (MS) was used to develop protein profiles in order to assess new protein products or metabolic differences occurring due to the genetic modification resulting from particle bombardment. The proteomic profile of the field-grown white maize MON810 GM variety PAN 6Q-321B, widely grown in South Africa, was compared to that of its near-isogenic variety PAN 6Q-121, thus mimicking real-world agricultural scenarios (Figure 3; differentially expressed spots listed in Table 2 are indicated in yellow boxes). The amounts of total protein extracted were 10.78 ± 1.19 mg/g (dry weight) for the non-GM samples and 11.11 ± 1.57 mg/g (dry weight) for the GM samples. Average numbers of spots on the 2-D DIGE gels were 710 ± 105 (GM) and 820 ± 95 (non-GM). The amount of protein extracted and the number of detected spots from both treatments did not show statistically significant differences (P = 0.749 and 0.172, respectively). After manual verification of spots, gels were matched hierarchically, in which technical repetitions were compared first, followed by comparison of biological repetitions and then of treatments. Therefore, gels from different treatments were internally matched and only consistent spots were included in the analysis. The average correlation coefficients were 0.91 ± 0.2 for the non-GM sample and 0.92 ± 0.2 for the GM sample, with a total number of matched spots of 514 and 669 for the non-GM and GM gels, respectively (Table 1).
These results indicate a high degree of sensitivity and reproducibility using the 2-D DIGE/MS approach (Choudhary et al. 2020; De Campos et al. 2020).

Differential protein expression patterns in MON810 compared to its near-isogenic line

A comparison between the GM and non-GM plants revealed a total of 16 different proteins that were either up- or down-regulated in one of the varieties at a statistically significant level (P < 0.05). Eleven of the 16 differentially expressed proteins were detectable only in the GM variety, three proteins were completely repressed in the GM variety, and two proteins were down-regulated by factors of 1.5 and 2 (Figure 4; protein expression levels represent the relative protein expression compared to a reference gel, based on the four technical replicates). The 16 proteins were successfully identified with C.I.% values greater than 95% using MALDI-TOF-MS/MS analysis (P < 0.05) (Table 2). Most proteins were specific enzymes closely related to cellular energy homeostasis and reduction-oxidation (redox) metabolism (Figure 5). We found proteins involved in photosynthesis and the synthesis of temporary storage polysaccharide pathways, including adenylate kinase (down-regulated in the GM), bifunctional 3'-phosphoadenosine 5'-phosphosulfate synthetase (repressed in the GM), thylakoid lumenal 19 kDa protein (expressed in the GM) and chlorophyll a-b binding protein 6A (expressed in the GM), all identified in the Zea mays species, plus chloroplast fructose-1,6-bisphosphatase (expressed in the GM) identified in the Oryza sativa Indica Group. Energy-related proteins, such as those involved in photophosphorylation metabolism (ferredoxin-NADP reductase and thylakoid lumenal 19 kDa protein), have previously been observed for Bt maize grown under field conditions in Brazil (Agapito-Tenfen et al. 2011). A few other studies have also investigated the proteome or the transcriptome profile of GM Bt maize (MON810 event), but the functional information on the identified differential proteins and genes was not available, most likely due to the lack of annotations in databases at the time of publication (Coll et al. 2010; Vaclavik et al. 2013; Balsamo et al. 2015). We also observed the cytochrome c oxidase subunit II (over-expressed in the GM), which is involved in aerobic respiration, 2-cys peroxiredoxin BAS1 (repressed in the GM), which catalyzes the transfer of electrons from sulfhydryl residues to peroxides, and the manganese superoxide dismutase enzyme (SOD-3) (expressed in the GM). Superoxide dismutase enzymes (SODs) act as antioxidants and protect cellular components from being oxidized by reactive oxygen species (ROS). ROS can form as a result of various stress conditions, such as drought, injury, pesticides, ozone, plant metabolic activity, nutrient deficiencies, photoinhibition, air and soil temperature, toxic metals, and UV or gamma rays. Therefore, these are important enzymes that are closely linked to stress perception and physiological responses to stress. Disturbances in the redox metabolism of transgenic Bt maize have been previously observed for GM maize lines expressing cry1Ab/cry2Aj transgenes (Hao et al. 2017). Field-grown Bt maize also showed a different proteomic profile for other antioxidant enzymes, including the 2-cys peroxiredoxin and APX1 (cytosolic ascorbate peroxidase) (Agapito-Tenfen et al. 2011). In addition, we found isoforms for five identified proteins (Table 2).
These isoforms are photosynthesis-related proteins present in the chloroplast, some with specific functions in phosphoric ester hydrolase activity, calcium ion binding and oxidoreductase activity. Although protein identity matches were highly confident, we observed differences between theoretical and experimentally observed molecular weight and pI. Similarly, when analyzing GM maize seed lines, Zolla et al. (2008) found that a number of seed storage proteins (such as globulins and vicilin-like embryo storage proteins) exhibited truncated forms with a molecular mass significantly lower than the native ones. Our results also showed lower molecular weights for photosynthesis-related, signal transduction and pathogenesis-related proteins. As for pI changes, most of our identified proteins showed lower pI than the theoretical values. Considering that we have high-resolution gels with little or no smear, and that the protein identity matches are highly confident, the low pI values may be evidence of a post-translational modification, cleavage or alternative-splicing event. Moreover, no transgenic protein products (Cry1Ab) deriving from the transgene inserted into the MON810 event were revealed in our 2-D DIGE gels. We hypothesize that the extraction buffer at pH 8.0 does not allow solubilization of Cry1Ab, which is well known to be solubilized at around pH 11 (Zolla et al. 2008; Balsamo et al. 2011). Genes are not randomly distributed in the genome, and their coordinated expression can be regulated by many factors at virtually any step of gene expression, from transcriptional initiation, to RNA processing, to the post-translational modification of a protein. Nevertheless, in the case of a transgenic organism, the insertion and expression of a transgene can also be a source of endogenous gene modulation (Latchman 2005). Similarly, different copies of an introduced gene which integrate into the host chromosomes at different positions are expressed at very different levels, suggesting that gene activity is being influenced by adjacent chromosomal regions (Li et al. 1999). We have included the genome location of each of the differentially expressed proteins found in this study in our analysis (Table 2). Interestingly, the genome locations of the differentially expressed genes varied, thus showing the influence of the MON810 transgene integration site on other genomic locations, likely due to changes in chromatin structure (e.g., heterochromatin) or to inserted sequences acting as transcriptional regulation elements (e.g., enhancers, strong promoters) (Weising et al. 1988). Independent researchers have sequenced the flanking regions at both ends of the insert and found results that matched sequences in different chromosomes. Holck et al. (2002) found that the maize endogenous DNA showed high similarity, 88% across the 440 bp next to the 5' end junction, with the Zea mays chromosome 4 22 kDa alpha zein gene cluster region (accession number AF105716). Rosati et al. (2008) found 99% identity of the 3' end junction with the chromosome 5 BAC clone ZMMBBc0409B05 (accession number AC185641), the latter having 82% identity with an Oryza sativa locus coding for a putative HECT E3 ubiquitin ligase. These results suggest that the integration of the MON810 vector has probably caused a complex recombination event, as the 5' and 3' end regions do not correspond to the same genomic locus. We identified six proteins that were located on chromosomes 4 and 5, the putative location of the MON810 insert.
Other proteins were found to be located on all maize chromosomes, with the exception of chromosomes 7 and 10 and the circular chloroplast chromosome. Apparently, these genes are not clustered together but dispersed in the maize genome, and their expression does not seem to be controlled by a common regulatory process. These observations should, however, be further investigated.

The allergenic potential of the protein isoform in white Bt-maize

We found a unique spot (Match ID 529), corresponding to the pathogenesis-related protein class 1 (PR1), that was expressed only in the GM variety. Annotations for this protein in the ExPASy tool related it to a protein of 160 amino acid residues with a 17.1 kDa mass belonging to the Bet v-1 family of proteins. The Bet v-1 allergens have a signature represented by a 7-element fingerprint, which was derived from an initial alignment of motifs drawn from short conserved regions spanning the full alignment length of 45 sequences (Attwood et al. 2013). Zea mays PR1 has not been experimentally characterized, but it is 85.7% identical in amino acid sequence to the well-characterized PR1 from Sorghum bicolor (Q41298_SORBI). An epitope search in the allergen database for food safety (http://allergen.nihs.go.jp) using Z. mays PR1 as the query revealed high matches to two known allergens: PR1 from Asparagus officinalis and Pru av1 from Prunus avium. A. officinalis PR1 was a 51% identity match with Z. mays PR1 and P. avium Pru av1 was a 42.4% match (Table 3). The Allergen Database for Food Safety is based on a Joint FAO/WHO Expert Consultation on Foods Derived from Biotechnology report, which proposed that cross-reactivity between a query protein and a known allergen has to be considered when there is more than 35% identity in the amino acid sequence of the expressed protein, using either a window of 80 amino acids and a suitable gap penalty or the identity of 6 contiguous amino acids. The epitope search results in relation to PR1 from A. officinalis produced a score of 173, 51% identity and an E-value of 1e-42. The search result related to the prediction of allergenicity produced a match with Pru av1 from P. avium with a score of 481.4, 42.4% identity and an E-value of 4.8e-22. Spangfort et al. (2003) studied the IgE-binding epitope of Bet v-1 from Betula sp. and verified that the epitope occupies 10% of the molecular surface area of the protein. These authors found that it is clearly conformational and has a sequential motif around residues 42-52. Our results show a full match with the same sequential motif (residues 43-50) for the Pru av1 data. Further, the peptide sequence obtained from our MS/MS results produced a true match to the epitope region. Bet v-1-like proteins, to which the PR proteins are related, are common pollen and plant food allergens whose allergenic potential has been widely described, including that of their possible isoforms (Reuter et al. 2005). In a recent review, Schenk et al. (2010) show evidence of variation in the immune reaction to different isoforms of Bet v-1 allergens, as the allergenicity of pollen from a particular biological source is not determined by the total allergen content alone, but also by the quantities of the different isoforms and their allergenic potential. To the best of our knowledge, there is no investigation of transgenic maize leaf allergens reported in the literature.
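The FAO/WHO screening rule referenced above can be made explicit with a short sketch. The two helper functions below implement the two criteria in a simplified, ungapped form (more than 35% identity within an 80-amino-acid window, or six contiguous identical residues); real searches such as the ADFS query use alignment tools with gap penalties, so this is only an illustration of the decision rule, and the inputs are assumed to be plain one-letter amino acid strings.

```python
def window_identity_hit(query, allergen, window=80, threshold=0.35):
    """Flag potential cross-reactivity: >35% identity in any 80-residue window (ungapped sketch)."""
    n = min(len(query), len(allergen))
    w = min(window, n)
    for start in range(0, n - w + 1):
        matches = sum(q == a for q, a in zip(query[start:start + w], allergen[start:start + w]))
        if matches / w > threshold:
            return True
    return False

def contiguous_hit(query, allergen, k=6):
    """Flag potential cross-reactivity: any run of 6 contiguous amino acids shared exactly."""
    kmers = {query[i:i + k] for i in range(len(query) - k + 1)}
    return any(allergen[j:j + k] in kmers for j in range(len(allergen) - k + 1))

# Hypothetical usage with placeholder sequence strings:
# flagged = window_identity_hit(zea_pr1_seq, pru_av1_seq) or contiguous_hit(zea_pr1_seq, pru_av1_seq)
```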
A recent study revealed differential reactivity of plasma from two maize-allergic subjects against transgenic (MON810) versus non-transgenic grain protein extracts; however, it was not possible to identify the putative allergens (Fonseca et al. 2012). Although the authors were not able to observe differential expression of the tested allergen genes in GM versus non-GM varieties, they did confirm the reactivity of chitinase and endochitinase A proteins in maize samples, both belonging to the group of "pathogenesis-related proteins". Nonetheless, the potential allergen isoform identified in this study (PR1) shows high homology to the major cherry allergen (Pru av1), and 97 of 101 (96%) patients with birch pollinosis and oral allergy syndrome to cherry had IgE against Pru av 1 (Scheuer et al. 2001). It is, therefore, highly recommended that further studies be performed in order to investigate the expression of PR1 and its allergenic potential. Coll et al. (2009) investigated, by transcriptomics, field-grown maize leaves at the vegetative two-leaf (V2) and the tasseling (VT) stages and compared those results to a previous study with the same varieties at the V2 stage under in vitro conditions. These authors found four transcripts out of 36 that were differentially expressed between GM (MON810 event) and its conventional counterpart. These were identified as a pathogenesis-related protein (PR-9), a trypsin inhibitor gene, Myb-like protein E1 and one uncharacterized RNA-binding protein. It is interesting to note that PR-9 was overexpressed in the GM plants in all stages and conditions analyzed, but the fold variation was greater for V2 plants grown under field conditions. PR-9 and PR-10 represent two protein classes, the first being a peroxidase-like and the second a ribonuclease-like protein. Although these proteins participate in different metabolic pathways, both are produced in plants in the event of a pathogen attack. The distinct expression levels between GM and non-GM plants reveal that the presence of an insecticidal protein (Bt) might challenge the plant-pathogen response differentially. Evidence to support this idea has been provided by the study of Coll et al. (2011), which investigated the proteome of GM maize grains grown under field conditions. Coll et al. (2011) found 4 and 6 differentially expressed proteins in two different commercialized maize hybrids in Spain. These authors concluded that the differential expression occurred in a variety-specific manner and called attention to the fact that, depending on the experimental conditions applied, the analysis concerns a defined window in terms of pI and MW and is restricted to soluble and abundant proteins. In principle, it is possible that the allergenic potential of GMOs may be increased due to the introduction of potential foreign allergens, to potentially upregulated expression of allergenic components caused by the modification of the wild-type organism, or to different means of exposure. It has been suggested for GMO risk analysis that experimental comparison of the wild-type organism with the whole GMO regarding their potential to elicit reactions in allergic individuals and to induce de novo sensitizations should be investigated along with the current evaluation of physicochemical properties and sequence homology with known allergens (Spok et al. 2005).
Therefore, we strongly suggest that further studies be performed in order to investigate the nature and the allergenic potential of the PR1 protein isoform found in our study.

Relevance of the use of profiling techniques in comparative risk assessments and contributions to method development

Proteomics and the use of bi-dimensional gel electrophoresis have long been tested as analytical tools that can complement existing risk assessment methods. Bi-dimensional gels have the capacity to characterize and distinguish varieties and genotypes, to identify possible allergens present in a sample, and to detect possible post-translational modifications (Zolla et al. 2008). Other studies have revealed that transgenic plants react differently to environmental conditions compared to their near-isogenic counterparts. In these studies, differences in gene expression, in both proteins and metabolites, are not a product of the transgene per se. The differentially expressed products are affected by the genetic manipulation, where the process of transformation seems to cause insertional or pleiotropic changes in the maize proteome. Several other studies have investigated the proteome of transgenic maize grains due to concerns about human and animal consumption. However, when a GMO is released into the environment, humans and animals are in contact with it through several different exposure routes, which are not always related to grains alone. In addition, at the hazard identification step, any genotypic and phenotypic differences in the GMO should be scrutinized for their potential safety effect. According to the Guidance on Risk Assessment of Living Modified Organisms (AHTEG 2011), the "exposure assessment" aims to determine whether the receiving environment will be exposed to a living modified organism (LMO) that has the potential to cause adverse effects, taking into consideration the intended transfer, handling and use of the LMO, and the expression level, dose and environmental fate of transgene products. In our study, field-grown maize was sampled in order to investigate the proteome of maize leaves under real field conditions. Therefore, our approach provides an important insight into environmental risk assessments (ERA) with regard to possible impacts on herbivores and/or other pathogen communities that feed on the transgenic maize leaf. Current ERA practice for GM maize for food and feed or cultivation purposes in the European Union has been challenged due to the assumption that GM plants consist of two parts that function in a linear additive fashion: the crop and the novel GM transgene product (Dolezel et al. 2011). This assumption is based on the substantial equivalence concept; when no statistically relevant compositional changes are detected, the crop plant is declared safe and consequently only the added transgene product is subject to testing in the environmental risk assessment. It is important to note that our findings might be specific to the samples used, especially because these proteins are highly environmentally dependent. Therefore, case-by-case studies should be performed in order to provide reliable results for a specific type of risk assessment. The detection of changes in protein profiles does not present a safety issue per se; however, further studies should be conducted in order to address the biological relevance of such changes. Finally, regulatory agencies may take proteomic studies into account among the required analyses.
Conclusions

In conclusion, our results showed that the GM Bt maize plants grown in South Africa clustered together and apart from the non-GM genotypes when analyzed by PCA, which explained circa 34% of the variation in the dataset. In addition, we obtained evidence of possible synergistic and antagonistic interactions following Bt transgene insertion into the GM maize genome. This conclusion is based on the observation of several metabolic processes that were disturbed in the GM samples alone. These proteins were mainly assigned to the energy/carbohydrate metabolism and were also found in previous studies. In addition, a potential allergenic protein, PR1, was observed in the GM Bt samples, for which the epitope was sequenced by MS/MS. Such observations indicate that the genome changes in Bt GM maize may influence the overall gene expression in ways that may have relevance for hazard identification assessments.

Funding

This work, including experimental design, sample collection, data analysis and interpretation, and manuscript writing, was supported by the Environmental Biosafety Cooperation Project between South Africa and Norway coordinated by the South African National Biodiversity Institute. CAPES and CNPq provided scholarships to S.Z.A., M.P.G. and R.O.N.
Serum Chemerin Concentrations Associate with Beta-Cell Function, but Not with Insulin Resistance in Individuals with Non-Alcoholic Fatty Liver Disease (NAFLD)

The novel adipokine chemerin has been related to insulin-resistant states such as obesity and non-alcoholic fatty liver disease (NAFLD). However, its association with insulin resistance and beta-cell function remains controversial. The main objective was to examine whether serum chemerin levels associate with insulin sensitivity and beta-cell function independently of body mass index (BMI), by studying consecutive outpatients of the hepatology clinics of a European university hospital. Individuals (n=196) with NAFLD were stratified into persons with normal glucose tolerance (NGT; n=110), impaired glucose tolerance (IGT; n=51) and type 2 diabetes (T2D; n=35), and the association between serum chemerin and measures of insulin sensitivity and beta-cell function, as assessed during fasting and during an oral glucose tolerance test (OGTT), was measured. Our results showed that serum chemerin positively associated with BMI (P=0.0007) and C-peptide during the OGTT (P<0.004), but not with circulating glucose, insulin, lipids or liver enzymes (all P>0.18). No BMI-independent relationships of chemerin with fasting and OGTT-derived measures of insulin sensitivity were found (P>0.5). Chemerin associated positively with fasting beta-cell function as well as with the OGTT-derived insulinogenic index IGI_cp and the adaptation index after adjustment for age, sex and BMI (P=0.002-0.007), and inversely with the insulin/C-peptide ratio (P=0.007). Serum chemerin related neither to the insulinogenic index IGI_ins nor to the disposition index. In conclusion, circulating chemerin is likely linked to enhanced beta-cell function but not to insulin sensitivity in patients with NAFLD.

Introduction

Chemerin is highly expressed in white adipose tissue, liver and lungs, is secreted as inactive prochemerin and binds after activation to the chemokine-like receptor 1 (CMKLR1) [1,2]. Binding of chemerin to CMKLR1 in immune cells and adipose tissue stimulates chemotaxis at sites of inflammation [3,4]. Circulating levels of chemerin have also been associated with non-alcoholic fatty liver disease (NAFLD) [13,29,30], which comprises a broad spectrum of disorders ranging from simple fatty liver to nonalcoholic steatohepatitis (NASH) and cirrhosis [31]. NAFLD is closely linked to obesity and insulin resistance, which frequently co-exist with impaired glucose tolerance (IGT) or type 2 diabetes [32,33]. As chemerin may modulate insulin resistance and inflammatory responses [34], we examined its role in patients with NAFLD. More specifically, we tested (i) whether chemerin levels differentially associate with insulin sensitivity and/or beta-cell function, using both fasting levels and dynamic changes of glucose and insulin after oral glucose loading, and (ii) if so, whether these associations occur independently of BMI in patients with NAFLD and different degrees of glucose tolerance.

Study participants

We studied consecutive patients who presented with elevated serum aminotransferases at the outpatient Hepatology Clinics of Attikon University General Hospital in Athens, Greece, between June 2009 and May 2012, and who were diagnosed with NAFLD. The diagnosis of NAFLD was based on the presence of increased liver transaminases, i.e. alanine transaminase (ALT) and gamma-glutamyl transpeptidase (γ-GT) 1.5-2 times above the upper limit of the normal range, along with typical hepatic fat infiltration, i.e.
"bright liver" or hyperechogenic appearance employing abdominal ultrasound and the exclusion of other possible causes of liver disease, including alcoholic liver disease (alcohol consumption exceeding 20 g/day), adverse drug reactions, viral hepatitis, autoimmune disorders and hereditary diseases affecting the liver [35]. Upon inclusion in the study, height and weight from each patient were measured. Systolic and diastolic blood pressure values were recorded. After overnight fasting, the patients with NAFLD underwent fasting blood sampling and a 75-g oral glucose tolerance test (OGTT), during which blood samples were obtained through an indwelling peripheral vein cannula at 0, 30, 60, 90 and 120 min to measure plasma glucose, serum insulin and serum C-peptide concentrations. Patients with known history of diabetes or a fasting plasma glucose 126 mg/dl did not undergo an OGTT and were therefore excluded from the study. A total of 197 patients were enrolled in the study and classified according to the criteria by the American Diabetes Association [36] into 110 individuals with normal glucose tolerance (NGT), 52 with impaired glucose tolerance (IGT) and 35 with type 2 diabetes (T2D). Ethics statement The institutional review board of "Attikon" University General Hospital approved this study and the patients gave their informed consent in writing. Laboratory analyses Serum cholesterol (total, LDL, HDL), triglycerides, aspartate transaminase (AST), ALT and γ-GT were assessed using a Cobas 8000 analyzer (Roche, Basel, Switzerland). Plasma glucose was measured using the glucose oxidase method (YSI, Yellow Springs Instruments, Yellow Springs, CO). Serum C-peptide and insulin levels were quantified using the respective radioimmunoassays from Millipore (St. Charles, MO). Serum chemerin was measured by an enzymelinked immunosorbent assay (Chemerin Human ELISA; Biovendor, Brno, Czech Republic) with intra-and inter-assay coefficients of variation of 3.4% and 6.4%, respectively. Calculation of insulin sensitivity and secretion In the fasted state, insulin sensitivity was assessed from QUICKI = 1/[log (insulin) + log (glucose)] [38], while beta-cell function was estimated with the ratio of insulin to glucose concentration. During the OGTT, total and incremental areas under the time curves (AUC) of plasma glucose, insulin and C-peptide concentrations were calculated using the trapezoidal rule. The oral glucose insulin sensitivity (OGIS) represents a measure of dynamic insulin sensitivity as previously described and validated against the hyperinsulinemic-normoglycemic clamp [37,38]. Beta-cell function was assessed from insulinogenic indices (IGI) [39]. The IGI_cp, given as the ratio of the difference between C-peptide at 30 min and at fasting versus the analogous difference for glucose at 30 min and at fasting, provides an empirical index mirroring beta-cell function at portal level. Using insulin instead of C-peptide yields the classic IGI_ins [39]. The beta-cell adaptation index reflects the mechanism, by which the beta-cell modulates insulin release in response to changes of insulin resistance, and is given as OGIS times IGI_cp, while the disposition index is given as OGIS times IGI_ins [38]. Statistical analysis The characteristics of the study sample are given as percentages for categorical variables, mean and standard deviation (SD) for continuous variables with normal distribution and median (25 th and 75 th percentiles) for continuous variables without normal distribution. 
The Kolmogorov-Smirnov test was used to test for normal distribution of the data. Group comparisons for these variables were performed using Fisher's exact test, the t-test or ANOVA with Dunnett's multiple comparison test (for two or more groups), and the Kruskal-Wallis test with Dunn's multiple comparison test for more than two groups, respectively. Correlations were assessed as partial Spearman correlation coefficients adjusting for the variables indicated. P values lower than 0.05 were considered statistically significant. Statistical analyses were performed using GraphPad Prism version 6.02 (GraphPad Software, La Jolla, CA) and SAS version 9.3 (SAS Institute, Cary, NC).

Table 1 presents the basic characteristics of the study population stratified by glucose tolerance status. Except for younger age in NGT compared with IGT and T2D, the groups did not differ with respect to sex, BMI, serum lipids or transaminases. Individuals with IGT and T2D had higher fasting, 2-h and AUC glucose than individuals with NGT (Table 1). Fasting insulin was higher in T2D than in NGT, while other insulin- and C-peptide-related parameters were not different between groups. QUICKI was lower in T2D than in IGT and NGT, while OGIS was lower in both T2D and IGT than in NGT (Table 1). Fasting beta-cell function did not differ between the groups (Table 1). Among the OGTT-derived estimates of beta-cell function, IGT and T2D had lower IGI_cp, IGI_ins and disposition index than NGT, whereas the adaptation index was decreased only in T2D (Table 1).

Associations of chemerin with age, sex and metabolic characteristics

In a joint model containing chemerin, age, sex and BMI, both age and BMI positively associated with serum levels of chemerin, and the sex difference remained significant for chemerin (Table 2). In contrast, no significant associations were found in age-, sex- and BMI-adjusted analyses between chemerin levels and lipids (triglycerides, total, LDL, HDL cholesterol) or transaminases (ALT, AST, γ-GT) (Table 2). In age- and sex-adjusted analyses, serum chemerin levels positively associated with 2-h glucose and fasting insulin, but not with other glucose- and insulin-related variables. After further adjustment for BMI, these significant associations disappeared (Table 2), while serum chemerin associated with C-peptide levels independently of BMI. Serum chemerin levels related neither to QUICKI nor to OGIS after adjustment for age, sex and BMI. In contrast, serum chemerin associated positively with fasting beta-cell function, IGI_cp as well as the adaptation index based on C-peptide, independently of BMI, but not with IGI_ins and the disposition index. Serum chemerin also inversely associated with the ratio of insulin to C-peptide (Table 2). Stratifying the study population into the subgroups with NGT and IGT/T2D did not affect the associations between serum chemerin and metabolic parameters in both subgroups (S1 Table).

Discussion

This study shows that circulating chemerin concentrations not only positively associate with BMI in patients with NAFLD, but are also higher in individuals with IGT or type 2 diabetes compared to NGT in this study population. Serum chemerin further exhibited BMI-independent associations with beta-cell function, but not with insulin sensitivity. Chemerin has been proposed as a novel biomarker of insulin resistance related to obesity and T2D, but it was unknown whether the observed associations between chemerin levels and metabolic features are mainly mediated by obesity or occur independently of obesity.
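As one way to see what the covariate-adjusted correlations reported above (Table 2) involve, the sketch below approximates a partial Spearman correlation by rank-transforming the variables, regressing the covariate ranks out of the two variables of interest, and correlating the residuals. This is an illustrative approximation with made-up variable names (chemerin, igi_cp, age, sex, bmi), not the exact SAS procedure used by the authors.

```python
import numpy as np
from scipy import stats

def partial_spearman(x, y, covariates):
    """Spearman correlation between x and y after removing linear effects of covariate ranks."""
    def residuals(v):
        ranks = stats.rankdata(v)
        design = np.column_stack([np.ones(len(v))] + [stats.rankdata(c) for c in covariates])
        beta, *_ = np.linalg.lstsq(design, ranks, rcond=None)
        return ranks - design @ beta
    rx, ry = residuals(x), residuals(y)
    return stats.pearsonr(rx, ry)   # correlation of rank residuals and its p-value

# Hypothetical usage: chemerin vs. IGI_cp adjusted for age, sex (coded 0/1) and BMI.
# r, p = partial_spearman(chemerin, igi_cp, covariates=[age, sex, bmi])
```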
In particular, putative associations with T2D are poorly understood because it is not clear whether chemerin primarily relates to insulin resistance or to beta-cell dysfunction. Our study extends previous studies by assessing insulin resistance based on dynamic measurements from an OGTT rather than relying on surrogate markers of insulin resistance derived from fasting parameters only. Although we found that chemerin and QUICKI were correlated, there was no association when OGIS was used as a more precise measure of insulin resistance. Moreover, the association with QUICKI was mainly explained by BMI. These data are in line with several studies that also failed to observe a BMI-independent association between chemerin and insulin resistance assessed by HOMA-IR [6,11,14,19,26]. Two previous studies analyzed the association between chemerin and the OGTT-based Matsuda insulin sensitivity index (ISI) [11,26] with contradictory results: the first study, in Japanese patients with type 2 diabetes, found a significant inverse association after adjustment for age, sex and BMI [11], whereas the inverse association between chemerin and Matsuda ISI in children from Germany was mainly explained by age and BMI [26]. Evidence for a direct BMI-independent association between chemerin and insulin resistance comes from two studies using hyperinsulinemic-normoglycemic clamps in non-obese normoglycemic men [17] and in a cohort that was very heterogeneous with respect to age, BMI and insulin resistance [27]. The reason for the discrepancy between the aforementioned findings is currently unclear. It is important to note that mechanistic studies also revealed opposing effects of chemerin on insulin action. Kralich et al. showed an increase in insulin-stimulated glucose uptake and insulin receptor substrate-1 tyrosine phosphorylation after chemerin treatment [40], whereas Takahashi et al. reported a decrease in insulin-stimulated glucose uptake [41]. While exogenous administration of chemerin exacerbates glucose intolerance and decreases tissue glucose uptake in obese diabetic mice [42], chemerin-deficient mice are also glucose intolerant [28]. The absence of any correlation between serum chemerin and insulin resistance in the present study could be due to our study population consisting of patients with NAFLD. Several studies have indicated that liver disease may affect circulating chemerin levels [42]. Previous studies reported higher chemerin levels in patients with clinical or biopsy-proven NAFLD [13,29,30]. In the present study, patients with IGT and T2D had higher chemerin levels than glucose-tolerant patients with NAFLD, suggesting a specific role for insulin action or secretion. Nevertheless, a recent study found elevated mRNA levels of both chemerin and its receptor, CMKLR1, in the human liver, with greater expression in patients with NASH [43]. The increase in chemerin in overweight patients with NAFLD may be more pronounced than the effects of insulin resistance per se and may thereby obscure or blunt any association with insulin resistance. On the basis of the previous data and our study, a future study comprising persons without and with NAFLD, carefully matched for glucose tolerance status, will be necessary to address the question of whether the presence of NAFLD modifies the association between chemerin and insulin resistance.
In contrast to insulin resistance, our study clearly showed that chemerin levels associate positively with beta-cell function during fasting and under dynamic conditions, as assessed with the insulinogenic index IGI_cp and the adaptation index. These data are novel, as two previous studies used only HOMA-B as a surrogate marker of fasting beta-cell function. In these studies, chemerin did not relate to HOMA-B in cohorts from Japan [20] and Mauritius [3] after adjustment for age, sex and BMI. Data from studies using dynamic measurements of beta-cell function have not been published so far. In addition to the fact that dynamic measures are more sensitive in detecting changes in insulin secretion, the difference between the present and the previous studies could result from the study population, i.e. Caucasian patients with NAFLD. The generally higher degree of insulin resistance in NAFLD may explain a compensatory, excessive increase in insulin secretion during the OGTT, which is typical for IGT and for early or newly diagnosed patients with T2D [44]. On the other hand, it is conceivable that chemerin per se supports insulin secretion under these conditions. Recent studies on chemerin in mouse models serve to support our human data [28]. Islets from chemerin-deficient mice exhibit impaired glucose-stimulated insulin secretion (GSIS), whereas transgenic mice overexpressing chemerin have increased GSIS. Although these data support a beneficial effect of chemerin on beta-cell function, additional preclinical and clinical studies are required to elucidate the interplay between chemerin and insulin secretion in more detail. The use of dynamic measurements from an OGTT to simultaneously assess insulin sensitivity and beta-cell function represents the main strength of our study. Nevertheless, some limitations should be taken into account. We included patients diagnosed with NAFLD using transaminases and liver ultrasound rather than employing liver biopsy. Although liver biopsy is the gold standard procedure to diagnose NAFLD [45,46], it is not generally used in mild forms such as those present in our cohort. Furthermore, we recruited patients from a specialized clinic and did not include persons without NAFLD. Thus, we cannot generalize our findings to the general population or to other ethnic groups, but we were able to detect alterations within well-defined, closely matched groups with different degrees of glucose tolerance. In conclusion, this study did not observe a BMI-independent correlation between increased circulating chemerin concentrations and insulin resistance in patients with NAFLD. On the other hand, we found a robust association between elevated chemerin and increased beta-cell function, which points towards a novel beneficial role of chemerin for dynamic insulin secretion in the context of NAFLD and type 2 diabetes.

Supporting Information

S1 Table. Associations between Serum Chemerin Levels and Anthropometric and Metabolic Variables in the Subgroups with NGT and IGT/T2D. (DOCX)
Do Oceanic Convection and Clathrate Dissociation Drive Europa's Geysers?

Water vapor geysers on Europa have been inferred from observations made by the Galileo spacecraft, the Hubble Space Telescope, and the Keck Observatory. Unlike the water-rich geysers observed on Enceladus, Europa's geysers appear to be an intermittent phenomenon, and the dynamical mechanism permitting water to sporadically erupt through a kilometers-thick ice sheet is not well understood. Here we outline and explore the hypothesis that the Europan geysers are driven by CO$_2$ gas released by dissociation and depressurization of CO$_2$ clathrate hydrates initially sourced from the subsurface ocean. We show that CO$_2$ hydrates can become buoyant to the upper ice-water interface under plausible oceanic conditions, namely, if the temperature or salinity conditions of a density-stratified two-layer water column evolve to permit the onset of convection that generates a single mixed layer. To quantitatively describe the eruptions once the CO$_2$ has been released from the hydrate state, we extend a one-dimensional hydrodynamical model that draws from the literature on volcanic magma explosions on Earth. Our results indicate that for a sufficiently high concentration of exsolved CO$_2$, these eruptions develop vertical velocities of $\sim$700 m s$^{-1}$. These high velocities permit the ejecta to reach heights of $\sim$200 km above the Europan surface, thereby explaining the intermittent presence of water vapor at these high altitudes. Molecules ejected via this process will persist in the Europan atmosphere for a duration of about 10 minutes, limiting the timescale over which geyser activity above the Europan surface may be observable. Our proposed mechanism requires Europa's ice shell thickness to be d $\lesssim$ 10 km.

Introduction

Inferences from both photographs and magnetometer measurements obtained by NASA's Galileo orbiter generated a consensus that Europa harbors a subsurface ocean (e.g., Carr et al. 1998; Greeley et al. 1998; Kivelson et al. 2000). The ocean is presumed to be O(100) km deep and is covered by an ice shell that is likely O(10) km thick (e.g., Carr et al. 1998; Hussmann et al. 2002; Nimmo & Pappalardo 2016). The prospects inherent within a vast sea of geothermally warmed water have long drawn the attention and sparked the speculation of astrobiologists (e.g., Hand et al. 2009). Possible evidence of liquid water in the near-surface environment was obtained when the Hubble Space Telescope detected the presence of hydrogen and oxygen in Europa's tenuous atmosphere (Hall et al. 1995; Roth et al. 2014). In isolated instances, water vapor geysers have been inferred from infrared spectra (Paganini et al. 2019), and hints of geysers have been found in the magnetic (Jia et al. 2018), chemical (Roth et al. 2014), and proton (Huybrighs et al. 2020) signatures of Europa's atmosphere. In particular, excesses of hydrogen and oxygen were detected with the Hubble Space Telescope at distances of h $\sim$ 200 km above the surface of the D $\sim$ 3120 km satellite (Roth et al. 2014). Treating these water-derived products as ballistic projectiles, a zeroth-order estimate for their ejection velocity from Europa's surface is then $v \sim \sqrt{2gh} \approx 700$ m s$^{-1}$, given Europa's g = 1.31 m s$^{-2}$ surface gravitational acceleration. Venting of water is not unique to Europa in the outer solar system.
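A quick numerical check of the ballistic estimate above, neglecting drag (reasonable in Europa's tenuous atmosphere) and treating the gravitational acceleration as constant with altitude:

```python
import math

g = 1.31       # m s^-2, Europa's surface gravitational acceleration
h = 200e3      # m, altitude at which the water-derived products were detected

v = math.sqrt(2 * g * h)                    # launch speed needed to coast up to height h
print(f"required ejection speed ~ {v:.0f} m/s")   # ~724 m/s, i.e. roughly 700 m/s
```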
The presence of geysers over the south pole of Enceladus, a smaller ice-covered moon of Saturn, has also been firmly established from absorption spectra (Hansen et al. 2006), dust measurements (Spahn et al. 2006), mass spectra (Waite et al. 2006), and photographic imaging (Porco et al. 2006) by NASA's Cassini spacecraft. The geysers of Enceladus are thought to be caused by ice sheet fissures opened via tidal stresses (e.g., Hedman et al. 2013), exposing water, and possibly other volatile species, to the near-vacuum of the satellite's exospheric environment (Hurford et al. 2007). Unlike the consistent geysers of Enceladus (e.g., Hansen et al. 2006), the Europan geysers display only intermittent and transient observability. For example, in a series of observations by the Keck Observatory over 17 dates, only one date produced an emission spectrum likely to be associated with a water vapor geyser (Paganini et al. 2019). This isolated event was interpreted to arise from the release of $\sim$2 $\times$ 10$^{6}$ kg of water into the thin Europan atmosphere (e.g., Paganini et al. 2019). Thus, the physical mechanism driving the ejection of water from Europa appears to be distinct from what is occurring on Enceladus (e.g., Kieffer et al. 2006; Nimmo et al. 2007). The precise thickness of Europa's ice shell, through which the geysers may erupt, is not well known, and arguments are made for both "thick" and "thin" shells (e.g., Pappalardo et al. 1998; Greenberg et al. 1999; Hoppa et al. 1999; McKinnon 1999; Schenk 2002). For a thick ice shell, with an approximate depth of 10 km or more, the resulting temperature gradient between the Europan surface and the much warmer ocean may produce convective cells within the ice cover (e.g., McKinnon 1999; Schenk 2002). In contrast, for a thin ice shell, with a thickness of around 10 km or less (e.g., Schenk 2002), heat transport within the shell will be dominated by conduction (e.g., McKinnon 1999). The thickness, in turn, affects the ice shell's propensity to fracture; it is easier to generate cracks in a thin, brittle shell. Multiple mechanisms for generating fractures have been suggested, with tidal stresses attracting particular note (e.g., Lee et al. 2005). Other ideas include the suggestion that overpressure generated between the liquid water and the ice shell as liquid water freezes into ice may allow cracks to form in the ice shell (Manga & Wang 2007). Cracks alone, however, fall short of explaining the energetic geysering mechanism that appears to be occurring (e.g., Crawford & Stevenson 1988). Several known terrestrial phenomena bear important similarities to the mechanisms that may be responsible for driving the cryovolcanic eruptions on Europa. Particularly apt analogies can be drawn from the dynamics of buoyancy-driven flows and pressure-mediated vapor exsolution that drive volcanic eruptions from Earth's interior. In a volcano, buoyant magma decompresses as it rises, releasing gas that further increases the magma's buoyancy (e.g., Woods 1995). Rising mixtures that approach the surface can then erupt from lithospheric vents. Similarly, kimberlite (CO$_2$-rich and occasionally diamondiferous melt sourced from the mantle) emplacement is generated by magma ascending through cracks in the Earth's lithosphere at high speeds. This process, which originates from depths of 150 km, permits self-perpetuating upward crack propagation through the lithosphere (e.g., Sparks 2013).
When the pressure drops below a critical value, volatiles in the magma can exsolve in the crack tip, powering surface eruptions (Lister & Kerr 1991). Similarly, the presence of volatile content dissolved in lake water may be responsible for limnic eruptions. Consider Lake Nyos in 1986: this lake in Cameroon, stratified in carbon dioxide, spontaneously released of order V = 1 km$^{3}$ of CO$_2$ gas (Kling et al. 1987). There are a variety of theories that describe the mechanisms precipitating this disaster, largely focusing on a buoyancy-driven flow propelled by the exsolution of carbon dioxide stored in the lake (e.g., Giggenbach 1990; Schmid et al. 2004). Finally, methane clathrate hydrates (compounds in which molecules of methane are trapped within cage-like H$_2$O crystal structures) produce crater-forming eruptions on Earth. These hydrates are stable at high pressure but will dissociate when pressure and temperature conditions permit (e.g., Buffett 2000), leading to a potentially explosive release of methane gas. For example, hydrate-driven explosions are thought to have occurred in permafrost (frozen soil; Vlasov et al. 2018) and in undersea domes in high-latitude regions (Andreassen et al. 2017). We draw from these intensively studied terrestrial processes to propose an eruptive process on Europa driven by exsolved carbon dioxide. We propose that the gas derives from dissociated carbon dioxide clathrates. Our hypothesis builds on previous research that has speculated that cryovolcanic events driven by volatiles, and sometimes clathrates, may lead to geyser activity on both Europa and Enceladus (e.g., Stevenson 1982; Crawford & Stevenson 1988; Kieffer et al. 2006; Matson et al. 2012). Specifically, we quantitatively develop a scenario in which buoyancy-driven oceanic flows permit CO$_2$ clathrates to migrate upward through cracks in Europa's ice shell to depths at which they become unstable. As discussed in detail in Section 3, for the process to work, the ice shell must be thin, with d $\lesssim$ 10 km. At P $\sim$ 1 MPa pressures, associated with d $\sim$ 1 km of Europan ice overburden, the clathrates will dissociate, producing pressurized CO$_2$ gas. This generates an explosion of liquid water and CO$_2$ gas that fractures the ice and permits geysering with surface exit speeds of up to $\sim$700 m s$^{-1}$, a velocity that is in line with the inferred presence of water vapor hundreds of kilometers above Europa's surface (e.g., Roth et al. 2014). Our presentation is organized in the following manner. In the next section, we describe the conditions under which a two-layer Europan ocean with volatile-rich clathrates trapped at the interface between the layers can overturn, leading to clathrate buoyancy in the mixed column. Under sufficiently low-pressure conditions, the clathrates will dissociate, generating the eruptive process described above. In Section 3, we describe a one-dimensional hydrodynamical model for this eruptive process on Europa. The model indicates that the clathrate-eruption mechanism we propose can generate velocities appropriate to describe observations of water vapor above Europa's surface. Using this approach, we evaluate the timescales for atmospheric drawdown of carbon dioxide in Europa's atmosphere. In Section 4, we discuss how our hypothesis allows for future testable predictions from the Europa Clipper mission.

A Two-layer Ocean Model

We start by outlining the conditions under which a two-layer Europan ocean can exist, as described in Zhu et al. (2017).
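As a consistency check on the ~1 km dissociation depth quoted above, the overburden pressure under an ice column can be evaluated directly; the ice density used here is an assumed representative value of 917 kg m$^{-3}$, not a number taken from this paper.

```python
rho_ice = 917.0    # kg m^-3, assumed density of the Europan ice shell
g = 1.31           # m s^-2, surface gravitational acceleration
P_dissoc = 1.0e6   # Pa, approximate clathrate dissociation pressure quoted in the text

d = P_dissoc / (rho_ice * g)               # overburden depth producing that pressure
print(f"dissociation depth ~ {d:.0f} m")   # ~830 m, consistent with d ~ 1 km
```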
Our proposed mechanism for intermittent geyser eruptions hinges on the density stratification, or lack thereof, between these two layers (Figure 1).
Figure 1. The left-hand side shows the two-layer ocean preoverturn. An O(10) km thick ice layer overlies an O(100) km deep water column. The upper water layer has a temperature T 1 , salinity S 1 , and density ρ 1 , and the bottom layer has a temperature T 2 , salinity S 2 , and density ρ 2 , with carbon dioxide clathrates (hexagons) of density ρ c massed at the interface between the two layers. An increase in the basal geothermal heat flux, for example, generates the conditions for an overturn. The right-hand side shows the resulting single-layer ocean with postoverturn density ρ after . For specific density conditions, the clathrates will have ρ c < ρ after and rise into any water-filled fissures present within the ice layer. At ∼1 MPa (about 1 km) beneath Europa's surface, the clathrates will dissociate, creating CO 2 gas that triggers an eruption through the ice cover. At the surface, liquid water subject to near-vacuum conditions vaporizes, generating a transient high-altitude atmospheric geyser containing H 2 O vapor and CO 2 gas.
Initially, the upper and lower ocean layers are separated by a density jump due to a freshwater cap generated by melting ice (Zhu et al. 2017). If conditions within the oceanic column change, however, and the density gradient between the layers decreases, then the entire column may overturn. In the event that a volatile substance (such as clathrated CO 2 ) is stored in the lower layer, this mixing has the potential to produce a water column in which volatiles at depth can become buoyant and propagate into a crack in the ice shell, eventually rising to a level where the pressure is low enough for dissociation to occur. Europa's shell likely exhibits spatial variations in ice thickness (e.g., Ashkenazy et al. 2018). Ice may grow in the polar region of Europa, generating thick ice at high latitudes and thinner ice near the equatorial region. This ice thickness gradient generates a flux of ice from the region of thick ice to the region of thin ice. In a steady state, this pole-to-equator ice flow is balanced by ice melt in the equatorial regions. Here a cooler, fresher water layer (a "freshwater" cap), sourced from the melting ice, can rest atop a warmer and saltier water layer (Zhu et al. 2017). The two-layer Europan water column is stably stratified (i.e., density increases with depth), provided the following condition is met: βΔS > αΔT, (1) where α is the coefficient of thermal expansion, β is the coefficient of haline contraction, ΔT = T 2 − T 1 is the temperature jump across the upper and lower ocean layers, and ΔS = S 2 − S 1 is the salinity jump across the layers. This condition dictates that the effect of salinity on density (a large effect at low temperatures) is larger than the effect of temperature on density, allowing a cold and fresh water layer to sit above a warm and salty water layer (Zhu et al. 2017). Since any increase in ice thickness due to the ice flux must be balanced by melting of ice in a steady state, the steady-state pole-to-equator flux of ice thickness (F h ) depends only on the diffusion of heat through the ice cover and the ice-ocean heat flux. Here F h can be described as in Equation (2), where κ i = 2 W m −1 K −1 is the thermal conductivity of ice, ΔT ep = 58 K is the change in surface temperature between the equator and pole, h 0 = 10 km is the steady-state equatorial ice
thickness, L = 3.3 × 10 8 J m −3 is the latent heat of fusion of ice, and ΔF ocn is the change in ocean-ice heat flux between the equatorial and polar regions (see Zhu et al. 2017, for a description of these values). This meridional ice flux then induces a saltwater flux in the opposing direction. This is because the corresponding ice melt in a steady state leads to a freshwater input to the subsurface ocean, while the corresponding ice growth leads to a saltwater input via brine rejection to the ocean below. The saltwater flux is written as F S = S 0 (ρ i /ρ)F h , where F S is the salt flux, S 0 is the salinity of the lower layer of the Europan ocean, ρ i is the density of ice, and ρ is the density of liquid water (Zhu et al. 2017). The salt flux can then be related to the salinity jump (ΔS, small compared to S 0 ) across the upper and lower layers of the two-layer ocean. Salt is transferred from the lower layer to the upper layer by turbulent mixing. This is expressed as cu * ΔS = F S , where c is a parameter that depends on the ratio of interface stratification to shear, and u * is a turbulent velocity between the upper and lower ocean layers (Zhu et al. 2017). The temperature jump between the upper and lower ocean layers is given by a balance between the geothermal heat flux (F b ) and ocean-ice heat flux. Assuming that the ocean water is not warming globally, the geothermal heat flux is related to the temperature jump across the upper and lower ocean layers as F b = ρC p cu * ΔT, where C p is the specific heat capacity of the water. Combining these relations, the stability condition (Equation (1)) can then be rewritten as αF b < βC p ρ i F h S 0 . This inequality quantifies the relative effects of geothermal heating, ice thickness flux, and background ocean salinity necessary to maintain appropriate density conditions for a stable column (see Zhu et al. 2017, for the full model description). For the Europan ocean, which is thought to be laced with dissolved magnesium sulfate or sodium chloride, the minimum salinities consistent with the idealized two-layer ocean model of Zhu et al. (2017) are between 16 and 28 g kg −1 (depending on whether the salt is NaCl or MgSO 4 , respectively). They found that S 0 can be up to 200 g kg −1 for a magnesium sulfate ocean and 100 g kg −1 for a sodium chloride ocean, and that these values are within the bounds suggested by analyses of Europa's interaction with Jupiter's magnetic field (Hand & Chyba 2007; Zhu et al. 2017). However, it has also been suggested that Europa's salinity may be lower than the minimum values assumed here (e.g., Zolotov & Shock 2001; Hand & Chyba 2007; Steinbrügge et al. 2020). The exact value of salinity used for our hypothesis is not particularly critical, provided that it is high enough to permit establishment of a two-layer column and an intermediate clathrate density. Note that this does generally require a fairly salty Europan ocean, with a salinity of around 50 g kg −1 , in order to achieve the requisite ocean layer densities to bracket the clathrate density. Further, our model for geysering hinges on the two-layer ocean being stratified by salinity rather than temperature. While it is still possible to attain a two-layer ocean even in a fresh scenario, the requirements for this are more rigid. In a pure freshwater ocean, a two-layer ocean configuration can be attained under sufficiently low pressure (e.g., Zhu et al. 2017), provided that the upper layer falls between 0°C and 4°C and the lower layer is at 4°C (e.g., Melosh et al.
2004), the temperature at which freshwater is densest. Then, if the lower layer is abruptly heated to above 4°C, the water will rise. However, if the lower layer is maintained at a temperature below 4°C, it must reach 4°C prior to any convective activity. Thus, while it may be possible to still achieve an overturn in a temperature-stratified freshwater ocean, this does not appear to be a particularly likely configuration given the large temperature differences required. An Overturn The Europan water column will overturn if the effect of temperature on density across the interface density jump is greater than the effect of salinity on density. To determine the exact conditions at which the water column will become unstable, we consider the marginal stability case, αΔT = βΔS (equivalent to αF b = βC p ρ i F h S 0 ). The relevant dynamics arise from the flux terms, F b and F h . We take all other values to be constant with fiducial values α = 7.7 × 10 −5 K −1 , β = 7.7 × 10 −4 (g kg −1 ) −1 , C p = 4000 J kg −1 K −1 , and ρ i = 920 kg m −3 and vary only F b , F h , and S 0 to reach conditions of marginal stability. These fiducial values are generally appropriate for an ocean containing either MgSO 4 or NaCl (Zhu et al. 2017). The marginal stability case dictates the point beyond which a two-layer ocean would no longer be stably stratified and thus prone to forming a single layer with a uniform density. If F b /F h > βC p ρ i S 0 /α, the two-layer system will overturn. This means that if the geothermal heat flux at the base of the ocean increases sufficiently, the effect of temperature on density can cause the water column to be unstable. Similarly, if the ice thickness flux at the top of the layer decreases, the stabilizing effect of the freshwater cap will not be sufficient to prevent overturning. Relatively minor changes in ice thickness flux or basal heating may locally destabilize the water column. Recall that the ice thickness flux can be related to the ocean-ice heat flux gradient by Equation (2) as in Zhu et al. (2017). For an ocean with two layers of equal thickness (50 km), a geothermal heat flux (F b ) of 0.01 W m −2 , a background salinity of S 0 = 50 g kg −1 , and an ocean-ice meridional heat flux (ΔF ocn ) gradient of 0 W m −2 corresponding to an ice thickness flux (F h ) of 1.76 × 10 −11 m s −1 , the water column is stably stratified (Zhu et al. 2017). However, for relatively minor changes in F b or F h (a factor of about 3 for F b = 0.01 W m −2 ), the stability condition is no longer satisfied. For example, at a geothermal heat flux of F b ≳ 0.03 W m −2 (and the parameters above), the water column will overturn. This value falls within the estimated range of geothermal heat flux, between 0.01 and 0.1 W m −2 (Lowell & DuBose 2005; Zhu et al. 2017), and could be plausibly generated by spatially localized hydrothermal venting. Note that these increases in heat flux possibly generated by hydrothermal venting must be local phenomena. If the steady-state basal heat flux were higher, the salinity of the two-layer ocean would likely be tied to a different value (see Zhu et al. 2017). An anomalous spatially localized ice thickness flux may also generate an instability. For example, the stability criterion is no longer satisfied for a flux F h < 5.4 × 10 −12 m s −1 , corresponding to a change in the meridional gradient in the ocean-ice heat flux of −0.01 W m −2 (i.e., the ocean-ice heat flux at the poles is larger than the ocean-ice heat flux at the equator), per Equation (2).
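To make the overturn criterion concrete, the following minimal Python sketch evaluates the critical geothermal heat flux implied by F b /F h > βC p ρ i S 0 /α using the fiducial values quoted above. The script is our own illustration, not code from Zhu et al. (2017).

# Marginal-stability check for the two-layer Europan ocean (sketch).
# Overturn when F_b / F_h > beta * C_p * rho_i * S_0 / alpha.
alpha = 7.7e-5      # thermal expansion coefficient [1/K]
beta = 7.7e-4       # haline contraction coefficient [1/(g/kg)]
C_p = 4000.0        # specific heat of water [J/(kg K)]
rho_i = 920.0       # ice density [kg/m^3]
S_0 = 50.0          # background salinity [g/kg]
F_h = 1.76e-11      # steady-state ice thickness flux [m/s]

# Critical basal heat flux above which the column overturns
F_b_crit = beta * C_p * rho_i * S_0 / alpha * F_h
print(f"critical F_b = {F_b_crit:.3f} W/m^2")        # ~0.032 W/m^2

for F_b in (0.01, 0.03, 0.1):                        # estimated geothermal range [W/m^2]
    state = "overturns" if F_b > F_b_crit else "stably stratified"
    print(f"F_b = {F_b:.2f} W/m^2 -> column {state}")

The computed threshold of roughly 0.03 W m −2 matches the factor-of-3 increase over the baseline F b = 0.01 W m −2 quoted above.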
Under these conditions, the water column becomes unstable, and the upper and lower layers mix, yielding an intermediate density. Any substance with a density lower than this intermediate density, described above, will rise to the surface. This will provide the mechanism by which CO 2 clathrates, initially stored at depth in Europa's ocean, will dissociate under lower pressure, causing an explosion from Europa's ice shell. The Role of Carbon Dioxide Clathrates Carbon dioxide clathrates, composites of water ice and CO 2 in a crystal structure, are hypothesized to exist in the Europan ocean (e.g., Prieto-Ballesteros et al. 2005; Bouquet et al. 2019). Their density, which depends on the temperature, the pressure, and the molar fraction of CO 2 in the crystal structure, is not fully known (Safi et al. 2017). For temperatures close to freezing and pressures of O(10-100) MPa (corresponding to depths of ∼10-100 km), CO 2 -dominated clathrate densities are estimated to lie between ∼1040 and ∼1100 kg m −3 (Prieto-Ballesteros et al. 2005; Bouquet et al. 2019). Remarkably, these clathrate densities are very similar to the typical densities of the Europan ocean inferred from hypothesized salinities (e.g., at 50 MPa, 273 K, and 20 g kg −1 , the ocean density would be 1039 kg m −3 ; at 50 MPa, 273 K, and 100 g kg −1 , the ocean density would be 1100 kg m −3 ). Our hypothesis for convectively driven geysers draws on this coincidental overlap in range and posits that a situation sometimes arises in which the clathrate density is bracketed by the densities of the upper and lower ocean layers. If this occurs, then the clathrates float at the interface of the two-layer ocean. After an overturn, any carbon dioxide clathrates with a density lower than the intermediate overturn density will rise toward the surface and eventually dissociate. We can consider a situation where a two-layer fluid is heated from below; the heating will generate plumes emanating from the bottom layer that will eventually entrain fluid from the top layer. As a consequence, the lower layer progressively becomes less dense due to both the heating and the entrainment of the lighter overlying fluid. The lower layer will also thicken as it mixes in fluid from above (Figure 2). Simultaneously, the upper layer will increase its density slightly as it mixes with the fluid from below. As an end result, the entire upper layer can incorporate into the lower layer, yielding a single layer with an intermediate density (see, e.g., Davaille 1999, for a viscous case). As long as this intermediate density exceeds that of the clathrates, the clathrates will rise. Additionally, note that the inverse scenario will occur if the fluid is cooled from above (plumes will sink from above, mixing up the layer below), still leading to an intermediate density with buoyant clathrates. We can illustrate this with the simplest scenario, where two layers with initial densities mix to create one layer of intermediate density. To fix ideas, we first assume that the upper ocean freshwater layer and the deep ocean layer are the same thickness, 50 km. The upper ocean density is ρ 1 = ρ 1 (T 1 , S 1 ). The lower ocean layer density is ρ 2 = ρ 2 (T 2 , S 2 ) = ρ 2 (T 1 + ΔT, S 1 + ΔS), where ΔT = T 2 − T 1 , and ΔS = S 2 − S 1 . The density of the clathrates, ρ c , follows ρ 1 < ρ c < ρ 2 , ensuring that they are sequestered between the two water layers. If the water column mixes, the full water column will have a density ρ after = (ρ 1 + ρ 2 )/2.
If ρ c < ρ after , the clathrates will ascend in the water column. To linear approximation, the density of the water can be written as ρ = ρ 0 (1 − αT + βS), where ρ 0 is a reference density, T is temperature, and S is salinity. A change in density is thus δρ = ρ 0 (−αδT + βδS). We approximate the densities of the two layers in Europa's ocean before an overturn as ρ 2 = ρ 0 + 2δρ and ρ 1 = ρ 0 − δρ, where ρ 0 is the background/reference density. We further take ρ c = ρ 0 . After a convective overturn, the density of the now one-layer system using this linear approximation will be ρ after = ρ 0 + 0.5δρ. Thus, a clathrate with density ρ c = ρ 0 finds itself buoyant. The temperature and salinity jumps at the interface of the upper and lower ocean layers under which a density jump can be maintained are rather restricted in scope (Zhu et al. 2017). For a background salinity of 50 g kg −1 in an ocean composed of NaCl, a layer depth of 50 km, a prescribed 0.01 m s −1 turbulent velocity between the upper and lower layer, and a geothermal heat source of 0.01 W m −2 , ΔT ranges from about O(10 −1 ) to O(10 −3.5 ) K, and ΔS ranges from about O(10 −1 ) to O(10 −3 ) g kg −1 , depending on the value of ocean-ice heat flux (see Figure 3(b) in Zhu et al. 2017). Taking α = 7.7 × 10 −5 K −1 and β = 7.7 × 10 −4 (g kg −1 ) −1 and selecting ρ 0 = 1060 kg m −3 (to conform with clathrate density estimates and for an ocean at 50 g kg −1 ) yields density jumps across the upper and lower layers ranging from O(10 −1.5 ) to O(10 −6.5 ) kg m −3 , shown in Figure 3.
Figure 2. Schematic of the convective process leading to overturn of the water column. At first, two well-mixed layers (the cold freshwater layer and warmer, saltier layer) exist. Upon increased basal heating (for example), convective plumes from the lower layer penetrate the interface and mix up some fluid from the upper layer. In the final stage, plumes from the lower layer have fully penetrated into the upper layer, and both layers mix.
Figure 3. Change in density, Δρ, between the top and bottom in a two-layer ocean as a function of temperature and salinity jumps ΔT and ΔS across the layer interface, based on the linear approximation for ρ. The representative density ρ 0 is taken to be 1060 kg m −3 , α = 7.7 × 10 −5 K −1 , and β = 7.7 × 10 −4 (g kg −1 ) −1 . Only Δρ > 0 is shown. There is a restricted regime of temperature and salinity jumps where a two-layer ocean may be stable. Any perturbation from this would cause an overturn.
This indicates that the Europan ocean may be very weakly stratified and gives credence to the idea that a localized change in basal heat flux of a factor of 3 (see Section 2.2) may allow for an overturn. In reality, a single oceanic overturn likely oversimplifies the dynamics within the Europan water column. More realistically, the configuration will be subject to penetrative convection. This mechanism describes an unstable fluid layer sitting below a stable layer, wherein plumes arising from the unstable layer eventually mix the fluid from the stable layer (e.g., Veronis 1963). The Europan setup of a freshwater layer sitting atop an ocean layer subject to destabilizing buoyancy forcing from hydrothermal vents seems apt for this process.
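The linearized bookkeeping above is easy to make concrete. The short sketch below is our own illustration; the sample (ΔT, ΔS) pair and the increment δρ are round numbers chosen for clarity, not values taken from Zhu et al. (2017).

rho_0 = 1060.0                     # reference density [kg/m^3]
alpha = 7.7e-5                     # thermal expansion [1/K]
beta = 7.7e-4                      # haline contraction [1/(g/kg)]

def density_jump(dT, dS):
    """rho_2 - rho_1 from the linearized equation of state, for jumps dT [K] and dS [g/kg]."""
    return rho_0 * (-alpha * dT + beta * dS)

print(density_jump(0.05, 0.04))    # example jump of order 1e-2 kg/m^3

# Toy pre-/post-overturn bookkeeping with rho_1 = rho_0 - d_rho, rho_2 = rho_0 + 2*d_rho
d_rho = 0.03                       # assumed small density increment [kg/m^3]
rho_1, rho_2, rho_c = rho_0 - d_rho, rho_0 + 2 * d_rho, rho_0
rho_after = 0.5 * (rho_1 + rho_2)  # equal-thickness layers mix to the arithmetic mean
print(rho_after - rho_0)           # +0.5*d_rho: the mixed column is denser than rho_c
print(rho_c < rho_after)           # True -> clathrates become buoyant after the overturn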
We take the horizontal extent of such a convective cell to be the same order as the vertical scale of the convective cell, ultimately ∼100 km (although these cells may merge; see, e.g., Toppaladoddi & Wettlaufer 2018), giving an extent for the horizontal scale of penetrative convection in the Europan ocean as about 100 km. Taking a convective timescale (which describes how long it would take a fluid parcel to move from the lower layer to the upper layer) for this motion as τ ∼ √(ρ 0 ℓ/(g Δρ)) and using the range of values of Δρ from Figure 3 gives convective timescales between ∼14 and 4500 hr for high and low Δρ, respectively. The lower end of this range is consistent with the inferred recurrence times for geyser eruptions (∼1 every 30 hr) as seen by Paganini et al. (2019). Moreover, a configuration in which colder, fresher water sits atop a warmer and saltier water layer may be conducive to double-diffusive convection. This diffusive-convective process, which is driven by the differential diffusivities between salt and heat, produces small-scale convective layers separated by sharp interfaces across which heat and salt are fluxed (e.g., Turner 1965; Shibley & Timmermans 2019). Past work has indicated that double-diffusive convection may be possible on Europa (Vance & Brown 2005), which would result in a vastly more complex problem. In particular, while double-diffusive convection would generally increase the density contrast between upper and lower ocean layers, depending on where the double diffusion is acting, it is also possible to homogenize portions of a water column through differential heat and salt fluxes. This may lead to smaller-scale vertical overturns with the potential to transport dissolved solute vertically, such as has been hypothesized for the case of Lake Nyos (e.g., Schmid et al. 2004). A Model of the Geyser Mechanism The convective process described in the last section will allow buoyant clathrates to propagate into fissures in Europa's ice shell. This motivates a hydrodynamical treatment to describe the time development of the process that erupts declathrated water and CO 2 on Europa. The framework is similar to the shock tube-inspired model of Turcotte et al. (1990), who modeled magma eruptions from Earth's lithosphere. If a crack or fissure exists in the ice sheet overlying Europa's ocean, then water will intrude up into the ice sheet to a height of ∼90% of the ice sheet thickness (based on the densities of ice and liquid water). Buoyant clathrates from the ocean will thus rise through the water intrusions following an oceanic overturn of the type described in the previous section. At a pressure of ∼1 MPa (∼1 km below the ice sheet), the CO 2 clathrates will dissociate (Diamond & Akinfiev 2003), generating liquid water and carbon dioxide gas and paving the way for an explosion similar to the explosion that drives the cork from a champagne bottle (e.g., Liger-Belair et al. 2019). For an ice sheet of 10 km thickness, a fissure can be maintained by the denser liquid water up to 9 km. Above this point (at 1 km depth), not being maintained by adjacent water pressure, the ice walls will experience structural collapse. The clathrate dissociation at 1 km (1 MPa) provides a mechanism to remove this overlying ice cover, allowing for the geyser eruption. It is an interesting coincidence that the height to which ocean water will rise into an ice fissure on Europa with a 10 km thick ice sheet is the same depth at which clathrates will dissociate.
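The convective timescale written above and the flotation height of water in a fissure are both one-line estimates; the sketch below is our own check of the quoted numbers (∼14 to ∼4500 hr, and a ∼90% rise for a 10 or 20 km shell).

import math

g = 1.3              # Europan surface gravity [m/s^2]
ell = 1.0e5          # convective length scale, ~100 km [m]
rho_0 = 1060.0       # reference ocean density [kg/m^3]

def convective_timescale_hr(delta_rho):
    """tau ~ sqrt(rho_0 * ell / (g * delta_rho)), returned in hours."""
    return math.sqrt(rho_0 * ell / (g * delta_rho)) / 3600.0

for delta_rho in (10**-1.5, 10**-6.5):        # density jumps bracketing Figure 3 [kg/m^3]
    print(f"drho = {delta_rho:.1e} kg/m^3 -> tau ~ {convective_timescale_hr(delta_rho):.0f} hr")
# ~14 hr for the large jump and ~4500 hr for the small one

# Flotation height of ocean water in an ice fissure (~90% of the shell thickness)
for d in (10.0, 20.0):                        # ice shell thickness [km]
    rise = 0.9 * d
    print(f"d = {d:.0f} km: water rises ~{rise:.0f} km, overlying ice ~{d - rise:.0f} km")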
Simultaneously, such a clathrate-driven eruptive mechanism places an upper bound on the thickness of Europa's ice sheet. For an ice shell thickness d ≳ 10 km, water will not rise sufficiently far into a fissure to allow for an overlying ice cover of 1 km or less (see, e.g., Crawford & Stevenson 1988). (Water will only be able to propagate 18 km into a 20 km ice cover, for example.) Hence, if the mechanism we describe here occurs, it both signals and requires a constraint on Europa's ice thickness. The Eruption A carbon dioxide clathrate that rises from depth in the Europan ocean up to ∼1 MPa pressure will dissociate into water with dissolved carbon dioxide and excess exsolved carbon dioxide. The CO 2 solubility at 273 K and the 1 MPa pressure level appropriate to a 1 km depth in the Europan ice is ∼1.5 mol% (Diamond & Akinfiev 2003), corresponding to a saturated CO 2 mass fraction f 0 ∼ 3%-4%. Thus, if 1 kg of CO 2 clathrates with a composition of one molecule of CO 2 for ∼6 molecules of H 2 O (Crawford & Stevenson 1988) are brought to 1 MPa, after dissociating, 0.02-0.03 kg of CO 2 will remain dissolved in the (saturated) water. Thus, 0.26-0.27 kg of excess CO 2 vapor per kilogram of clathrate brought to dissociation conditions will exist in the column. The amount of energy in the compressed CO 2 generated from bringing 1 kg of CO 2 clathrates to the dissociation pressure is sufficient to clear a conduit with at least an A = 1 cm 2 cross section in the overlying d ∼ 1 km ice cover. As a consequence, in order to achieve an eruption of 2 × 10 6 kg of water vapor molecules (as in Paganini et al. 2019) from clathrate dissociation alone, ∼2.8 × 10 6 kg of clathrate must rise into the ice fissure, generating a postexplosion crater of ∼10 m in radius. Governing Equations After the initial explosion occurs, the water-carbon dioxide mixture will rapidly depressurize, and carbon dioxide will exsolve from the solution. The governing equations for a one-dimensional model of a fluid rising through a constant-area conduit are described here (Figure 4) and follow Turcotte et al. (1990). Variables (pressure p, vertical velocity w, density ρ, height z, time t, gas density ρ g , mass fraction of gas f, and volume fraction of water φ) are nondimensionalized, and hats denote nondimensional terms. Turcotte et al. (1990) neglected the gravitational term in the momentum equation and solved the resulting system. This gives an analytic solution, taking the condition that w = 0 for p = 1. We solve these equations (including the gravitational term) using a second-order Lax-Wendroff numerical scheme over 329 grid points (evenly spaced in increments of 0.03 nondimensional units). We confirm that there is a negligible difference between solutions with and without the gravitational term. For simplicity, we also neglect the gravitational term in the solutions described just below. Noting that the six equations above are easily reduced to four, we adopt initial conditions for w, ρ, f, and p and apply Dirichlet boundary conditions, for which the quantities w, ρ, f, and p at the upper and lower domain bounds are fixed to their initial values. Our numerical solution in the active region of the computational grid matches Turcotte et al.ʼs (1990) analytic solution. We next examine how the eruption velocity of the carbon dioxide and liquid-water foam is affected by the saturated mass fraction of gas (f 0 ) and the time of the eruption at p 0 = 1 MPa below the ice shell (Figure 5).
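The mass bookkeeping behind these numbers is simple to verify. The script below is our own sketch using the one-CO 2 -per-six-H 2 O composition and the 3%-4% saturated mass fraction quoted above; it recovers the ∼0.26-0.27 kg of excess CO 2 per kilogram of clathrate and the ∼2.8 × 10 6 kg of clathrate needed to supply a 2 × 10 6 kg water release.

# CO2 clathrate mass budget (sketch). Composition: one CO2 per ~6 H2O.
M_CO2, M_H2O = 44.0, 18.0
n_water = 6.0

m_clathrate = 1.0                                    # [kg]
w_co2 = M_CO2 / (M_CO2 + n_water * M_H2O)            # CO2 mass fraction ~0.29
m_co2 = m_clathrate * w_co2
m_water = m_clathrate - m_co2

for f0 in (0.03, 0.04):                              # saturated CO2 mass fraction
    dissolved = f0 * m_water                         # ~0.02-0.03 kg stays in solution (approx.)
    excess = m_co2 - dissolved                       # ~0.26-0.27 kg exsolves as gas
    print(f"f0={f0:.2f}: dissolved={dissolved:.3f} kg, excess gas={excess:.3f} kg")

# Clathrate mass needed to supply 2e6 kg of erupted water
m_water_target = 2.0e6                               # [kg]
m_clathrate_needed = m_water_target / (1.0 - w_co2)  # ~2.8e6 kg
print(f"clathrate required: {m_clathrate_needed:.2e} kg")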
At early times (∼2 s) and f 0 = 0.03-0.04, consistent with the dissolved mass fraction of gas after clathrate dissociation, geyser eruption velocities can reach between 600 and 700 m s −1 (Figure 5).
Figure 5. Eruption velocities at subsequent times for different initial content (3% in blue and 4% in black) of dissolved carbon dioxide. At early times (about 2 s after the ice is removed), eruption velocities are large enough to be consistent with inferences of water vapor found ∼200 km above Europa's surface.
As expected, larger dissolved mass fractions lead to larger vertical velocities. Vertical velocities of these magnitudes are large enough to be consistent with the existing inferences of water vapor above Europa's surface. Eruption velocities quickly decrease, reaching ∼130 m s −1 roughly 10 s into the eruption. Taken together, our results show that the inferences of Europan geyser activity can be explained by a hypothesis invoking a two-layer ocean bringing clathrates up to Europa's ice cover and a subsequent eruption generated by both the excess gas from clathrate dissociation and the dissolved gas in the water column under rapid depressurization. Our theory allows for testable predictions. For example, an eruption of ∼2 × 10 6 kg of water vapor (Paganini et al. 2019), arising from a solution with a 4% mass fraction of CO 2 , would yield 8.3 × 10 4 kg of carbon dioxide; this should be observable from spectra. The CO 2 and H 2 O molecules released to the atmosphere will either undergo photodissociation or fall back down and potentially stick to the surface. These competing mechanisms allow for predictions of the observability of carbon dioxide and water molecules in Europa's atmosphere or on Europa's surface. We estimate an order-of-magnitude value for the rate of photodissociation of a CO 2 molecule using the solar flux (about 50 W m −2 ; see Ashkenazy 2019), the absorption cross section of a CO 2 molecule (taken here as 10 −25 m 2 ; see Schmidt et al. 2013), and the energy of a photon (hc/λ = 1.1 × 10 −18 J, where h = 6.6 × 10 −34 J s is Planck's constant, c = 3 × 10 8 m s −1 is the speed of light, and λ = 180 nm is a photon wavelength in the ultraviolet range). We assume that 5% of the solar flux is in the UV range. Then, an estimate for the rate of dissociation for a carbon dioxide molecule in Europa's atmosphere is 2.3 × 10 −7 s −1 . Further, a recent modeling study indicated that an estimated photolysis rate describing the dissociation of water into OH and H is ≈10 −7 s −1 (Li et al. 2020). If a molecule survives through its ballistic arc, it will stick to the Europan surface, provided that it loses enough energy on impact such that its resultant kinetic energy is smaller than the attractive potential between a water molecule at the surface and the fallen molecule. Using a simplified single-collision model (realistically, there will be multiple collisions) for an order-of-magnitude estimate, this criterion compares the kinetic energy retained by the falling molecule after impact with the binding energy D, where D is estimated from the dipole-induced dipole interaction between a carbon dioxide molecule and a water molecule or from the dipole-dipole interaction between water molecules, for falling CO 2 and H 2 O, respectively. An order-of-magnitude estimate for D gives D = O(1) eV (using a distance of approach of ∼1 Å), while the left-hand side of the inequality is O(0.01-1) eV, indicating that a falling molecule (whether carbon dioxide or water) will likely adhere to Europa's icy surface.
Then, using a simplified ballistic representation, a molecule subject to a 1.3 m s −2 surface gravitational acceleration will fall back down to Europa's surface in 554 s (about 9 minutes). For the small photolysis rates described above, this means that photodissociation of atmospheric carbon dioxide or water molecules will have a negligible effect on the number of geyser molecules observable above Europa's surface over a 9 minute period. If the eruptive mechanism that we have described occurs, then carbon dioxide and water vapor sourced from geysers will be observable in Europa's atmosphere on timescales of about 10 minutes. With such a short timescale, one may expect to only intermittently observe the geysering phenomenon, consistent with the results of Paganini et al. (2019), who detected only one episode of increased water molecules in the Europan atmosphere out of 20 observations (totaling 1827 minutes of observations), implying a global eruption rate of roughly once every 30 hr. More precisely, out of these observations, only one instance of 148 minutes in length resulted in increased atmospheric water molecules (Paganini et al. 2019). Interestingly, this timescale is consistent with the lower values of the convective timescale (between ∼14 and 4500 hr) discussed in Section 2.3, providing a possible estimate of O(10 −3 ) kg m −3 on the density jump between the lower and upper layers of a stratified Europan ocean. Discussion and Conclusions The forthcoming launch of the Europa Clipper mission will allow for in situ investigation of the geysering phenomenon. In particular, the Europa Imaging System will provide a highresolution (50 m or less) image of Europa's surface (Bayer et al. 2018); this may allow for observations of surface changes associated with the geyser process, which involves catastrophic disruption on horizontal scales of order ∼10 m and presumably subtler changes to surface shading over wider scales. Moreover, the synchronous measurements from the Clipper Ultraviolet Spectrograph and Mass Spectrometer may allow for chemical characterization of geyser molecules (Bayer et al. 2018), providing a further mechanism to test the hypothesis of a carbon dioxide-driven geyser. At first glance, the possibility of geysers on Europa capable of transporting water vapor to ∼200 km above the satellite's surface presents a surprise. Europa's ice shell is estimated to be O(10) km thick, and this cold ice lid would not necessarily seem to lend itself to frequent eruptive activity. Indeed, in the terrestrial context, this picture is similar to that presented by the large subglacial water reservoir associated with Antarctica's Lake Vostok, which shows compelling analogies to the Europan structure (e.g., Wüest & Carmack 2000). The novel aspect of our geyser hypothesis is that it connects a number of otherwise disparate phenomena. These include (1) the initiation of an unstable water column driven by changes in ocean heat fluxes or ice thickness fluxes, (2) the existence of faults or fissures (possibly driven by tidal stresses) in the overlying ice cover that permit the inflow of fluid, (3) a buoyancy-driven flow of carbon dioxide hydrates to any fissure in the overlying ice cover, and (4) a depressurization schema that can be fruitfully modeled with shock tube-like dynamics. 
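The order-of-magnitude atmospheric estimates above (photodissociation rate, ballistic fall time, and the resulting negligible loss) can be checked with a few lines of Python. The sketch below is ours; the 200 km starting altitude is an assumption consistent with the inferred height of the water vapor rather than a value stated explicitly for the fall-time calculation.

import math

# Photodissociation rate of CO2 at Europa (order of magnitude)
solar_flux = 50.0          # [W/m^2] at Jupiter's distance
uv_fraction = 0.05         # assumed fraction of the flux in the UV
sigma = 1e-25              # CO2 absorption cross section [m^2]
E_photon = 1.1e-18         # photon energy at ~180 nm [J]
rate = uv_fraction * solar_flux * sigma / E_photon
print(f"photodissociation rate ~ {rate:.1e} 1/s")      # ~2.3e-7 1/s

# Ballistic fall time from ~200 km under Europan gravity
g = 1.3                    # [m/s^2]
h = 2.0e5                  # [m], assumed starting altitude
t_fall = math.sqrt(2.0 * h / g)
print(f"fall time ~ {t_fall:.0f} s")                   # ~554 s, about 9 minutes

# Fraction of molecules photodissociated during the fall: negligible
print(f"fraction dissociated ~ {rate * t_fall:.1e}")   # ~1e-4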
The eruptive mechanism that we propose draws on a range of processes that have close analogs on Earth, such as the eruption of Lake Nyos, volcanic eruptions deriving from magma chambers, and methane hydrate-driven explosions from permafrost in the high latitudes. Yet while our hypothesis allows for water molecules to reach the vast heights necessary to be consistent with the inferences based on the Hubble observations (e.g., Roth et al. 2014), we must admit a certain degree of credulity to believe that the complex and interlinked chain of events occurs exactly as we describe in this paper. In fact, one wonders whether the eventual observations of the Europa Clipper mission will suggest that the geysers themselves were spurious inferences of water vapor, given the challenges necessary to ensure water vapor reaches the high altitudes required. If the process we suggest does occur, however, it provides detailed constraints on both the properties of Europa's subsurface ocean and the thickness of the ice shell itself.
Features of lexical complexity: insights from L1 and L2 speakers We discover sizable differences between the lexical complexity assignments of first language (L1) and second language (L2) English speakers. The complexity assignments of 940 shared tokens without context were extracted and compared from three lexical complexity prediction (LCP) datasets: the CompLex dataset, the Word Complexity Lexicon, and the CERF-J wordlist. It was found that word frequency, length, syllable count, familiarity, and prevalence as well as a number of derivations had a greater effect on perceived lexical complexity for L2 English speakers than they did for L1 English speakers. We explain these findings in connection to several theories from applied linguistics and then use these findings to inform a binary classifier that is trained to distinguish between spelling errors made by L1 and L2 English speakers. Our results indicate that several of our findings are generalizable. Differences in perceived lexical complexity are shown to be useful in the automatic identification of problematic words for these differing target populations. This gives support to the development of personalized lexical complexity prediction and text simplification systems. Introduction A growing body of research has focused on the detection of complex words for automatic text simplification (TS) (Paetzold and Specia, 2016;Yimam et al., 2018;Shardlow et al., 2021a).Complex words within TS frameworks are those words that are difficult to recognize, understand, or articulate and can significantly reduce reading comprehension (Kyle et al., 2018;Shardlow et al., 2021a).Perceived lexical complexity is therefore the level of difficulty associated with any given word form by a particular individual or group. With distance learning becoming ever more popular (Morris et al., 2020), research has been focusing on identifying barriers and improving approaches to online education (McCarthy et al., 2022).This includes an increase in the demand for AI/NLP technologies such as TS that can be utilized within computer-assisted language learning (CALL) applications (Tseng and Yeh, 2019;Rets and Rogaten, 2020).However, little research has been conducted on the similarities or differences between L1 and L2 English speakers' perception of lexical complexity in the field of automatic lexical complexity prediction.Demographic differences between annotators hinder the automatic detection of complex words, and in turn reduce the performance of generalized TS technologies (Zeng et al., 2005;Lee and Yeung, 2018b;Maddela and Xu, 2018).As far as the authors are aware, this correlation has not been fully explored within complexity prediction literature, especially in consideration with theoretical explanations from applied linguistics (Zeng et al., 2005;Tack et al., 2016;Lee and Yeung, 2018b;Shardlow et al., 2021a,b;Tack, 2021). To aid complexity prediction research along with its downstream TS and CALL applications, this study asks the following research questions: • RQ 1: Are there sizable differences between L1 and L2 English speakers perception of lexical complexity reflected in the annotation of existing complexity prediction corpora? • RQ 2: If these differences exist, are they represented by the features commonly used in systems developed for complexity prediction? 
Section 2 introduces the reader to complexity prediction research. Section 2.2 details several datasets used for complexity prediction. Section 2.3 provides features related to lexical complexity. Section 4 draws several conclusions regarding the strength of the correlations between these features and the complexity assignments of L1 and L2 English speakers. Section 5 gives several brief possible explanations taken from applied linguistics. Section 6 lastly shows the application of our findings for automatically classifying spelling errors made by L1 and L2 English speakers. This was done to test the validity of our findings and to provide a potential use case within TS and CALL technologies. 2. A survey of existing datasets and features 2.1. Complexity prediction research Automatic complexity prediction is primarily split into (1) complex word identification, and (2) lexical complexity prediction. Complex word identification (CWI) entails the development of binary classifiers that can automatically distinguish between complex and non-complex words. They achieve this by assigning target words with binary complexity values of either 0 (non-complex) or 1 (complex) (Table 1) (Paetzold and Specia, 2016; Yimam et al., 2018). Lexical complexity prediction (LCP) is essentially a regression-based task. It relies on multi-labeled data to model lexical complexity on a continuum. This continuum has varying thresholds, which may range from very easy (0), easy (0.25), neutral (0.5), difficult (0.75), to very difficult (1). These thresholds are used to label a target word with a continuous complexity value between 0 and 1 (Table 1). LCP therefore provides more fine-grained complexity values than the binary annotated data provided by CWI, as it is able to recognize those words with a neutral level of complexity (Shardlow et al., 2020, 2021a). This study has subsequently focused on LCP, hence continuous complexity assignments made by L1 and L2 English speakers. Researchers interested in both CWI and LCP have brought into question the generalizability of automated lexical complexity assignments. They argue that prior CWI and LCP systems are unable to account for "variations in vocabulary knowledge among their users" (Lee and Yeung, 2018b). In other words, they are unable to account for differing perceptions of lexical complexity. Studies such as Zeng et al. (2005), Tack et al. (2016), Lee and Yeung (2018b), and Tack (2021) introduced personalized CWI to account for such variation among differing target populations. Personalized CWI caters for the individual by taking into consideration their specific demographic and by relying on features that correlate with that demographic's assignment of lexical complexity.
Table 1. Example extract with binary (CWI) and continuous (LCP) complexity values for the target words Folly, is, and in (Shardlow et al.). A value of 1 is complex and 0 is non-complex.
Extract: Folly is set in great dignity
Binary complexity: Folly = 1, is = 0, in = 0
Continuous complexity: Folly = 0.57, is = 0.18, in = 0.15
Traditionally, research has indicated statistical and psycholinguistic features as being reliable indicators of a word's complexity (Shardlow et al., 2021b), such as word frequency, length, familiarity and concreteness.Nevertheless, since lexical complexity assignments differ from one demographic to the next, the question remains whether these features are truly universal in their ability to predict lexical complexity across multiple target populations.These features include word frequency, word length, syllable count, familiarity, prevalence, and concreteness (Paetzold and Specia, 2016;Monteiro et al., 2023).Thus, to answer research questions 1 and 2, numerous LCP datasets have been selected to represent the lexical complexity assignments of both L1 and L2 English speakers (Section 2.2).The above features were then applied to these datasets to uncover whether sizable differences exist between each set of annotators' complexity assignments, as well as these features ability to predict lexical complexity for each target demographic (Section 4). . Existing datasets Shared-tasks have increased the popularity of complexity prediction research (Paetzold and Specia, 2016;Yimam et al., 2018;Shardlow et al., 2021a).This has resulted in several datasets that can be used for complexity prediction.These datasets have been labeled by annotators from differing backgrounds.Some datasets were created by annotators made up of purely L1 English speakers or annotators from a specific country, such as China (Lee and Yeung, 2018a;Yeung and Lee, 2018), Japan (Nishihara and Kajiwara, 2020), or Sweden (Smolenska, 2018).Other datasets contained a mixture of L1 and L2 English speakers from a variety of international backgrounds (Yimam et al., 2018).However, few of these datasets consist of multi-labeled continuous data used to train state-of-the-art LCP systems.In fact, only several datasets exist that contain English words annotated using a likert-scale and labeled with continuous complexity values.Examples include the CompLex dataset (Shardlow et al., 2020), the Word Complexity Lexicon (Maddela and Xu, 2018), and the CERF-J project's word list (Tono, 2017).These datasets have been grouped in accordance to their annotators and have been described throughout the following sections. . . Dataset with L English speaking annotators The CompLex dataset (Shardlow et al., 2020) is the most recent dataset that has been used to develop LCP systems (Shardlow et al., 2020(Shardlow et al., , 2021a)).Over 1500 L1 English speaking annotators were responsible for labeling continuous complexity values to a range of extracts taken from the Bible (Christodouloupoulos and Steedman, 2015), biomedical articles (Koehn, 2005) and Europarl (Bada et al., 2012).These annotators were crowd-sourced from "the UK, USA, and Australia" (Shardlow et al., 2020).Having been sourced from English-speaking countries, it is likely that these annotators were predominately L1 English speakers.The CompLex dataset is also split into two sub-datasets.The first contains 9,000 instances of single words.The second houses 1,800 instances of multi-word expressions (MWEs).Both sub-datasets were created using a 5point likert scale and therefore provide continuous complexity values ranging between 0 (very easy) and 1 (very difficult).Assigned complexity values were averaged.The returned averaged values were then used as the corresponding target words' , or MWEs' , overall level of complexity. . . 
Datasets with L English speaking annotators The Word Complexity Lexicon (WCL) (Maddela and Xu, 2018) consists of "15,000 English words with word complexity values assessed by human annotators" (Maddela and Xu, 2018).These annotators were 11 non-native yet fluent L2 English speakers from varying international backgrounds.These annotators also had varying first languages.The WCL contained the most frequent 15,000 words provided by the Google 1T Ngram Corpus (Brants and Franz, 2006).Its complexity values were continuous and were gained through the use of a 6-point likert scale.Each annotator was asked whether they believed the target word was either very simple, moderately simple, simple, complex, moderately complex, or very complex. The Common European Reference Framework for Languages (CERF) is a recognized criteria for assessing language ability.It contains multiple levels.These levels range from: "A1 (elementary), A2, B1, B2, C1, to C2 (advanced)" (Uchida et al., 2018).A1 is used to refer to elementary proficiency, having the ability to "recognise familiar L2 words and very basic phrases" (Council of Europe, 2020).C2 denotes advanced proficiency, "having no difficulty in understanding any kind of L2 [spoken or written] language" (Council of Europe, 2020).The CERF-J project is the utilization of the CERF for English foreignlanguage teaching in Japan.The project contains a CERF-J wordlist with 7800 English words, with each word having been assigned a CERF level marking their complexity.These assigned CERF levels were calculated in accordance to a word's frequency within CERF rated foreign-language English textbooks.These textbooks were taken from Chinese, Taiwanese, and Korean schools (Markel, 2018;Tono, 2017).As such, the complexity assignments contained within the CERF-J wordlist may reflect those made by Chinese, Taiwanese, or Korean L2 English speaking annotators. . . Datasets with L and L English speaking annotators The Personalized LS Dataset was created by Lee and Yeung (2018b) to reflect the individual complexity assignments of 15 Japanese learners of English.These L2 English speakers were tasked with rating the complexity of 12,000 English words.To do so, they used a 5-point likert scale that depicted how well they knew the word.They chose from 5 labels that ranged from (1) "never seen the word before", to (5) "absolutely know the word's meaning" (Lee and Yeung, 2018b).The dataset classified those words labeled between 1 and 4 as being complex, whereas those labeled 5 were believed to be non-complex.Regardless of this binary classification, the use of a 5-point likert scale means that such data can easily be adapted for continuous LCP rather than for binary CWI.The annotators of the Personalized LS Dataset were also sub-divided in regards to their English proficiency (Lee and Yeung, 2018b).The first subgroup contained the four least proficient annotators whom knew less than 41% of the 12,000 English words.The second sub-group consisted of the four most proficient annotators whom knew more than 75% of the 12,000 English words.Unfortunately, however, the Personalized LS Dataset is not publicly available and was therefore not used within this study. 
The CWI-2018 shared-task (Yimam et al., 2018), introduced several participating teams to a set of CWI datasets annotated by a variety of L1 and L2 English speaking annotators.These datasets were of differing genres containing extracts taken from news articles, Wikinews and Wikipedia.These datasets were also of differing languages, being English, German, Spanish and French.Its annotators were collected using the Amazon Mechanical Turk (MTurk) and were tasked with identifying complex words from a given number of extracts (Yimam et al., 2018).In total, 134 L1 and 49 L2 English speakers labeled the English datasets with 34,789 binary complexity values.Due to CWI-2018's use of annotators from a variety of backgrounds, as well as its datasets being constructed from multiple sources, the CWI-2018 datasets acted as a good control during our initial analysis by indicating which of our selected features (Section 2.3) were salient across both sets of annotators.However, due to the CWI-2018 datasets containing binary instead of continuous complexity assignments, these datasets were later dropped.This is since a direct comparison between binary complexity values and continuous complexity values is less informative and is subsequently less helpful in the development of state-of-the-art LCP systems that rely on continuous data. . Features Many CWI and LCP classifiers use statistical, phonological, morphological, and psycholinguistic features to predict lexical complexity (Shardlow et al., 2021b).Among these features, word frequency, word length, syllable count, and familiarity are the most common (Paetzold and Specia, 2016;Yimam et al., 2018;Shardlow et al., 2021b).Despite current state-of-the-art LCP systems preferring the adoption of unsupervised deep learning transformer-based models (Pan et al., 2021;Rao et al., 2021;Yaseen et al., 2021), those LCP systems that rely on feature engineering still perform well.During the LCP-2021 shared-task (Shardlow et al., 2021a), the third best performing system adopted a feature engineering approach (Mosquera, 2021).Among Mosquera (2021)'s extensive set of features, word frequency, length, syllable count, and familiarity were found to be among the best features in predicting lexical complexity.This is further supported by the findings of Desai et al. (2021) and Shardlow et al. (2021b).As such, this study has used these features as a means of analyzing the differences in complexity assignment between L1 and L2 English speakers. . . Statistical features Zipf 's Law states that few words are rare, few words are very frequent, and the rest are more or less evenly distributed.Those words which are rare and that appear less frequently within a text are likely to be longer than compared to those words that are more common (Quijada and Medero, 2016;Zampieri et al., 2016).Therefore, it is often believed that since infrequent words are longer, they are less likely to be familiar and as a consequence, are more complex than compared to more frequent words that are shorter (Zampieri et al., 2016). Zipfian frequency is used to predict the frequency of a target word within a natural language, such as English, given a provided dataset.It is calculated per the following equation: where k is the frequency rank of the target word ordered from the most to least frequent, s is the exponent that defines the distribution, n is the vocabulary, and size H n,s is the generalized harmonic number; "being the sum of the reciprocals of the size of the vocabulary" (Zampieri et al., 2016). 
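Since the Zipfian estimate above is only described in words, a minimal Python sketch of it may help. The function below is our own illustration of f(k; s, n) = (1/k^s)/H_{n,s}; the choices s = 1 and n = 50,000 are purely illustrative, as the paper does not specify its settings.

def zipf_frequency(k, s=1.0, n=50_000):
    """Predicted relative frequency of the word with frequency rank k.

    f(k; s, n) = (1 / k**s) / H_{n,s}, where H_{n,s} is the generalized
    harmonic number sum_{i=1..n} 1 / i**s.
    """
    harmonic = sum(1.0 / i**s for i in range(1, n + 1))
    return (1.0 / k**s) / harmonic

# The top-ranked word is predicted to be far more frequent than the word at rank 1000
print(zipf_frequency(1))      # ~0.088 for s = 1, n = 50,000
print(zipf_frequency(1000))   # ~8.8e-5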
True frequency represents the frequency of a target word within a given dataset rather than its predicted frequency within its respective language.True frequency is generated through the following equation: with the numerator being the number of times the target word appeared in a dataset, and where N is the number of tokens within that dataset.We calculated frequency using the Brown Corpus (Francis and Kucera, 1979) and the British National Corpus (BNC) (BNC Consortium, 2015).A percentage of the BNC was used to generate document frequencies, being how many documents the target word was found in.The BNC consists of 4049 texts, including both written and spoken texts (BNC Consortium, 2015).We selected a percentage of written texts, with an average of 10% of our selected texts coming from each text genre, spanning literary works to news and scientific articles.We believed document frequency would help verify or disprove any potential correlation drawn between lexical complexity and word frequency. Word length is associated with lexical complexity (Paetzold and Specia, 2016;Yimam et al., 2018;Desai et al., 2021;Shardlow et al., 2021a,b).It is calculated by simply counting the number of characters that form a target word.Zampieri et al. (2016) along with others (Paetzold and Specia, 2016;Yimam et al., 2018;Desai et al., 2021;Shardlow et al., 2021a,b), have discovered that statistical features, such as word frequency, be it either Zipfian frequency or True frequency, along with word length, are good baseline indicators of lexical complexity.A strong negative correlation should be seen between word frequency, word length and complexity, regardless of whether the annotator is a L1 or L2 speaker. . . Phonological features Syllable count is also used for predicting lexical complexity (Paetzold and Specia, 2016;Yimam et al., 2018;Desai et al., 2021;Shardlow et al., 2021a,b).This is since words with a high number of syllables can be hard to pronounce for some individuals (Mukherjee et al., 2016).L2 English speakers who are not yet familiar with the phonology of the target language, may subsequently find such words to be difficult to read and articulate (Mukherjee et al., 2016;Desai et al., 2021).Learners of English may perceive these words to be more complex than words with less syllables in comparison to L1 English speakers.Syllable count is normally obtained by counting the number of vowels within a target word (Desai et al., 2021). . . Character N-grams Many languages do not share the same writing system or the same alphabet.This may lead to some L2 English speakers being unfamiliar with certain character combinations found in English.Thus, words made up of these unfamiliar character combinations are also likely to be considered more complex for a L2 English speaker than those words which have a similar appearance to words within their L1, i.e., cognate words.Certain character combinations may subsequently impact reading and understanding either as a consequence of being part of an acquired alphabet or simply being unfamiliar to the reader. Character N-grams are often used to recognize those character combinations which may pose difficulty to a given reader (Desai et al., 2021;Shardlow et al., 2021b).We suspect that these differing character combinations may be identifiable when analyzing the bigrams and trigrams of the complex words annotated by L1 and L2 English speakers. . . 
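As a concrete illustration of how these surface features can be computed, the following sketch is our own rather than the paper's released code; it assumes the NLTK interface to the Brown Corpus and uses a vowel-group heuristic for syllables, both of which are illustrative choices.

from collections import Counter
from nltk.corpus import brown   # may require nltk.download("brown") once

counts = Counter(w.lower() for w in brown.words())
n_tokens = sum(counts.values())

def true_frequency(word):
    """Relative frequency of `word` among Brown Corpus tokens."""
    return counts[word.lower()] / n_tokens

def syllable_count(word):
    """Rough syllable estimate: number of contiguous vowel groups (at least 1)."""
    vowels = set("aeiouy")
    groups, prev = 0, False
    for ch in word.lower():
        cur = ch in vowels
        if cur and not prev:
            groups += 1
        prev = cur
    return max(groups, 1)

def char_ngrams(word, n=2):
    """Character n-grams of a word (bigrams by default)."""
    return [word[i:i + n] for i in range(len(word) - n + 1)]

for w in ("may", "functionality"):
    print(w, true_frequency(w), len(w), syllable_count(w), char_ngrams(w, 3))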
Psycholinguistic features Familiarity is among the most popular psycholinguistic feature for LCP (Paetzold and Specia, 2016;Yimam et al., 2018;Desai et al., 2021;Shardlow et al., 2021b).Obtained from the MRC Psycholinguistic Database (Wilson, 1988), familiarity is a measure of how well-known a target word is to an individual and was obtained through self-report from a group of 36 L1 English speaking university students (Gilhooly and Logie, 1980;Desai et al., 2021).Familiarity is related to another feature referred to as prevalence. Prevalence is the percentage of annotators who know a target word (Brysbaert et al., 2019).It is produced by the following equation: with the numerator being the number of annotators familiar with the word, and N being the total number of annotators.Brysbaert et al. (2019) has provided a dataset containing 62,000 English words and their respective prevalence ratings annotated by 221,268 L1 English speakers from the USA and UK. Concreteness is another popular feature for LCP (Paetzold and Specia, 2016;Yimam et al., 2018;Desai et al., 2021;Shardlow et al., 2021b).It is defined as "the degree to which the concept denoted by a target word refers to a perceptible entity" (Brysbaert et al., 2013).Concreteness is also normally obtained through selfreport.Brysbaert et al. (2013) have provided a dataset containing the concreteness ratings of 40,000 English words provided by 4,000 L1 English speakers located in the USA. . . Summary and hypotheses In the preceding sections, we compare the above features' correlations with lexical complexity across multiple datasets created by differing sets of annotators: L1 and L2 English speakers.Zipfian, True, and document frequency, word length, syllable count, and character n-grams were computed manually, whereas familiarity, prevalence, and concreteness scores were extracted from the MRC Psycholinguistic Database (Wilson, 1988;Brysbaert et al., 2019Brysbaert et al., , 2013)), respectively, and then applied to each of the three datasets.We put forward several hypotheses. Hypothesis 1: We suspect that strong correlations will exist between lexical complexity and word frequency, word length, syllable count, familiarity, prevalence, and concreteness, regardless of the type of annotator.Hypothesis 2: We do, however, hypothesize that the strength of these strong correlations shall vary between datasets and their respective annotators.Hypothesis 3: We predict that there shall be differences between the most complex bigrams and trigrams belonging to either set of annotators. Data extraction and normalization This study has extracted the English tokens without context and their corresponding continuous complexity values provided by the L1 English speaking annotators of the CompLex dataset (Shardlow et al., 2020), and the L2 English speaking annotators of the WCL dataset (Maddela and Xu, 2018), and the CERF-J wordlist (Tono, 2017).In total, 940 tokens were found to be shared among these datasets.However, 1 and 18 tokens were not matched with either prevalence or concreteness scores, respectively.These tokens were not considered within our final analysis of prevalence or concreteness and lexical complexity. To compare L1 and L2 English speakers' complexity assignments, each dataset complexity values were normalized to a range between 0 and 1. 
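Both the prevalence measure defined above and the min-max normalization used here reduce to one-line computations; the sketch below is our own illustration (the normalization formula itself is spelled out in the next paragraph), and the example ratings and annotator counts are placeholders.

def prevalence(n_know, n_annotators):
    """Fraction of annotators who report knowing the target word."""
    return n_know / n_annotators

def min_max_normalize(x, scale_min, scale_max):
    """Map a Likert rating x on [scale_min, scale_max] to the [0, 1] range."""
    return (x - scale_min) / (scale_max - scale_min)

# Examples, assuming ratings run from 1 to the scale maximum:
print(prevalence(198, 221))          # ~0.90
print(min_max_normalize(4, 1, 6))    # 0.6 on the 6-point WCL scale
print(min_max_normalize(3, 1, 5))    # 0.5 on the 5-point CompLex scale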
Normalization was achieved through the following equation: x′ i = (x i − min(x))/(max(x) − min(x)), where x i is the current complexity value, and min(x) and max(x) are the respective minimum and maximum values of the given likert scale range. Table 2 provides a snapshot of the 940 shared tokens along with their normalized complexity values. 4. Results The following sections compare the relationships between the chosen features and the normalized continuous complexity values made by either set of annotators. Figures 1-8 depict each feature's correlation to lexical complexity per dataset. The average lexical complexity values of L1 English speakers are shown in blue and have no symbol (CompLex), whereas those provided by L2 English speakers are represented by a purple square (WCL) and a red triangle (CERF-J). 4.1. Zipfian and true frequency Figures 1, 2 display each dataset's average complexity values per token frequency within the Brown Corpus and BNC, respectively (Francis and Kucera, 1979).
Figure 1. Avg. complexity per Brown true freq. Figure 2. Avg. complexity per BNC true freq.
We predicted that, in accordance with Zipf's Law, a negative correlation would exist between word frequency and complexity, with words of a higher frequency having been assigned lower complexity values. This would appear to be true for all datasets, particularly for the WCL dataset and the CERF-J wordlist. The 100 tokens with the lowest Zipfian frequencies, including such words as "valentine", "genetics", and "functionality", were on average +0.20 and +0.27 more complex than the 100 tokens with the highest Zipfian frequencies, including such words as "may", "first", and "new", within the WCL dataset and the CERF-J wordlist, respectively. This negative correlation is also supported by examining these 100 tokens' True frequencies both in regards to the Brown and BNC datasets. The 100 tokens with
The 100 tokens with the lowest True frequencies per the Brown corpus were on average +0.13 and +0.19 more complex than those 100 tokens with the highest True frequencies within the WCL dataset and the CERF-J wordlist, respectively. Furthermore, an average decrease of −0.0012 per +10.00 increase in True Frequency was observed for those complexity assignments belonging to the WCL dataset and −0.0019 for those belonging to the CERF-J wordlist. An average decrease of −0.0065 and −0.0099 per +10.00 increase in True Frequency was likewise observed per the BNC for the WCL dataset and the CERF-J wordlist, respectively. The CompLex dataset alternatively depicted a weaker negative correlation between word frequency and complexity. For most instances, the frequency of a given token did not appear to have a great influence on its assigned complexity. The 100 least frequent tokens were found to be on average only +0.06 more complex in regards to their Zipfian frequencies, and on average only +0.04 more complex per the Brown corpus and +0.05 more complex per the BNC in regards to their True frequencies. The 100 most frequent tokens were on average rated to be −0.15 less complex by the CompLex dataset's L1 annotators than they were by the WCL dataset's L2 annotators and the CERF-J wordlist in regards to their True frequency. P-values of 0.0004 and 0.0035 suggest that there is a significant difference between the complexities of the 100 most and least frequent tokens of either set of annotators, respectively. In addition, a less impressive average decrease of −0.00008 in complexity using the Brown corpus and a decrease of −0.0013 using the BNC per +10.00 increase in True Frequency was also observed for the CompLex dataset.

Document frequency
Figure 3 shows a snapshot of the relative document frequency of target words found within the BNC. It was discovered that those target words with a relative document frequency >0.140 exhibited a similar complexity. However, a decrease in assigned complexity can be seen between relative document frequencies of 0 to 0.140. This is in parallel with the negative correlation shown between assigned complexity and word frequency for CompLex, WCL, and the CERF-J wordlist within Section 4.1. An average decrease of −0.0017 for CompLex, −0.0054 for WCL, and −0.0075 for the CERF-J wordlist was seen per +0.006 increase in relative document frequency. As such, the L2 annotators of the WCL and CERF-J wordlist seem to be more affected by the frequency of a word compared to the L1 annotators of the CompLex dataset.

FIGURE Avg. complexity per word length.
FIGURE Average complexity per syllable.

Word length
Figure 4 depicts each dataset's average complexity values per word length. We predicted that longer words would be less familiar and more difficult to learn for either set of annotators and consequently would be rated with higher complexity values. This was seen to be true across all datasets. Each dataset demonstrated a positive correlation between word length and complexity. Tokens with 3-7 characters, including such words as "day", "men", and "may", were rated to be on average −0.15 less complex than tokens with 10-14 characters, examples being "management", "international", and "relationship". On average, a +0.03 increase in complexity was observed per every additional character across all datasets.
The CompLex dataset appears to show a less strong positive correlation between word length and complexity compared with the WCL dataset and the CERF-J wordlist.Words with 4-7 characters were assigned with an average complexity of 0.22.Words with 10-14 characters were rated as having an average complexity of 0.25.Therefore, on average, words with 10-14 characters were perceived to be +0.03more complex than those with 4-7 characters for the CompLex dataset's L1 English speaking annotators.In comparison, the L2 English speaking annotators of the WCL dataset and the CERF-J wordlist, respectively, assigned words with 10-14 characters with complexity values that were on average +0.25 and +0.18 greater than those with 4-7 characters.The L1 English speaking annotators of the CompLex dataset have subsequently interpreted long words of 10-14 characters to be on average −0.19 less complex than compared with the L2 speaking annotators of the WCL dataset and the CERF-J wordlist.A p-value of 0.0002 between the complexities assignments of 10-14 character words belonging to either set of annotators, confirms that this difference is significant. . Syllable count Figure 5 displays each datasets average complexity values per number of syllables within a given token.The WCL dataset and the CERF-J wordlist showed a positive correlation between assigned complexity and a target word's number of syllables.The CompLex dataset, however, demonstrated no such positive correlation.Instead, the CompLex dataset showed little to no fluctuation in complexity between 1 and 5 syllable words.For every additional syllable in this range, the CompLex dataset shows an extremely small increase in complexity of +0.004.Thus, no real change in complexity was observed.The WCL dataset and the CERF-J wordlist, on the other hand, showed incremental increases in complexity between 1 and 5 syllables by +0.06 and +0.04, respectively.This further proves that the number of syllables contained within a target word are less important for L1 English speakers when it comes to rating that word's complexity, whereas for L2 English speakers, an increased number of syllables may result in greater word difficulty.This is especially true if that word contains 5 or more syllables.However, a p-value of 0.077 indicates that the complexity assignments given to 1 to 5 syllable words are not significantly different between the two sets of annotators.As such, these observations should/must be verified on a larger sample of L1 and L2 English speakers. . Character N-grams The 10 most complex bigrams and trigrams with a frequency greater than 10 found among the 940 shared tokens are presented within Table 3.Several observations suggest that certain derivations, character combinations, or morphemes, increase the perceived complexity of target words for L2 English speakers, yet have no affect on complexity assignment for L1 English speakers. 
The trigram "nes" within the WCL dataset and the CERF-J wordlist was found to be in more complex words than it was within the CompLex dataset.This trigram was found among 10 of the 940 shared words, and was part of such words as "awareness", "thickness", "kindness", "weakness", and "righteousness".Its associated words were on average assigned a complexity value (a difficulty rating) of 0.34 for the WCL dataset and a complexity value of 0.45 for the CERF-J wordlist by the original annotators of the datasets.Within the CompLex dataset, however, these words were rated with an average complexity value of 0.22 and thus were on average rated as being −0.18 less complex.This may indicate that for L2 English speakers, target words with the derivational suffix "-ness", are considered to be more complex than they are for L1 English speakers.On the other hand, the trigram "nes" was also found in words, such as "Chinese" or "honest", and given the small sample size, further investigation was needed to verify this finding.As such, we calculated the average complexity assignment for all of the words with the derivation "-ness" found within each dataset, including those words which were not shared.In total, the CompLex dataset was found to contain 82 words with the "-ness" derivation with an average complexity of 0.26, whereas the WCL dataset and the CERF-J wordlist contained 39 and 52 words with the "-ness" derivation with average complexity ratings of 0.34 and 0.48, respectively. The bigrams "io" and "ti", and the trigram "ati" belonged to words with noticeably higher complexity values within the WCL dataset and the CERF-J wordlist than they did within the CompLex dataset.The bigrams "io" and "ti", were part of 118 and 111 words, respectively, and the trigram "ati" was part of 54 words of the shared 940 words.Many of these words had all three n-grams as they contained the suffix "-tion", for example: "isolation", "separation", "discrimination", "classification", and "communication".Those words that contained the bigrams "io" and "ti" had an average complexity value of 0.37 for the WCL dataset and an average complexity value of 0.4 for the CERF-J wordlist.The trigram "ati" was part of words with an average complexity of 0.4 for the WCL dataset and an average complexity of 0.41 for the CERF-J wordlist.In comparison, the bigrams "io" and "ti", and the trigram "ati", were found to have been associated with average complexity values of 0.25, 0.24, and 0.25 for the Complex dataset, respectively.The suffix "-tion" would, therefore, appear to be present within words that are on average +0.15 more complex for L2 English speakers than they are for L1 English speakers.Words without derivation, hence root words (lemmas), appeared to be less complex than those words with an derivational prefix or suffix, regardless of the dataset or annotator.741 of the 940 shared words were root words.On average, root words were found to have complexity values of 0.23, 0.22, and 0.28 for the CompLex dataset, the WCL dataset, and the CERF-J wordlist, respectively.The 199 remaining derivational words, had average complexity values of 0.24, 0.36, and 0.42 for the CompLex dataset, the WCL dataset, and the CERF-J wordlist, respectively.As such, the 741 root words appeared to be on average -0.01, -0.1, and -0.13 less complex across the three datasets when compared to those words with derivation.Derivation therefore would appear to universally increase the complexity of a target word. 
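The bigram/trigram statistics reported above lend themselves to a simple reconstruction: the sketch below averages normalized complexity over the words containing each character n-gram. The token dictionary, frequency threshold and scores are hypothetical placeholders, not the authors' code or data.

```python
from collections import defaultdict

def char_ngrams(word: str, n: int) -> set:
    """Unique character n-grams of a word (e.g. 'ness' -> {'nes', 'ess'} for n=3)."""
    return {word[i:i + n] for i in range(len(word) - n + 1)}

def avg_complexity_per_ngram(complexity_by_token: dict, n: int = 3, min_words: int = 10) -> dict:
    """Average normalized complexity of all words containing each character n-gram."""
    buckets = defaultdict(list)
    for word, complexity in complexity_by_token.items():
        for gram in char_ngrams(word.lower(), n):
            buckets[gram].append(complexity)
    return {gram: sum(vals) / len(vals)
            for gram, vals in buckets.items() if len(vals) >= min_words}

# Toy usage with a hypothetical token-to-complexity mapping.
scores = {"kindness": 0.31, "weakness": 0.35, "awareness": 0.38, "isolation": 0.41}
print(avg_complexity_per_ngram(scores, n=3, min_words=2))
```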
L2 English speakers have also appeared to have found derivational word forms more troublesome than L1 English speakers.This is since L2 English speakers have assigned words with the derivations: "-ness" or "-tion" with greater complexity values than compared to the L1 English speakers of the CompLex dataset, as detailed above.However, there were only a few instances were both root and derivational word forms were shared across the three datasets, such as "complex" and "complexity", "effect" and "effectiveness", "portion" and "proportion", "relation" and "relationship", "action" and "interaction", and so forth.The average differences between these root words and their derivational forms were complexities values of +0.01, −0.17, and −0.16 for the CompLex dataset, the WCL dataset, and the CERF-J wordlist, respectively.This suggests that the prior assumption is correct.For example, for the L2 English speaking annotators of the WCL dataset and the CERF-J wordlist, these root words were on average −0.16 to −0.17 less complex than compared to the L1 English speaking annotators of the CompLex dataset.Nevertheless, without further root and derivational word pairs, this finding is inconclusive. . Familiarity and prevalence As expected, a negative correlation was observed between familiarity and complexity (see Figure 6).However, this was only seen within the WCL dataset and the CERF-J wordlist.For these datasets, an average increase in perceived complexity was found of +0.002 per every −10 decrease in familiarity.The complexity assignments of the CompLex dataset did not demonstrate this trend.Instead, familiarity appeared to have had no affect on complexity assignment, with complex and non-complex words depicting similar or varying degrees of familiarity.For example, the 100 most familiar words found within the WCL dataset and the CERF-J wordlist had an average complexity of 0.16 and 0.22, respectively, whereas their 100 least familiar words had an average complexity of 0.27 and 0.31, respectively.This resulted in a difference of +0.11 for the WCL dataset and a difference of +0.09 for the CERF-J wordlist.In contrast, the 100 most and least familiar words found within the CompLex dataset depicted average complexity values of 0.22 and 0.22, respectively amounting to a range of −0.003.The difference between the 100 least familiar words' complexity assignments between the two sets of annotators was also found to be significant with a p-value less than 0.001.In turn, familiarity has little to no impact on complexity within the CompLex dataset than in comparison to the WCL dataset and the CERF-J wordlist. 
A less strong negative correlation was observed between prevalence and complexity across all of the datasets than expected (see Figure 7).However, for the CompLex dataset this correlation was even less emphatic.Words with little to no prevalence appeared to have been assigned with similar complexity values to those words which were rated as being highly prevalent.The 100 most prevalent tokens, including such words as "party", "building", and "morning", were assigned an average complexity of 0.22 and 0.27, whereas the 100 least prevalent tokens, made up of such words as "honor", "economy", and "market", were rated an average complexity of 0.27 and 0.36 for the WCL dataset and the CERF-J wordlist, respectively.The 100 most prevalent tokens were therefore −0.05 and −0.09 less complex in comparison to the 100 least prevalent tokens for these datasets.For the CompLex dataset, however, the 100 most and least prevalent tokens were assigned respective average complexities of 0.229 and 0.233.Therefore, this marks a less impressive decrease in complexity by −0.004 for the 100 most prevalent tokens.A pvalue of 0.048 marks a slight significant difference between the assigned average complexities of the 100 least prevalent tokens between both sets of annotators.It would, therefore, appear that both familiarity and prevalence are good indicators of complexity, but only for complexity assignments made by L2 English speakers.This is since L2 English speakers demonstrated a far stronger positive correlation between these two features and complexity, than in comparison to L1 English speakers. . Concreteness A negative correlation was observed between concreteness and complexity (see Figure 8).This negative correlation is again more prominent within the WCL dataset and the CERF-J wordlist.Those tokens which were assigned concreteness values of 5, marking them as highly concrete, for example "tree", "sand", "house", "chair", and "water", were on average assigned complexity values of 0.19, 0.14, and 0.22 in the CompLex dataset, the WCL dataset, and the CERF-J wordlist, respectively.Those tokens which were given concreteness values of 0, identifying them as highly abstract, for instance "attitude", "online", "complex", "righteousness", and "impact", were on average assigned complexity values of 0.23, 0.27, and 0.31 across the three datasets, respectively.As such, for every −1.00 decrease in concreteness, the WCL dataset and the CERF-J wordlist depicted an average increase in complexity by +0.02.The CompLex dataset, on the other hand, showed a less impressive increase in complexity by +0.006 per −1.00 decrease in concreteness.Furthermore, a pvalue of 0.03 marks that this difference is significant.As such, concreteness appears to have more of an effect on perceived lexical complexity for L2 English speakers, than it does for L1 English speakers. Discussion . 
Statistical features and complexity A negative correlation was found between word and document frequency and complexity for the CompLex dataset, the WCL dataset, as well as the CERF-J wordlist.This was unsurprising given that previous studies (Paetzold and Specia, 2016;Yimam et al., 2018;Desai et al., 2021;Shardlow et al., 2021b) have demonstrated that word frequency is a good baseline indicator of lexical complexity.However, the strength of this negative correlation varied between L1 and L2 English speakers.The WCL dataset and the CERF-J wordlist, being annotated by L2 English speakers, depicted a significantly stronger negative correlation than in comparison to the CompLex dataset that was annotated by L1 English speakers. The same finding was also observed in regards to word length.The WCL dataset and the CERF-J wordlist both showed a strong positive correlation between the number of characters in a word and that word's assigned complexity, with the CompLex dataset again showing a significantly less strong correlation between the two.Word length has also been proven to be a good baseline indicator of lexical complexity (Paetzold and Specia, 2016;Yimam et al., 2018;Desai et al., 2021;Shardlow et al., 2021b).Nevertheless, since that both word frequency and word length vary in their correlations with lexical complexity between L1 and L2 English speakers, it can be assumed that uncommon words of a greater length are far more complex for L2 English speakers than they are for L1 English speakers. A possible explanation is that L2 English speakers are far less likely to be exposed to and thus be familiar with uncommon and long English words and would subsequently rate these words as being more complex than in comparison with L1 English speakers.Furthermore, English words that are over 6 characters long, are likely to contain a high number of syllables and would generally be hard to pronounce and learn given their length.This would likely increase the perceived complexity of such words, especially for someone who is unacquainted with English vocabulary or phonology.For instance, words such as "righteousness", "vanity", "conscience", "nucleus", and "genetics" received greater complexity values by L2 English speakers than compared to L1 English speakers.These tokens are all jargon that is specific to the religious or academic genre per the CompLex dataset (Shardlow et al., 2020).They were also of considerable length compared with other less complex tokens. . A phonological feature and complexity A positive correlation was observed between syllable count and complexity for both the WCL dataset as well as the CERF-J wordlist.However, this was not the case for the CompLex dataset.This may support the assumption that words with a high number of syllables are especially hard for L2 English speakers.However, this cannot be certain, since a p > 0.05 indicates no significant difference between the complexity assignments of 1-5 syllable words belonging to either set of annotators.In spite of this, if L2 English speakers were to find an increase number of syllables difficult to articulate and/or process, then this may again be due to a possible unfamiliarity with English phonology, or, a more likely explanation is a phenomenon known as cross-linguistic influence. 
Cross-linguistic influence is where an individual's L1 has an active effect on their L2 production. This phenomenon results in a variety of production errors. For instance, an L1 Chinese speaker who is unfamiliar with English pluralization may incorrectly use the singular form of an English target word in a plural setting. This is because Chinese does not use inflection to dictate pluralization (Yang et al., 2017). Cross-linguistic influence can subsequently be linked to differing perceptions of lexical complexity, with some demographics finding specific English words, word forms, or pronunciations to be more or less complex than others, depending on their L1.

Certain vowels, diphthongs, or phonological patterns are specific to certain languages and are not present in English, or are unique to English yet are not found in other languages. As a consequence, those words with a high number of syllables are more likely to contain troublesome phonemes which may be prone to cross-linguistic influence. An example of this is the English /ɪə/ diphthong, as found in beer, pier, and weary (Enli, 2014). For Chinese L2 English speakers, this particular diphthong is hard to pronounce, since there is no equivalent sound in Mandarin Chinese. Therefore, the lack of a similar sounding Chinese phoneme causes those English words that contain the /ɪə/ diphthong to be hard for Chinese L2 English speakers to articulate and potentially learn (Enli, 2014). As a result, Chinese annotators may rate such words as being more complex than those English words that they find easier to articulate and that have fewer syllables. Thus, a greater number of syllables increases the likelihood of troublesome phonemes prone to cross-linguistic influence. This may explain the observed positive correlation between number of syllables and L2 English speakers' perceived lexical complexity depicted in Figure 3.

Morphological features and complexity
"-ness" suffix
The "-ness" suffix is used to transform a root word to a countable noun, such as changing "thick" to "thickness", or "kind" to "kindness". It is also derivational in that it can be used to change the meaning of a word to denote a related but separate concept, such as "weak" to "weakness" as in "he was weak" to "his weakness is". Interestingly, words that contained the "-ness" suffix were rated as being significantly more complex by the L2 English speakers of the WCL dataset and the CERF-J wordlist than by the L1 English speakers of the CompLex dataset.

"-tion" suffix
The "-tion" suffix is used to transform verbs to abstract nouns, such as transforming "isolate" to "isolation", or "discriminate" to "discrimination". Words that contained the "-tion" suffix were also found to be more complex for L2 English speakers than for L1 English speakers in the respective datasets.

Root words
Unlike words with the above derivations, those words without derivation appeared to be universally less complex than those with derivation, across all annotators and datasets.
Prior research has attempted to explain the neural processing of L1 derivations (Kimppa et al., 2019).However, less research has been conducted on how speakers process derivations within their L2 (Kimppa et al., 2019).Several hypotheses have been put forward that attempt to explain why production errors are caused in connection to L2 derivation and in turn, why L2 English speakers may perceive such words, such as those words with the suffix "-ness" or "-tion", to be more complex than root words without derivation (Gor, 2010). L2 processing is believed to be more cognitively demanding.It is theorized to require more cognitive resources than L1 processing.As result, it can lead to delayed response time and production errors even among highly proficient L2 speakers (McDonald, 2006;Clahsen and Felser, 2018).McDonald (2006), conducted several experiments and found that L1 English speakers produced the same grammatical errors as L2 English speakers when in a stressful and high processing environment, such as dealing with noisy data or given a short time to respond.McDonald (2006) concluded that L2 speakers must therefore experience high cognitive demand whenever they process their L2.Hopp (2014) goes on to explain this further.He demonstrated that increased cognitive demand during L2 processing results in there being insufficient resources for syntactic processing.This may subsequently explain why such forms as "-ness" and "-tion" as well as other derivations, are perceived to be more complex for L2 than compared with L1 English speakers.Inadequate working memory may cause L2 English speakers to have difficulty with decoding these derivations, whereas L1 English speakers having less cognitive load, can do so with ease.The shallow-structure hypothesis (Clahsen andFelser, 2006a,b, 2018), is unlike the above explanations.Alternatively, this hypothesis infers that a difference in L2 processing is responsible for differences in complexity assignment between L1 and L2 English speakers, rather than an increased cognitive load and inadequate cognitive resources.It suggests that L2 learners rely more on lexical and semantic information than syntactic cues when attempting to derive the meaning of a given sentence.In other words, the shallowstructure hypothesis puts forward that an L2 learner's syntactic representations are often "shallower and less detailed" in their L2.It hypothesizes that this is a result of direct form-function mapping, memorizing a particular form of a given L2 word, rather than ascertaining that form from learned L2 syntactic rules (Dowens and Carreiras, 2006).As such, the shallow-structure hypothesis is also different from cross-linguistic influence, since the former is concerned with differences in L1 and L2 processing, whereas the latter is entirely a consequence of L1 influence on L2; however, some cross-over between the two does occur (Clahsen and Felser, 2018).If the shallow-structure hypothesis were to be true, then this would explain why such "-ness" and "-tion" words were perceived to be more complex for L2 in comparison to L1 English speakers.For instance, L2 English speakers may only be able to recall those word forms which they are familiar with, having memorized the word: "awareness", rather than learning the uses of the derivation "-ness".L1 English speakers, on the other hand, may be better equipt to infer the meaning of an unseen word form, based on their prior syntactic knowledge of that derivation: "-ness".In turn, such words would appear less complex for L1 than L2 
English speakers. . Psycholinguistic features and complexity . . Familiarity and prevalence It was expected that both familiarity and prevalence would demonstrate a negative correlation with complexity, with higher familiarity and prevalence ratings resulting in reduced perceived lexical complexity.Results showed this to be true, but only for the WCL dataset and the CERF-J wordlist.The CompLex dataset showed no such trend between familiarity and complexity, with only a small negative correlation between prevalence and complexity being present. A possible explanation may be found in the annotation process of the provided familiarity and prevalence ratings taken from the MRC psycholinguistic database (Wilson, 1988) and Brysbaert et al. (2019)'s dataset, respectively.Both of these datasets acquired their familiarity and prevalence ratings from a set of L2 English speaking annotators of mixed proficiency.It can, therefore, be expected that a stronger correlation would exist between other L2 English speakers' complexity ratings and the familiarity and prevalence ratings provided by these datasets than in comparison to those complexity ratings provided by L1 English speakers.Nevertheless, this does not diminish the importance of familiarity and prevalence in regards to lexical complexity.Another potential explanation is that L1 and L2 English speakers are more or less familiar with differing words as previously mentioned in Section 5.1.L1 English speakers may be aware of regional varieties, dialects, or vernacular.For instance, "wild" could be considered to have vernacular connotations in British and American English.This may explain why similar vernacular words were rated as having high familiarity and low complexity by L1 English speakers, yet low familiarity and high complexity by L2 English speakers.Various studies (Desai et al., 2021;Shardlow et al., 2021b) have also proven that there is correlation between familiarity, prevalence and lexical complexity.As such, our finding that the CompLex dataset did not reflect a negative correlation between these features and lexical complexity, may be a direct result of the poor generalizability of the MRC psycholinguistic database (Wilson, 1988) and Brysbaert et al. (2019)'s dataset toward L1 English speakers, rather than there being no such correlation. . . Concreteness Concreteness was found to negatively correlate greater with the complexity assignments of L2 English speakers than compared with those belonging to L1 English speakers.It is well-documented that concrete nouns are learned before, processed faster, and recalled more easily than abstract nouns (Altarriba and Basnight-Brown, 2011;Vigliocco et al., 2018).The same can be said for L2 English learners.Altarriba and Basnight-Brown (2011), conducted a Stroop color-word test.They measured their L2 English speaking participants reaction time to various concrete and abstract nouns.They discovered that concrete nouns were responded to significantly faster than abstract nouns.Martin and Tokowicz (2020) discovered a similar finding.They tested L1 English speakers ability at learning L2 concrete and abstract nouns.It was recorded that concrete nouns "were responded to more accurately than abstract nouns" (Martin and Tokowicz, 2020).Mayer et al. 
(2017) conducted a vocabulary translation task during fMRI scans on L2 English learners.They found that L2 concrete and abstract nouns elicited the same responses as L1 concrete and abstracts nouns and subsequently concluded that L2 nouns are likely to be prone to the same concreteness effects as L1 nouns. The above studies exemplify a phenomenon known as the concreteness effect.This phenomenon refers to the negative correlation between a noun's level of concreteness and its overall acquisition difficulty and processing time.It can subsequently be used to describe the negative correlation found within this study between concreteness and perceived lexical complexity.This is since those words which are learned later and take longer to process would likely be considered more complex.There are several possible explanations for this phenomenon. The context availability hypothesis (Martin and Tokowicz, 2020), states that differences in concrete and abstract noun complexity is due to the differing contexts in which these words are found.For instance, a concrete noun, such as "chair" is likely to be more common and appear in more contexts than the abstract noun "communism".The dual-coding theory (Paivio, 2006) as well as the different organizational frameworks theory (Crutch et al., 2009), suggest that the human mind represents concrete and abstract words differently.Concrete nouns, for instance "chair", are believed to be encoded with visual cues in accordance to their real world manifestation, such as "cushion", "chair leg", or "arm rest".Abstract nouns, however, for example "communism", are represented as concepts, having less emphatic visual identifiers with more symbolic associations, "red", or "hammer and sickle". Spelling error classification . Use case Spelling errors are symptomatic of lexical complexity.Complex words are more likely to be misspelt than non-complex words.An individual that is familiar with a word is more likely to know that word's orthography than in comparison to an unfamiliar and unknown word (Paola et al., 2014).With this in mind, we hypothesized that the above differences in perceived lexical complexity between L1 and L2 English speakers could be used to differentiate between spelling errors made by these two target populations. It is well-documented that L2 English speakers make different types of spelling errors compared to L1 English speakers (Napoles et al., 2019).However, the connection between spelling error and lexical complexity as defined within the field of natural language processing has been left fairly unexplored (North et al., 2022).A doctoral thesis by Wu (2013) looked into the relationship between self-reported word frequency, familiarity, and morphological complexity with spelling error.A sample of 220 5th to 7th grade L1 English speakers were taken from American schools.Results indicated a strong negative correlation between word frequency and spelling error for 7th grade students, a slight negative correlation between familiarity and spelling error for all students, and a positive correlation between morphological complexity and spelling error that decreased with age. 
It is plausible that words that are frequently misspelt may exhibit the same features that mark words as being complex.The previous analysis in Section 5 indicates that features such as word frequency, length, syllable count, familiarity and prevalence, and concreteness are more greatly correlated with L2 than L1 English speakers' perception of lexical complexity.We, therefore, used these features to train several binary machine learning (ML) classifiers to distinguish between spelling errors made by L1 and L2 English speakers.This was done to test the validity of our previous observations and to provide a use case for our findings within TS and CALL technologies. . Dataset and models Our original analysis was conducted on a sample of 940 sharedtokens.To obtain the best possible performance, we included all instances from the CompLex dataset (3,144) and a equal number of instances from the WCL and CERF-J wordlist (3,144) for our train set. The test set was obtained from the dataset introduced by Napoles et al. ( 2019), being a different dataset from those used within the above analysis.This dataset was created for grammatical error correction and provides spelling mistakes made by L1 and L2 English speakers.It houses 1,984 and 1,936 sentences extracted from formal Wikipedia articles written by L1 English speakers and a collection of student essays written by L2 English speakers, respectively.Napoles et al. ( 2019) asked a set of 4 trained annotators to examine each sentence and identify grammatical and spelling errors.We selected 396 of the spelling errors made by L1 and 287 of the spelling errors made by L2 English speakers.We used these spelling errors as our test set having labeled each instance with a corresponding L1 or L2 spelling error label (shortened to L1 or L2). We trained a total of five binary ML classifiers.These included a Random Forest (RF), Support Vector Classifier (SVC) and a Naive Bayes (NB) model, as well as two baseline models in the form of a majority classifier (MC), and a random classifier (RC).These models were trained on the aforementioned features described in Section 2.3 with the exception of character ngrams.Given the size of our test set, models were unable to draw meaningful correlations between character n-grams and spelling error.Average age-of-acquisition (AoA) was also included as an additional feature.AoA is defined as the age at which a word's meaning is first learned (Desai et al., 2021).It was calculated by averaging the AoAs provided by Brysbaert and Biemiller (2017).Each model was also trained on four feature sets, as explained below.These feature sets contained a combination of different features based on their individual performances: .Performance Models were assessed on their macro f1-scores since there was an equal distribution of class labels within the train and test sets.Marco f1-score being the average f1-score achieved per-class label.Despite the RF, SVC, and NB models having achieved similar performance to the RC and MC baseline models, increases in performance were observed when certain features were taken into consideration.Table 4 lists all model performances for each feature and feature set. 
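As a rough illustration of the classification setup described above, the following scikit-learn sketch trains the three learners and the two dummy baselines on a lexical feature matrix and reports macro F1. The feature matrix, labels (L1 = 0, L2 = 1) and hyperparameters are placeholders; the original feature extraction and train/test construction are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.dummy import DummyClassifier
from sklearn.metrics import f1_score

# Placeholder arrays: rows are words, columns are lexical features
# (e.g. frequency, length, syllables, prevalence, concreteness, AoA).
rng = np.random.default_rng(0)
X_train, y_train = rng.random((200, 6)), rng.integers(0, 2, 200)  # L1 = 0, L2 = 1
X_test, y_test = rng.random((80, 6)), rng.integers(0, 2, 80)

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVC": SVC(),
    "NB": GaussianNB(),
    "MC": DummyClassifier(strategy="most_frequent"),              # majority baseline
    "RC": DummyClassifier(strategy="uniform", random_state=0),    # random baseline
}

for name, model in models.items():
    model.fit(X_train, y_train)
    preds = model.predict(X_test)
    print(name, round(f1_score(y_test, preds, average="macro"), 3))
```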
Frequency was found to improve our RF model's performance to a macro f1-score of 0.577 surpassing the highest performance achieved by our baseline models.Figure 9 depicts the class predictions of our RF when trained on frequency.Our RF model was able to use frequency to correctly predict a large number of the L1 English speakers' spelling errors, whilst simultaneously being less successful with predicting spelling errors made by L2 English speakers. The psycholinguistic features of prevalence, concreteness and AoA likewise improved our RF's performance achieving macro f1scores of 0.552, 0.572, and 0.570, respectively.However, in contrast to frequency, these features improved our RF's ability to predict L2 English speakers' spelling errors.Figure 10 shows the class prediction of our RF when trained on feature set C comprising these psycholinguistic features plus frequency.After having been trained on this feature set, our RF was able to correctly predict a larger number of spelling errors made by L2 English speakers alongside those instances already correctly predicted as belonging to L1 English speakers by our previous frequency-based RF.Feature set C resulted in our RF achieving it's highest performance with a macro f1-score of 0.611. To determine whether our RF's performance using feature set C was statistically significant to that achieved by our RC baseline, predictions using feature set C were generated ten times.A t-test was then applied to the f1-scores achieved by our RF and RC models.The p-value obtained is lower than 0.05 indicating that the performance of our RF trained on feature set C is statistically significant compared to our RC baseline. Feature sets B and D surprisingly performed less well than feature set C having produced slightly worse macro f1-scores of 0.562 and 0.559, respectively.We contribute this to feature set C's inclusion of AoA.Feature set B did not include this feature, whereas feature set D included all features which likely convoluted class boundaries. . Transferable features Frequency, prevalence, concreteness, and AoA are features that can be gained from LCP datasets and then used to train a binary classifier for predicting spelling errors made by L1 or L2 English speakers.Contrary to our previous observation in Section 5, frequency was found to be indicative of L1 English speakers' spelling errors more so than spelling errors made by L2 English speakers.This is likely a result of the strong correlation observed between frequency and lexical complexity, regardless of the target population (Wu, 2013).Prevalence, concreteness, and AoA, on the other hand, were able to predict more spelling errors made by L2 English speakers.This supports our previous finding.It suggests that words that are less prevalent, less concrete, and are learned at a later age are prone to being misspelt by L2 English speaker's, hence are likely to be considered more complex by this demographic. 
Conclusion and outlook
This study aimed to discover whether sizable differences exist between the lexical complexity assignments of L1 and L2 English speakers. The complexity assignments of 940 shared tokens were extracted and compared from three LCP datasets: the CompLex dataset (Shardlow et al., 2020), the WCL dataset (Maddela and Xu, 2018), and the CERF-J wordlist (Tono, 2017). It was found that word frequency, length, and syllable count had a greater effect on perceived lexical complexity for L2 English speakers than they did for L1 English speakers. Various derivations, "-ness" and "-tion", increased lexical complexity for L2 English speakers more so than for L1 English speakers. Root words were seen to be universally less complex by comparison with their derivational forms. Familiarity and prevalence influenced lexical complexity for L2 English speakers, yet for L1 English speakers only prevalence had an effect. Concreteness was found to predict lexical complexity mainly for L2 English speakers. However, a less emphatic correlation was also observed for L1 English speakers.

Our findings were lastly applied to the task of automatically classifying spelling errors made by L1 and L2 English speakers. It was found that frequency obtained from the CompLex dataset was able to predict spelling errors made by L1 English speakers. Familiarity, prevalence, and concreteness, on the other hand, obtained from the WCL dataset and the CERF-J wordlist, were able to predict more spelling errors made by L2 English speakers. As such, we demonstrate that several features ascertained from LCP datasets are transferable and that several of our findings are generalizable. We aim to use the differences between L1 and L2 English speakers' perception of lexical complexity to better develop personalized LCP systems, in turn aiding future TS and CALL applications.

FIGURE Avg. complexity per relative document frequency within the BNC.
FIGURE Average complexity per concreteness rating taken from Brysbaert et al. (2013).
FIGURE Predictions of RF trained on set C.
FIGURE Predictions of RF trained on frequency.
TABLE Example of a sentence annotated with both binary and continuous complexity values from CWI and LCP systems, respectively, taken from the CompLex dataset.
TABLE Example of target words shared between the three datasets: CompLex, WCL, and CERF-J.
TABLE Average complexities and frequency of the top most complex bigrams and trigrams across the three datasets.
TABLE Macro f1-scores produced by each feature and feature set ordered from highest to lowest score on feature set C, being the best performing feature set. Best performances are in bold. Dashed lines separate baseline models (RC and MC) and feature set A from non-baseline experiments.
2023-12-05T16:35:28.399Z
2023-11-30T00:00:00.000
{ "year": 2023, "sha1": "4c30b065b156a0b9bd59065cf8eb5a2eba293b3c", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/frai.2023.1236963/pdf?isPublishedV2=False", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2cabb0e0135417af775e8f6c2fcbba59635700d9", "s2fieldsofstudy": [ "Linguistics" ], "extfieldsofstudy": [] }
233457595
pes2o/s2orc
v3-fos-license
Anxiety and depression severity in neuropsychiatric SLE are associated with perfusion and functional connectivity changes of the frontolimbic neural circuit: a resting-state f(unctional) MRI study

Abstract
Objective To examine the hypothesis that perfusion and functional connectivity disturbances in brain areas implicated in emotional processing are linked to emotion-related symptoms in neuropsychiatric SLE (NPSLE).
Methods Resting-state fMRI (rs-fMRI) was performed and anxiety and/or depression symptoms were assessed in 32 patients with NPSLE and 18 healthy controls (HC). Whole-brain time-shift analysis (TSA) maps, voxel-wise global connectivity (assessed through intrinsic connectivity contrast (ICC)) and within-network connectivity were estimated and submitted to one-sample t-tests. Subgroup differences (high vs low anxiety and high vs low depression symptoms) were assessed using independent-samples t-tests. In the total group, associations between anxiety (controlling for depression) or depression symptoms (controlling for anxiety) and regional TSA or ICC metrics were also assessed.
Results Elevated anxiety symptoms in patients with NPSLE were distinctly associated with relatively faster haemodynamic response (haemodynamic lead) in the right amygdala, relatively lower intrinsic connectivity of orbital dlPFC, and relatively lower bidirectional connectivity between dlPFC and vmPFC combined with relatively higher bidirectional connectivity between ACC and amygdala. Elevated depression symptoms in patients with NPSLE were distinctly associated with haemodynamic lead in vmPFC regions in both hemispheres (lateral and medial orbitofrontal cortex) combined with relatively lower intrinsic connectivity in the right medial orbitofrontal cortex. These measures failed to account for self-rated, milder depression symptoms in the HC group.
Conclusion By using rs-fMRI, altered perfusion dynamics and functional connectivity was found in limbic and prefrontal brain regions in patients with NPSLE with severe anxiety and depression symptoms. Although these changes could not be directly attributed to NPSLE pathology, results offer new insights on the pathophysiological substrate of psychoemotional symptomatology in patients with lupus, which may assist its clinical diagnosis and treatment.
Key messages
What is already known about this subject?
► Anxiety and depression are among the most frequent neuropsychiatric events in neuropsychiatric SLE (NPSLE).
► Reduced perfusion in prefrontal white matter is associated with more severe anxiety symptomatology.
What does this study add?
► Anxiety and depression severity in NPSLE are associated with haemodynamic and functional connectivity changes of the brain frontolimbic neural circuit.
How might this impact on clinical practice or future developments?
► Understanding characteristics of brain function that parallel psychoemotional symptomatology in NPSLE may assist its clinical diagnosis and treatment.

Introduction
SLE commonly affects the central nervous system (CNS). Patients with neuropsychiatric SLE (NPSLE) may present with focal neurologic deficits, or 'diffuse' clinical manifestations, including mood disturbances, anxiety disorders or cognitive dysfunction. 1 Anxiety and depression are among the most frequent neuropsychiatric (NP) events in NPSLE, 2 3 with major impact on patients' mental health and health-related quality of life. [4][5][6][7] Histopathological findings suggest that NPSLE is associated with brain lesions, not detectable by conventional brain MRI (cMRI) in over 50% of the cases. 8 9 Advanced MRI techniques have largely overcome the limitations of conventional MRI by detecting haemodynamic and functional changes which may correlate with neuropsychiatric manifestations in patients with SLE. 10 To this end, patients with primary NPSLE (ie, attributed to SLE) displayed significant hypoperfusion in cerebral white matter that appears normal on cMRI. 11 Functional compensatory changes involving enhanced prefrontal activity during performance of challenging working memory tasks have also been reported in these patients. 12 Resting-state functional MRI (rs-fMRI) is a noninvasive imaging technique, using blood oxygenation level-dependent (BOLD) signal, that has been widely used to investigate brain function in various CNS diseases, including NPSLE, where altered functional connectivity was shown both within and between several key brain networks.
13 Interestingly, rs-fMRI could provide evidence about neural activity and also about cerebral perfusion alterations, by using time-shift analysis, a promising new method that has been used to assess haemodynamics in previous studies. [14][15][16][17][18] According to this method, the haemodynamic transfer speed (haemodynamic lag or lead times) is evaluated by using the temporal shift of low-frequency BOLD signal fluctuations of rs-fMRI, and correlates with regional brain perfusion. 18 Few studies have used perfusion MRI in SLE with contradictory findings, ranging from non-significant perfusion changes in normal-appearing white matter and normal-appearing grey matter, 19-21 to regional hypoperfusion 11 22 23 or hyperperfusion 20 23 24 in patients with NPSLE or SLE. Even more, research on haemodynamic and functional changes accompanying anxiety and depression symptoms in NPSLE has been limited. Recently, we reported reduced perfusion in prefrontal white matter (serving both dorsolateral and ventromedial prefrontal cortices) associated with more severe anxiety symptomatology in a cohort of patients with NPSLE. 25 This observation poses further questions regarding the connection between the putative haemodynamic dysregulation and functional changes, including altered patterns of connectivity of frontolimbic regions in NPSLE. Another question is the extent to which anxiety and depression symptoms have common pathophysiological substrates, both psychosocial and neuronal, 26 27 in view of the very strong correlations between scores on self-report scales of anxiety and depression, 28 also found in patients with NPSLE. 7 29 Tο address these issues, we assessed voxel-level haemodynamic and functional connectivity indices derived from a single technique (rs-fMRI) in patients with NPSLE. For this, we integrated voxel-based estimates of regional haemodynamic transfer speed (haemodynamic lag or lead times from time-shift analysis) as indices of regional brain perfusion, with measures of functional connectivity in all AAL regions where significant subgroup differences were found in independent-samples t-test and in the amygdala, in view of its central role for emotion/depression and anxiety. [30][31][32] Limbic and frontolimbic network connectivity was assessed through voxel-based (using the intrinsic connectivity contrast metric) 33 and conventional ROI-based approaches. The following research questions were explored: (1) Do individual differences in self-reported depression and anxiety symptoms among patients with NPSLE correlate with haemodynamic transfer speed within limbic and prefrontal cortices? (2) Το what extent are associations between indices of brain function and anxiety symptoms dependent on comorbid depression symptoms and vice versa? MateRIals and MetHOds Participants Patients diagnosed with NPSLE (n=32), who regularly visited the outpatient clinic of the University Hospital of Heraklion, Greece, as part of routine follow-up evaluation, were invited to participate. Expert clinicians (GB, AF, PS, DB) obtained a detailed history from all participants prior to the MRI examination. All patients met the revised ACR 1997 classification criteria for SLE. 
34 NPSLE diagnosis and attribution was based on physician judgement, following multidisciplinary approach and considering patient age and risk factors for NPSLE (anti-phospholipid antibodies, prior neuropsychiatric manifestation, generalised disease activity (estimated by SLEDAI-2K 35 ), findings of conventional MRI imaging and other diagnostic procedures). Organ damage was assessed by the SLICC/ACR damage index. 36 Patients with prior major psychiatric disorder were not considered for the study. Additional inclusion criteria were age over 18 years, ability to speak and read Greek fluently, negative history of thromboembolic cardiovascular disease or other primary CNS diseases. Neuropsychological and emotional status assessments were performed within 3 months of the MRI study, without any alterations in treatment of the patients in the meanwhile (see table 1). Comparison rs-fMRI data were available on 18 healthy volunteers (15 women, mean age=42.9, SD 15.1 years) for whom self-reported depression severity ratings were available on the same instrument that was administered to patients with NPSLE (see below). The two groups were comparable on age (p=0.3) and education level (p=0.2), although controls included a higher percentage of men (p=0.01). MR imaging Brain MRI examinations were performed on a clinical 1.5T whole-body superconducting imaging system (Vision/Sonata, Siemens/Erlangen), equipped with highperformance gradients (gradient strength: 40 mT/m, slew rate: 200 mT/m/ms) and a two-element circularly polarised head array coil. fMRI data preprocessing and denoising Each BOLD time series consisted of 150 dynamic volumes (the first five were ignored in all subsequent analyses to avoid T1 saturation effects). Preprocessing steps included slice-time correction, realignment, segmentation of structural data, normalisation into standard stereotactic Montreal Neurological Institute (MNI) space and spatial smoothing using a Gaussian kernel of 8 mm full-width at half-maximum using SPM8. As functional connectivity is affected by head motion in the scanner, 37 we accounted for motion artefact detection and rejection using the artefact detection tool (ART; http://www. nitrc. org/ projects/ artefact detect). In terms of signal denoising, grey matter, white matter and cerebrospinal fluid signal (CSF) mean signals were regressed out of all voxel time series in order to mitigate their effects on BOLD time courses. The first five principal components of white matter and CSF regions were regressed out of the signal, as well as their first order derivatives. These steps were completed using CompCor implemented within the CONN preprocessing module and executed in MATLAB. The fMRI time series were detrended and bandpass filtered in the 0.008-0.09 Hz range in order to eliminate low-frequency drift and high-frequency noise. These preprocessing steps were applied to the data used for voxel level intrinsic connectivity contrast (ICC), ROI ICC and functional connectivity measures described later. For time-shift analysis (TSA), only the CSF signal was regressed out of the BOLD fMRI time courses, while all the remaining steps were applied as described earlier. Global grey and white matter signals are not considered noise in the calculation of TSA and thus their influence on the voxel time series was not removed. Time-shift analysis Whole-brain voxel-wise TSA maps were calculated, as described in several other studies, 14-17 using in house MATLAB scripts. 
First, a mask of the major venous sinuses was created based on the standard brain. The reference BOLD time series was calculated as the mean of all voxel time series included in the venous mask. Then, voxel-wise crosscorrelations were calculated in reference to this regressor for lags of −3 TRs to 3 TRs (or −6.96 s to 6.96 s). This entails the computation of the lagged versions of each voxel time series (−3 TR to +3 TR) and of the correlation coefficient of each lagged version of the time series with the reference signal. The lag value corresponding to the highest correlation coefficient is then assigned to each voxel as its time shift/delay value. A broader range of signal delay (eg, −6 TRs to +6 TRs) was not selected mainly because of the relatively high TR value attainable in the present study. The following metrics were computed for each aal ROI: percentage of voxels displaying haemodynamic lag (as indicated by TSA value >1 TR); percentage of voxels displaying haemodynamic lead (as indicated by TSA value <−1TR); average TSA value across all voxels comprising each ROI. Voxel-wise functional connectivity Voxel-wise global connectivity was assessed through the ICC, an estimate of the degree of association between the time series of a given voxel with all the remaining voxels in the brain (in the present study, voxels included in all 90 regions of the AAL atlas) computed as the square root of the mean of each ROI's squared correlation values with all other ROIs. ROI-level connectivity We also computed ROI-to-ROI connectivity estimates between all AAL regions where significant subgroup differences were found in independent-samples t-tests. First, each AAL ROI representative time series was calculated as the mean of all voxel time series within that ROI. ROI-to-ROI functional connectivity estimates were then calculated using the Pearson correlation coefficient for each pair of ROI time series. Negative voxel-level correlations were discarded from the analyses. Measures of psychoemotional status Self-reported symptoms of anxiety and depression were recorded using the Greek version of the Hospital Anxiety and Depression Scale (HADS) 38 with a widely employed cut-off for clinically significant anxiety or depressive symptomatology of 7/8 points. Patient and public involvement Patients or the public were not involved in the design, conduct, reporting or dissemination plans of our research. statistical analysis In the main analyses, we aimed to identify brain regions where individual differences in hemodynamics and functional connectivity appeared to be associated with the severity of anxiety or depression symptoms among patients with NPSLE. In preliminary analyses, we identified regions where significant voxel clusters were found in group-level, one-sample t-tests conducted on wholebrain TSA and ICC maps using SPM12 (at p<0.001 with minimum cluster size of 30 voxels). Group differences (between patients with NPSLE scoring high vs NPSLE scoring low on anxiety and between patients with NPSLE scoring high vs NPSLE scoring low on depression) were further assessed in SPM12 using independent-samples t-tests (evaluated at p<0.001 uncorrected). Next, significant voxel clusters in independent-samples t-tests were overlaid on the aal atlas to compute regionspecific TSA and ICC metrics, respectively (mean TSA, mean ICC, percentage of voxels with TR >1 and percentage of voxels with TR <−1). 
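A minimal numpy sketch of the voxel-level time-shift analysis and of the ICC metric described above is given below (the study itself used in-house MATLAB scripts together with SPM/CONN). Array shapes, the reference-signal extraction and the sign convention (positive values read as haemodynamic lag, negative as lead) are simplifying assumptions rather than the authors' implementation.

```python
import numpy as np

def time_shift_map(voxel_ts: np.ndarray, reference: np.ndarray, max_lag: int = 3) -> np.ndarray:
    """voxel_ts: (n_voxels, n_timepoints); reference: (n_timepoints,).
    For each voxel, return the integer lag (in TRs) that maximises the Pearson
    correlation between the shifted voxel time series and the reference signal."""
    n_vox, n_t = voxel_ts.shape
    best_lag = np.zeros(n_vox, dtype=int)
    best_r = np.full(n_vox, -np.inf)
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            v, r = voxel_ts[:, -lag:], reference[: n_t + lag]
        elif lag > 0:
            v, r = voxel_ts[:, : n_t - lag], reference[lag:]
        else:
            v, r = voxel_ts, reference
        # z-score both signals and average the product -> Pearson correlation.
        v_z = (v - v.mean(1, keepdims=True)) / v.std(1, keepdims=True)
        r_z = (r - r.mean()) / r.std()
        corr = (v_z * r_z).mean(axis=1)
        update = corr > best_r
        best_r[update], best_lag[update] = corr[update], lag
    return best_lag  # assumed convention: >0 haemodynamic lag, <0 haemodynamic lead

def intrinsic_connectivity(ts_matrix: np.ndarray) -> np.ndarray:
    """ts_matrix: (n_units, n_timepoints), units being voxels or ROIs.
    ICC per unit = sqrt(mean of squared positive correlations with all other units)."""
    corr = np.corrcoef(ts_matrix)
    np.fill_diagonal(corr, np.nan)
    corr = np.where(corr > 0, corr, np.nan)  # negative correlations discarded
    return np.sqrt(np.nanmean(corr ** 2, axis=1))
```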
In addition, pairwise connectivity indices were computed between all aadl ROIs where significant anxiety subgroup differences were found in independent-samples t-tests and separately among the set of aa ROIs where significant depression subgroup differences were found in independent-samples t-tests. These metrics were used to assess the unique associations of TSA, ICC and pairwise connectivity metrics with the severity of anxiety symptoms controlling for comorbid depression symptoms. In separate analyses, we assessed the unique associations of TSA, ICC and pairwise connectivity metrics with the severity of depression symptoms controlling for anxiety symptomatology. These analyses were conducted using partial correlation coefficients in SPSS V.20. In supplementary, post hoc analyses, we examined whether regional haemodynamic and functional connectivity measures associated with depression symptoms among patients with NPSLE (in correlational analyses and/or independent-samples t-tests comparing high-depression vs low-depression patients with NPSLE) could also account for such symptoms among healthy volunteers (n=18). Given that anxiety self-ratings were not available in the latter group, associations were assessed via zero-order correlations (ie, without controlling for anxiety symptoms) and were restricted to the regions where significant zero-order correlations with depression were found in the total NPSLE group. Results severity of anxiety and depressive symptomatology in patients with nPsle In agreement with previous reports, 2 3 5 we observed high frequency of significant depression (53.1%) and anxiety (65.6%) symptomatology among enrolled patients with NPSLE using clinically validated cut-off scores (HADS subscale scores ≥8 points). The correlation between HADS anxiety and depression scores was high (r=0.75), while both anxiety and depression symptom severity correlated modestly with disease duration (r=0.34, p=0.024 and r=0.37, p=0.015, respectively). Independent-samples t-tests revealed clusters of voxels displaying greater haemodynamic lag in the subgroup of patients with NPSLE reporting significant anxiety symptoms as compared with those with milder anxiety symptoms in bilateral dLPFC (middle frontal gyrus; BA 46/9) and in the right ACC (ie, the anterior cingulate gyrus minus the subgenual portion; see table 2 and figure 1D). In addition, the former subgroup displayed greater haemodynamic lead in voxel clusters located in the subgenual portion of the ACC (figure 1C), and reduced ICC in the left amygdala and right dlPFC (lateral orbitofrontal cortex ; figure 2C). These findings were refined by correlational analyses assessing the unique associations of TSA and ICC metrics with the severity of anxiety symptoms (HADS Anxiety score) in the total group of patients with NPSLE. Partial correlations were computed for TSA and ICC metrics derived from the six regions listed in table 2 plus the right amygdala (based on extant literature) and evaluated at Bonferroniadjusted p<0.05/7=0.007. Associations between anxiety and the resulting 21 pairwise ROI-to-ROI connectivity indices were evaluated at p<0.002. 
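The partial correlations referred to above were run in SPSS V.20 by the authors; they have a simple residualization form, correlating anxiety with a brain metric after regressing the comorbid depression score out of both variables. A minimal Python sketch of this first-order partial correlation is shown below; the variable names are hypothetical placeholders, not data from the study.

```python
import numpy as np

def partial_corr(x, y, z):
    """First-order partial correlation r(x, y | z): the Pearson correlation of
    the residuals of x and y after regressing each on the covariate z."""
    x, y, z = (np.asarray(a, dtype=float) for a in (x, y, z))
    def residualize(a, covariate):
        design = np.column_stack([np.ones_like(covariate), covariate])
        coef, *_ = np.linalg.lstsq(design, a, rcond=None)
        return a - design @ coef
    return np.corrcoef(residualize(x, z), residualize(y, z))[0, 1]

# e.g. HADS anxiety vs percentage of leading voxels in the right amygdala,
# controlling for HADS depression (all arrays are hypothetical):
# r = partial_corr(hads_anxiety, amygdala_lead_pct, hads_depression)
```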
These analyses revealed significant unique positive associations of the severity of anxiety symptoms with (a) haemodynamic lead in the right amygdala (percentage of leading voxels; p=0.003; figure 3A and table 3), (b) relatively lower ICC of the right dlPFC (lateral orbitofrontal cortex; p=0.001; figure 3B), (c) relatively lower connectivity between right dlPFC (middle frontal gyrus) and right vmPFC (medial frontal gyrus; p<0.001; figure 3C), and (d) relatively higher connectivity between right ACC and right amygdala (p=0.0033; figure 3D).

Haemodynamic and connectivity correlates of depression symptoms in NPSLE
The two NPSLE patient subgroups were comparable on disease duration (p=0.1), SLEDAI (p=0.2) and SLICC/ACR (p=0.2), although patients in the high depression subgroup were, on average, older (p=0.02). Independent-samples t-tests revealed clusters of voxels displaying greater haemodynamic lag in the subgroup of patients with NPSLE reporting significant depression symptoms as compared with those with milder depression symptoms in the left ACC and right precuneus (see table 4 and figure 4D; adjusting for age). In addition, the former subgroup displayed greater haemodynamic lead (figure 4C) combined with reduced ICC in voxel clusters located in the vmPFC (medial frontal and orbitofrontal cortices bilaterally; BA 9, 10, 11; figure 5C).

Figure 2 Whole-brain ICC maps in the subgroup of patients with NPSLE reporting significant anxiety symptoms (A; n=21) and patients reporting milder symptoms (B; n=11) displaying voxels with significant ICC values within each group (p<0.001 uncorrected, minimum cluster size=30 voxels). Voxel clusters displaying significantly higher ICC in the subgroup of patients with high vs low anxiety symptoms are shown on appropriate axial views (independent-samples t-tests thresholded at p<0.001 uncorrected). Voxel clusters displaying significantly higher lead in the subgroup of patients with high vs low anxiety symptoms (independent-samples t-tests thresholded at p<0.001 uncorrected; C) were found in the left amygdala (1) and right lateral orbitofrontal cortex (2).

In partial correlations performed on the total group of patients with NPSLE, we assessed unique associations of depression symptoms with TSA, ICC and pairwise connectivity metrics involving the eight regions listed in table 4, plus the amygdala (bilaterally). Coefficients were evaluated at Bonferroni-adjusted p<0.05/10=0.005 (for TSA and ICC metrics) and at p<0.001 (for pairwise ROI-to-ROI connectivity indices). These analyses revealed significant unique positive associations of depression symptom severity with (a) haemodynamic lead in the two vmPFC regions (percentage of leading voxels in orbitofrontal cortex bilaterally; p=0.0035; online supplemental file 1A and table 3) and (b) relatively lower ICC of the right vmPFC (orbitofrontal cortex; p<0.003; online supplemental file 1B). It should be noted that analyses on ROI-to-ROI connectivity among limbic and prefrontal regions indicated that the positive bivariate association between depressive symptoms and connectivity between the right ACC and amygdala (see table 3) was largely attributed to comorbid anxiety symptoms. The aforementioned associations were not mediated by effects of SLE disease activity or organ damage (quantified by the SLEDAI and SLICC/ACR damage index, respectively) on haemodynamic lag/lead and ICC values (|r|<0.2 in all cases).
Moreover, presence or number of lesions on conventional MRI did not correlate with any psychoemotional variables (|r|<0.1). Finally, patients with abnormal MRI (n=17) did not differ on anxiety or depression scores from those with normal MRI (n=16; p>0.9). It should be noted that correlational analyses on average TSA values computed across all voxels within a given AAL region did not reveal significant effects.

Haemodynamic and connectivity correlates of depression symptoms in healthy volunteers
Preliminary analyses suggested that adequate variability of HADS depression scores was present in the HC group to warrant correlational analyses with haemodynamic and connectivity indices (range=1-13 points, IQR=4 points). As expected, however, the HC group scored lower on average on HADS depression (mean=5.29, SD 2.73) as compared with the total NPSLE group (mean=10.30, SD 5.61; p<0.001). The HC group comprised 12 persons who scored <8 points on the HADS Depression subscale and 6 participants who had scores indicative of significant depression symptoms (≥8 points). Pearson correlations were computed between HADS scores and each of the 10 TSA, 10 ICC and 2 ROI-to-ROI connectivity measures listed in the lower half of table 3. Results revealed a non-significant trend involving haemodynamic lead in the right vmPFC (lateral orbitofrontal cortex), which was in the same direction as in the NPSLE group (r=0.324, p=0.04). An additional, marginally significant correlation was found for ICC in the right vmPFC (medial orbitofrontal cortex), which was in the opposite direction as compared with the NPSLE group (ie, positive; r=0.423, p=0.03).

Discussion
Our study provides novel evidence that individual differences in self-reported anxiety and depression symptoms among patients with NPSLE are associated with specific, complex patterns of haemodynamic and functional connectivity changes within limbic and prefrontal brain regions. Anxiety symptom severity was distinctly associated with faster haemodynamic transfer times (ie, haemodynamic lead) in the right amygdala, relatively lower intrinsic connectivity of orbital dlPFC, and relatively lower bidirectional connectivity between dlPFC and vmPFC combined with relatively higher bidirectional connectivity between ACC and amygdala. Conversely, depression symptom severity was specifically associated with haemodynamic lead in the vmPFC regions in both hemispheres (lateral and medial orbitofrontal cortex) combined with relatively lower intrinsic connectivity in the right medial orbitofrontal cortex.

Distinct haemodynamic and functional connectivity correlates of anxiety in NPSLE
The severity of anxiety symptoms in our cohort was specifically associated with faster haemodynamic transfer times within the right amygdala. Previous work on anxiety disorders has highlighted the crucial role of the amygdala in anxiety. 30 39-41 Experiments using optogenetics in rodents have shown that precisely modulating amygdala circuits can modulate anxiety-related behaviours on demand. 30 Increased resting metabolic activity, blood flow and reactivity in response to negative stimuli in the amygdala of patients with anxiety has been proposed to mediate typical anxiety symptoms, such as hypervigilance and biased threat detection. 39 40
Table 3 Zero-order and partial correlations between emotional, haemodynamic and functional connectivity measures in the total sample of patients with NPSLE

In addition, anxiety level among patients with NPSLE was associated with relatively lower levels of global connectivity of the lateral orbitofrontal cortex and relatively lower connectivity between the dlPFC and the vmPFC. In accordance, functional disturbances in orbitofrontal cortex have been found to associate with anxiety disorders, and anxiety symptoms have been shown to relate to decreased orbitofrontal cortex volumes in late-life depression. 39 42 43 Moreover, disturbed perfusion in the normal-appearing white matter underlying both dorsolateral and ventromedial prefrontal cortices has been shown to be strongly correlated with anxiety symptomatology in patients with NPSLE. 25 Both dlPFC and vmPFC are considered major components of the emotion appraisal and/or emotion regulation networks, in addition to their purported contribution to working memory and top-down control. [44][45][46][47] The relatively faster perfusion in the amygdala and decreased connectivity of dlPFC and vmPFC may reflect an imbalance between emotional reactivity and control among patients with NPSLE who experience severe anxiety. Finally, relatively higher levels of self-reported anxiety symptoms were distinctly associated with increased functional connectivity between the right ACC and amygdala. The ACC has often been considered the link between limbic (emotion-related) and prefrontal (cognition/top-down control) circuits, and therefore a crucial component for emotion regulation. 48 In particular, increased ACC-amygdala coupling has been associated with stress-induced attentional bias to emotional stimuli and rest, as well as with trait anxiety in healthy and anxious participants. [49][50][51] Moreover, developmental studies have indicated that increased ACC-amygdala connectivity is crucially involved in both anxiety and depression symptoms in early adulthood. 52 53

Distinct haemodynamic and functional connectivity correlates of depression in NPSLE
The second important finding of the present study is that the association between depression symptoms and relatively faster haemodynamic transfer in the vmPFC (encompassing lateral orbitofrontal regions) bilaterally was shown to be independent of comorbid anxiety symptoms. Moreover, faster haemodynamic transfer within the right vmPFC was paralleled by reduced global connectivity as a correlate of depression symptoms among patients with NPSLE. Directly attributing these findings to the pathophysiology of NPSLE requires direct comparisons with appropriate control groups (comprising non-NPSLE patients with and without significant depression symptoms), which were not possible in the present study.

Figure 5 Whole-brain ICC maps in the subgroup of patients with NPSLE reporting significant depression symptoms (A; n=17) and patients reporting milder symptoms (B; n=15) displaying voxels with significant ICC values within each group (p<0.001 uncorrected, minimum cluster size=30 voxels). Voxel clusters displaying significantly higher ICC in the subgroup of patients with high vs low depression symptoms (independent-samples t-tests thresholded at p<0.001 uncorrected; C) were found in the right lateral orbitofrontal cortex (1), right medial orbitofrontal cortex (2), left medial frontal gyrus (3), right medial frontal gyrus (4) and right angular gyrus (5).

Nevertheless, results from correlational analyses conducted within a group of healthy volunteers may provide some preliminary indications that the association between regional haemodynamics and depression symptoms is stronger among patients with NPSLE. Moreover, whereas depression symptoms were linked to relatively lower connectivity of the right vmPFC in patients with NPSLE, (at least) mild depression symptoms in healthy volunteers were associated with higher connectivity in this region. Mood disorders have been commonly linked to structural and functional changes in ventromedial and lateral orbitofrontal cortices. 54 55 Overall increased vmPFC engagement has been linked to higher trait rumination scores in bipolar patients, while patients with major depressive disorder exhibit decreased positive connectivity between mPFC and nucleus accumbens, increased negative connectivity between mPFC and amygdala, and increased connectivity within orbitofrontal cortex and other regions of the default mode network (ie, subgenual ACC, caudate, hippocampus, inferior parietal lobule, medial prefrontal cortex, middle temporal gyrus and precuneus). Engagement of this network is typically attributed to internally focused attention and self-orientation. [56][57][58] These reports, combined with the apparent faster perfusion of vmPFC found in the present study, consolidate the notion of distorted within-network selectivity associated with depression symptoms in patients with NPSLE. Notably, direct associations between depression symptoms and perfusion measures were not apparent in our previous study involving a partially overlapping cohort of patients with NPSLE using the DSC-MRI technique. 25 Apparently, haemodynamic indices derived from TSA of rs-fMRI data and DSC-MRI provide complementary measures of cerebral perfusion that differ in many respects. In fact, the image intensity of these two imaging techniques depends on different contrast agents (deoxyhaemoglobin for rs-fMRI and gadolinium for DSC). In DSC-MRI, the injected contrast agent is visible in all vessel types, while arteries and arterioles are almost invisible to BOLD fMRI. Thus, the latter estimates haemodynamic lag or lead times, as compared with the average time series recorded from major venous structures, while much faster blood flow (from arteries and arterioles) is registered by the DSC-MRI method, resulting in undifferentiated time lags. 17 Furthermore, perfusion measurements by DSC-MRI in our previous study relied on expert-placed measurement ROIs within the normal-appearing white matter, while TSA provides voxel-based haemodynamic estimates in both grey and underlying white matter regions.

Study limitations
There are important limitations in this study. First, direct associations between emotional status and brain function were only assessed among patients with NPSLE. Hence, we cannot conclude that the reported associations are specific to the putative brain pathophysiology characteristic of NPSLE or, alternatively, that they reflect the combined direct impact of the disease on brain function and its indirect effects on emotional well-being through the physical burden incurred by lupus.
Properly addressing this hypothesis requires direct comparisons of haemodynamic and connectivity indices between NPSLE patients and lupus patients who do not present with neuropsychiatric manifestations (or, at least, who do not have psychiatric manifestations directly attributed to lupus pathology). In the present study, we conducted supplementary analyses on data from a healthy control group demonstrating sufficient variability on HADS depression scores to permit correlational analyses. However, the range of HADS scores and the percentage of healthy participants with scores in the clinically significant range (ie, ≥8 points) were considerably lower than those observed among patients with NPSLE. Therefore, failure to find similar, significant associations between brain function indices and self-reported depression symptom severity in this group does not constitute conclusive evidence of the specificity of our findings for NPSLE. Moreover, perfusion indices were derived from the analysis of time lags (or leads) in resting-state recordings across brain voxels. These measures of time shift in the phase of low-frequency BOLD oscillations are presumed to reflect haemodynamic transfer lag or lead times and, at least in part, serve as indirect indices of regional brain perfusion. 17 18 Indices of haemodynamic lag have been validated in cases of severe acute or chronic brain ischaemia due to large vessel occlusion 14 16 59 60 or milder widespread haemodynamic deficits in patients with Alzheimer's disease 61 and have been shown to represent hypoperfused tissue. On the contrary, the significance of haemodynamic lead has not been explored systematically. Finally, we did not assess potential dependencies between time-lag indices and functional connectivity measures.

Clinical implications
The findings of this study add to the growing body of evidence regarding correlates of emotional symptoms in patients with NPSLE, which could assist diagnosis and potentially inform interventions. Currently, the diagnosis of NPSLE is complex, often relying on the acumen of experienced physicians. 62 In the case of non-focal manifestations such as depression and anxiety, the diagnostic utility of conventional MRI is limited. Our fMRI study offers insights into the distinct brain substrates of anxiety and depression symptomatology in NPSLE, which could thus be monitored to assist clinical diagnostics. Moreover, as identifying brain regions and/or networks associated with psychoemotional resilience and/or psychopathology can lead to more targeted and effective treatments, 63 our results might support the development of personalised treatment regimens (pharmacological and/or behavioural) in patients with NPSLE.

Conclusion
In the current study, we showed that anxiety and depression severity in NPSLE are associated with relative changes in haemodynamics and functional connectivity of the brain frontolimbic neural circuit, both estimated by using rs-fMRI. Anxiety symptoms were particularly related to faster perfusion dynamics in the right amygdala, decreased dlPFC intrinsic connectivity, decreased bidirectional connectivity between vmPFC and dlPFC, and relatively higher connectivity between ACC and amygdala. Depression symptoms were more closely linked to relatively faster perfusion dynamics in the vmPFC, bilaterally, paralleled by relatively lower intrinsic connectivity in the right vmPFC.
Notably, the correlation between depressive symptoms and ACC-amygdala connectivity could be attributed to the comorbid anxiety. Pending further confirmation, these findings might assist the diagnosis and management of these complex manifestations in NPSLE.

Competing interests None declared.
Patient consent for publication Not required.
Ethics approval The University Hospital of Heraklion Ethics Committee approved the study and the procedure was thoroughly explained to all patients and volunteers, who signed the written informed consent.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement Data are available on reasonable request. The computed metrics derived from resting-state fMRI recordings will be available on request.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability
2021-05-01T06:17:23.102Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "c407599ad43c56b31634a6933c9a5c0953ffc025", "oa_license": "CCBYNC", "oa_url": "https://lupus.bmj.com/content/lupusscimed/8/1/e000473.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a823361292086cdb75e4fbed9ad5d4e89f2e65fc", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
245857901
pes2o/s2orc
v3-fos-license
NMB as a Novel Prognostic Biomarker in Colorectal Cancer Correlating with Immune Infiltrates

Background: Neuromedin B (NMB) is associated with the occurrence and development of a variety of cancers. However, the role of NMB in colorectal cancer has not been studied further.

Methods: Transcriptome data and clinical data of CRC were downloaded and analyzed from the TCGA database and GEO database to study the differential expression of NMB. We analyzed the relationship between NMB expression and survival in patients with colorectal cancer using 8 public datasets from the Gene Expression Omnibus (GEO) database and the TCGA database. Meta-analysis was performed on the analysis results of the TCGA and GEO data to determine the role of NMB in CRC. The receiver operating characteristic (ROC) curve was used to evaluate the accuracy of NMB in predicting the survival rate of CRC patients. Wilcoxon and Kruskal-Wallis tests were used to study the relationship between clinicopathological features and the expression of NMB. Cox regression analysis was used to analyze the effect of NMB expression on survival. Gene set enrichment analysis (GSEA) was performed using the TCGA database to screen the signaling pathways regulated by NMB. The LinkedOmics platform was used to identify NMB co-expressed genes and explore the potential mechanisms of NMB mediation. The Tumor Immune Estimation Resource (TIMER) database was used to analyze the relationship between NMB expression level and immune infiltration. Related genes were identified by co-expression analysis, and four genes (NDUFB10, SERF2, DPP7, and NAPRT) were screened out as a prognostic signature. The relationship between risk score and OS was studied to explore the predictive value of the risk score for CRC. A nomogram was constructed to predict 1- and 3-year survival in colorectal cancer patients.

Results: NMB was highly expressed in colorectal cancer and suggested a poor prognosis. The ROC curve proved that NMB had high accuracy in predicting the survival rate of CRC patients. Multivariate regression analysis demonstrated that NMB was an independent predictor of survival in patients with CRC. GSEA identified the pathways involved in NMB regulation, including the P53 signaling pathway, VEGF signaling pathway, JAK-STAT signaling pathway, MAPK signaling pathway, mTOR signaling pathway, TGF-beta signaling pathway, and WNT signaling pathway, etc. Then, 6512 co-expressed genes were identified through the LinkedOmics platform to investigate the potential mechanisms of NMB regulation, including Hepatocellular carcinoma cell cycle, EGF/EGFR signaling pathway, VEGFA-VEGFR2 signaling pathway, etc. We also conclude that NMB is correlated with T cells CD8, T cells CD4 memory resting, and Macrophages M0. Different mutational forms of NMB were associated with the immune infiltration of 6 leukocyte types. We determined the relationship between NMB and immune marker sets in colorectal cancer, such as CCR7, CD3E, CTLA4, HAVCR2, HLA-DPB1. The predictive ability of the risk score was significantly better than that of the T, N, and M stages. A new nomogram for predicting the 1-year and 3-year OS of CRC patients was constructed, showing good reliability and accuracy for improved treatment decisions. In addition, NMB may contribute to drug resistance in CRC.

Conclusion: NMB is highly expressed in CRC and provides a potential biomarker for the diagnosis and prognosis of CRC.
Introduction
Cancer plays an important role in influencing morbidity and mortality in both developed and developing countries [1]. Due to the lack of understanding of the pathological process and regulatory mechanisms of cancer, the mortality rate caused by cancer will continue to rise in the future [2]. Although the prognosis of cancer has been improved by various treatments, in many cancers the prognosis has always been less than satisfactory. Among the various types of cancer, colorectal cancer (CRC) plays a major role. CRC is the third most common cancer and the second most common cause of cancer-related death in the world; 1.2 million patients are diagnosed with colorectal cancer and more than 600,000 die from the disease [3]. It has been reported that the 5-year survival rate for CRC is still very low, although there are many treatments available [4,5]. Therefore, it is of great necessity to identify key new biomarkers and potential mechanisms associated with CRC. NMB (neuromedin B), a protein-coding gene, encodes a member of the bombesin-like family of neuropeptides that negatively regulate eating behavior. The protein encoded by this gene regulates smooth muscle contraction in the colon by binding to its cognate receptor, the neuromedin B receptor (NMBR); NMBR is widespread in the gastrointestinal tract [6]. Currently, many studies have shown that NMB is closely related to the occurrence, development, recurrence, and metastasis of a variety of cancers, including breast cancer [7] and lung cancer [8]. Previous studies have shown that in normal and malignant colonic epithelial cells, NMB and its receptors are co-expressed in proliferating cells in an autocrine manner [9]. In our study, we used The Cancer Genome Atlas (TCGA) database to investigate the diagnostic and prognostic value of NMB expression in CRC. In addition, we also analyzed the relationship between NMB expression and the clinicopathological characteristics of patients with CRC and its prognostic value in CRC. We further evaluated biological pathways related to the pathogenesis of CRC and regulated by NMB using gene set enrichment analysis (GSEA). To further investigate the underlying mechanisms of NMB regulation, we used the LinkedOmics platform to identify the co-expressed genes of NMB and then investigated the regulatory pathways of these genes. The TIMER database was used to study the relationship between NMB and immune infiltration. A prognostic signature was established from genes related to NMB, the risk score of each patient was then calculated, and the relationship between the risk score and OS was explored. This study reveals the potential role of NMB in CRC, which will help us further understand the possible mechanisms of CRC.

Materials and Methods
Study design and data processing
Our study design is shown in a flowchart (Figure 1), including TCGA-based data collection and multiple bioinformatics analyses.
All microarray data sets (612 samples, including 44 normal samples and 568 tumor samples) and clinical information related to the survival time of 711 CRC patients were downloaded from The Cancer Genome Atlas (TCGA) (https://portal.gdc.cancer.gov/repository). Data sets were also obtained from GEO: expression data and clinical information for NMB were extracted from 8 datasets (GSE17536, GSE17538, GSE29623, GSE38832, GSE40967, GSE71187, GSE87211, GSE103479). The Perl programming language was used to match gene expression information with clinical information and to delete unknown or incomplete clinical information. Because the sequencing data and clinical information were obtained through public databases, there are no ethical issues.

Relative NMB expression levels between CRC and normal tissues
The survival, beeswarm, and limma packages were used to analyze whether there is a significant difference in the expression of NMB between CRC and normal tissues. The Wilcoxon test was used to determine the P-value. Boxplots were used to draw the scatter difference plot and the paired difference plot to visualize the results. R software (v. 4.0.4) was used for the above analyses. In addition, we verified the differential expression of NMB between colorectal cancer and normal tissues through Gene Expression Profiling Interactive Analysis (GEPIA).

Survival analysis
Patients were divided into two groups according to the median NMB expression level (low NMB expression group and high NMB expression group). Survival analysis was performed using the survival and survminer packages, and the result was visualized using the Kaplan-Meier survival curve. The survival, survminer, and timeROC packages were used to draw a receiver operating characteristic (ROC) curve to evaluate the accuracy of NMB in predicting the survival of CRC patients. At the same time, we evaluated the relationship between NMB expression level and overall survival (OS) using the Survival module in GEPIA (quartile NMB expression; cutoff-high, 75%; cutoff-low, 25%). The Kaplan-Meier (K-M) method was used to compare the different survival curves (cutoff by quartile expression level of NMB; cutoff-high: 75%, cutoff-low: 25%). The results of the TCGA and GEO databases were meta-analyzed to determine the relationship between the expression of NMB and the survival prognosis of patients with colorectal cancer.

Clinical correlation analysis
The Wilcoxon test was used to analyze the relationship between age, sex, distant metastasis, and the expression of NMB. The Kruskal-Wallis test was used to analyze the relationship between T, N, stage, and the expression level of NMB. Finally, boxplots were used to visualize the results. In addition, logistic regression analysis was performed to investigate the association between NMB and clinicopathological features.

Univariate and Multivariate Cox Regression Analyses
Univariate and multivariate analyses were performed using the Cox proportional hazards regression model. We used univariate analysis to assess clinicopathological parameters and NMB expression as predictors of survival. Multivariate Cox analysis was used to evaluate whether NMB could be used as an independent prognostic factor for CRC. The survival and survminer packages were used for data analysis, and the results were visualized using a forest plot.

Gene Set Enrichment Analysis (GSEA)
GSEA (version 4.1.0) was used to analyze the signaling pathways associated with NMB in CRC.
Gene set enrichment analysis was performed between the different phenotypes determined by NMB expression level. The annotated gene set (c2.cp.kegg.v7.2.symbols.gmt) was selected as the reference gene set, and the gene set permutation was performed 1,000 times. The normalized enrichment score (NES), nominal P-value, and false discovery rate (FDR) q-value were used to rank the enriched pathways for each group. Eventually, multiple cancer-related pathways were identified. To determine the co-expressed genes of NMB and the potential mechanisms of NMB mediation, the LinkedOmics platform was used to identify the co-expressed genes of NMB. GO_BP/CC/MF, KEGG pathways, Wiki pathways, and Reactome pathways were analyzed by over-representation enrichment analysis (ORA), and GO_BP/CC/MF and KEGG pathways were analyzed by gene set enrichment analysis (GSEA) to predict the potential functions of NMB.

Analysis of the association between NMB expression level and immune infiltrates
To further explore the mechanism by which NMB is involved in the pathological progression of CRC, we investigated the relationship between NMB expression and immune infiltration. Firstly, the e1071, preprocessCore, and limma packages were used to determine the content of immune cells in each sample, and then a histogram was plotted using the corrplot package to show the results of the immune infiltration. We divided the patients into a high expression group and a low expression group according to the expression level of NMB, and then used the limma and vioplot packages to conduct the difference test of immune cells (pFilter=0.05). Then, the limma, ggplot2, ggpubr, and ggExtra packages were used to test the correlation between the expression of NMB and the content of immune cells (pFilter=0.05). Finally, we intersected the results of the two tests to obtain the immune cells that were correlated with the expression of NMB. In addition, the SCNA module was used to explore the correlation between somatic copy number alteration (SCNA) of NMB and the immune abundance of 6 leukocyte types.

Correlation analysis of NMB expression level and gene markers of tumor-infiltrating immune cells
The Tumor Immune Estimation Resource (TIMER) is a web server for comprehensive analysis of tumor-infiltrating immune cells, including 10,897 samples covering 32 cancer types from The Cancer Genome Atlas (TCGA). Its Correlation module was used to analyze the correlation between NMB expression level and gene markers of tumor-infiltrating immune cells.

Construction and Evaluation of the Prognostic Signature of NMB
First, we obtained 162 differential genes by constructing a co-expression network of NMB (fdrFilter=0.00000001, logFCfilter=10), and then performed univariate Cox analysis on these DEGs to determine the DEGs significantly associated with OS (pFilter=0.05). Finally, we performed multivariate Cox regression analysis on the DEGs screened by univariate Cox analysis, and 4 genes (NDUFB10, SERF2, DPP7, and NAPRT) were selected as the prognostic signature. We used the following formula to calculate the risk score of each patient: risk score = coef(gene 1) × expression(gene 1) + coef(gene 2) × expression(gene 2) + ... + coef(gene n) × expression(gene n). The risk score is obtained by weighting the expression level of each gene by its regression coefficient (coef). The hazard ratio (HR) from the multivariate Cox regression analysis was log-transformed to calculate the coef value, and expression(gene n) denotes the expression of the nth gene included in the prognostic signature.
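As a concrete illustration of the weighted-sum formula above, the sketch below computes per-patient risk scores and the median split used to define the high- and low-risk groups. It is written in Python/NumPy rather than the R pipeline used by the authors; the coefficient values shown are the four weights reported later in the Results, and the expression arrays are hypothetical placeholders.

```python
import numpy as np

# Multivariate-Cox coefficients of the four-gene signature, as reported in the
# Results section of this paper.
COEF = {"DPP7": 0.547443833, "NDUFB10": -1.222239025,
        "NAPRT": 0.400602782, "SERF2": 0.792523684}

def risk_scores(expr):
    """expr: dict mapping gene symbol -> (n_patients,) expression array.
    Returns each patient's risk score (coefficient-weighted sum of expression)
    and a boolean flag for the high-risk group (above the median score)."""
    score = sum(c * np.asarray(expr[g], dtype=float) for g, c in COEF.items())
    return score, score > np.median(score)
```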
Patients were divided into high-risk and low-risk groups based on the median risk score. The difference in OS between the high-risk group and the low-risk group was analyzed by the K-M method, and the survival curve was obtained. We then plotted receiver operating characteristic (ROC) curves using the survivalROC package to assess the ability of the risk score and other clinical features to predict CRC outcome, and assessed sensitivity and specificity by the area under the curve (AUC values). In order to determine whether the risk score can be used as an independent predictor of the prognosis of CRC patients, we incorporated age, gender, stage, T, M, N, and risk score into univariate and multivariate Cox regression analyses.

Construction and Validation of Nomogram Based on Risk Score
A nomogram can intuitively estimate the survival rate of colorectal cancer patients, which has important value in clinical application. We screened the prognostic factors of colorectal cancer patients, constructed a nomogram using the survival and rms packages to predict the 1-year and 3-year survival rates of colorectal cancer patients, and drew calibration curves to evaluate the accuracy of the nomogram. Finally, the ROC curve of the nomogram was drawn and the area under the curve (AUC) was calculated.

Drug sensitivity analysis
We investigated the drug sensitivity of NMB-related genes by using Gene Set Cancer Analysis (GSCA) (http://bioinfo.life.hust.edu.cn/web/GSCALite/), a web-based platform for gene set cancer analysis (GSCALite).

Statistical Analysis
R software (version 4.0.4) was used for statistical analysis. The Wilcoxon and Kruskal-Wallis tests were used to analyze clinicopathological parameters and the expression level of NMB. Kaplan-Meier analysis was used to investigate the relationship between survival rate and NMB expression level. Univariate and multivariate survival analyses were performed using the Cox proportional hazards regression model.

Results
The differential expression of NMB between CRC tumor tissues and normal tissues
The scatter difference plot (p<0.001, Figure 2A) and the paired difference plot (p<0.001, Figure 2B) showed that the expression level of NMB in CRC tumor tissues was significantly higher than that in normal tissues. The same results were confirmed in GEPIA (p<0.001, Figure 2C).

High expression of NMB in CRC tumor tissue suggests poor overall survival
We evaluated the association between NMB expression and prognosis in CRC patients using Kaplan-Meier risk estimates. Compared with low NMB expression, high NMB expression was associated with significantly poorer overall survival (OS) (p<0.001). Clinical data from 711 patients with CRC from TCGA were analyzed, and unknown and incomplete clinical information was deleted. The expression of NMB was only correlated with age (p=0.01; Table 1).

GSEA identifies the NMB-related signaling pathways
To identify differentially activated signaling pathways in CRC, we performed gene set enrichment analysis (GSEA) between the low and high NMB expression datasets. Various cancer-related KEGG pathways are enriched in the high NMB phenotype (Figure 6), for example, cell cycle, DNA replication, the P53 signaling pathway, and the VEGF signaling pathway, while the KEGG pathways enriched in the low NMB phenotype were colorectal cancer, ERBB signaling pathway, JAK-STAT signaling pathway, MAPK signaling pathway, mTOR signaling pathway, NOTCH signaling pathway, pancreatic cancer, pathways in cancer, TGF-beta signaling pathway, and WNT signaling pathway.
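For readers unfamiliar with how these enrichment calls are made, the core of GSEA is a running-sum statistic over a ranked gene list. The simplified Python sketch below computes the enrichment score for a single gene set; it omits the permutation-based NES and FDR estimation and everything else the GSEA 4.1.0 software actually does, so it is an illustration of the idea rather than the authors' analysis, and the input names are assumptions.

```python
import numpy as np

def enrichment_score(ranked_genes, ranking_scores, gene_set, p=1.0):
    """Running-sum enrichment score for one gene set.

    ranked_genes   : list of gene symbols, ordered by correlation with NMB
    ranking_scores : matching correlation values (same order)
    gene_set       : set of gene symbols (e.g. one KEGG pathway)
    The walk steps up (weighted by |score|**p) at genes in the set and down
    at genes outside it; ES is the maximum deviation from zero.
    """
    w = np.abs(np.asarray(ranking_scores, dtype=float)) ** p
    in_set = np.array([g in gene_set for g in ranked_genes])
    step = np.where(in_set, w / w[in_set].sum(), -1.0 / (~in_set).sum())
    running = np.cumsum(step)
    return running[np.argmax(np.abs(running))]
```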
Enrichment analysis of NMB co-expressed genes
Co-expressed genes tend to have similar functions and mechanisms. To further investigate the underlying mechanisms of NMB regulation, we used the LinkedOmics platform to identify 6512 significant genes. The volcano plot (Figure 7A) shows the correlation between all genes and NMB according to the Pearson test. Heat maps (Figure 7B) show the top 50 genes in CRC that are negatively and positively correlated with NMB. ORA showed that the co-expressed genes were involved in Hepatocellular carcinoma cell cycle, RNA transport, RNA processing, Metabolism of RNA, negative regulation of gene expression, chromosome, EGF/EGFR Signaling Pathway, VEGFA-VEGFR2 Signaling Pathway, TGF-beta Signaling Pathway, Negative regulation of NOTCH4 signaling, Gene expression (Transcription), and Metabolism of proteins (Figure 8). Then, GSEA was performed to investigate the potential functions and pathways of NMB induction. We explored three main types of GO enrichment: biological process (BP), cellular component (CC), and molecular function (MF). In the BP category, the enriched terms included ribonucleoprotein complex biogenesis, tRNA metabolic process, generation of precursor metabolites and energy, establishment of protein localization to membrane, protein-containing complex disassembly, regulation of GTPase activity, regulation of vasculature development, cell-cell adhesion via plasma-membrane adhesion molecules, peptidyl-serine modification, and Ras protein signal transduction (Figure 9A). In the CC category, the enriched terms included mitochondrial inner membrane, mitochondrial matrix, cytosolic part, condensed chromosome, cell-cell junction, early endosome, cell leading edge, and apical part of cell (Figure 9B). In the MF category, the enriched terms included structural constituent of ribosome, catalytic activity, acting on RNA, lyase activity, guanyl-nucleotide exchange factor activity, cofactor binding, catalytic activity, acting on DNA, modification-dependent protein binding, protein serine/threonine kinase activity, nucleoside-triphosphatase regulator activity, and phospholipid binding (Figure 9C). For the KEGG pathway analysis, the enriched pathways included Ribosome, Proteasome, Spliceosome, Metabolic pathways, RNA transport, Parathyroid hormone synthesis, secretion and action, Proteoglycans in cancer, ECM-receptor interaction, Ras signaling pathway, and JAK-STAT signaling pathway (Figure 9D).

Relationship Between NMB Expression and Immune Infiltration in CRC
Tumor-infiltrating lymphocytes are an independent prognostic factor for survival in cancer patients, for example in breast cancer [10], ovarian cancer, colorectal cancer, and gastric cancer. Therefore, we investigated the relationship between NMB expression and immune infiltration in colorectal cancer. The histogram shows the number of immune cells in each sample (Figure 10A).
The difference test between the expression of NMB and immune cells showed that T cells CD8 (p<0.001), T cells CD4 memory resting (p=0.023), NK cells activated (p=0.029), and Macrophages M0 (p=0.016) differed between the high expression group and the low expression group (Figure 10B). The correlation test showed that the expression of NMB was correlated with T cells CD8 (R=0.24, p<0.001), T cells CD4 naive (R=-0.14, p=0.014), T cells CD4 memory resting (R=-0.14, p=0.011), T cells CD4 memory activated (R=0.16, p=0.0029), T cells follicular helper (R=0.19, p<0.001), Macrophages M0 (R=-0.13, p=0.017) and Macrophages M1 (R=0.12, p=0.026). After intersecting the two test results, it can be concluded that the expression level of NMB is significantly correlated with T cells CD8, T cells CD4 memory resting, and Macrophages M0 (Figure 10C). Besides, different mutational forms of NMB in CRC were associated with the immune infiltration of 6 leukocyte types (B cells, CD4+ T cells, CD8+ T cells, macrophages, neutrophils, dendritic cells). It follows that NMB plays an important role in immune infiltration in CRC.

Correlation Between Expression Level of NMB and Immune Marker Sets
To further investigate the relationship between NMB expression and immune-infiltrating cells in colorectal cancer, we used the TIMER database to examine the immune markers of T cells, CD8+ T cells, B cells, monocytes, neutrophils, NK cells, TAMs, M1 and M2 macrophages, and dendritic cells. Then, we also analyzed T cells with different functions, such as Th1 cells, Th2 cells, Tregs, Tfh cells, Th17 cells, and exhausted T cells. Our results show that the expression level of NMB in CRC is closely related to the immune marker sets of most immune cells (Table 3).

Construction of the Prognostic Signature for CRC Patients
Using the survival information of CRC patients, univariate Cox regression analysis was performed on the 162 DEGs, and 6 DEGs with significant prognostic value were found. Multivariate Cox analysis was then performed on these 6 genes, and a prognostic signature composed of 4 genes (NDUFB10, SERF2, DPP7, and NAPRT) was constructed. Based on the prognostic signature, the risk score calculation formula was obtained: risk score = (0.547443833 × DPP7 expression) + (−1.222239025 × NDUFB10 expression) + (0.400602782 × NAPRT expression) + (0.792523684 × SERF2 expression). The risk score was calculated for each CRC patient, and patients were divided into a high-risk group (n = 270) and a low-risk group (n = 270) based on the median. We constructed a heat map to show the expression of the 4 genes in the high-risk group and the low-risk group; the expression of the 4 genes in the high-risk patients was higher than that in the low-risk patients (Figure 11A). Figures 11B and 11C show the distribution of risk scores for CRC patients; patients are divided into two groups, with risk scores increasing from left to right. The K-M curve was used to compare the difference in OS time between the high-risk group and the low-risk group (Figure 11D). The results showed that CRC patients with a high risk score had significantly lower OS than CRC patients with a low risk score (P<0.001).
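The group comparison above rests on the Kaplan-Meier estimator; a minimal Python/NumPy sketch is given below as an illustration of how the survival curves for the two risk groups are obtained. It is not the R survival/survminer code used in the study, it omits the log-rank test, and the input arrays are hypothetical.

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier estimate S(t) = prod over event times t_i <= t of (1 - d_i/n_i).

    time  : (n,) follow-up times
    event : (n,) 1 = death observed, 0 = censored
    Returns the distinct event times and the survival probability after each.
    """
    time, event = np.asarray(time, dtype=float), np.asarray(event, dtype=int)
    event_times = np.unique(time[event == 1])
    surv, s = [], 1.0
    for t in event_times:
        n_at_risk = np.sum(time >= t)           # patients still under observation
        d = np.sum((time == t) & (event == 1))  # deaths at this time
        s *= 1.0 - d / n_at_risk
        surv.append(s)
    return event_times, np.array(surv)

# Curves for the two groups defined by the median risk score (placeholder arrays):
# t_hi, s_hi = kaplan_meier(os_time[high_risk], os_event[high_risk])
# t_lo, s_lo = kaplan_meier(os_time[~high_risk], os_event[~high_risk])
```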
We plotted a time-dependent ROC curve to predict survival in CRC patients, showing that the risk score had high sensitivity and specificity. The AUC of the risk score (AUC=0.711) was higher than that of age (AUC=0.646), gender (AUC=0.433), stage (AUC=0.709), T stage (AUC=0.673), N stage (AUC=0.656) and M stage (AUC=0.647) (Figure 11E). Figure 11F reflects the univariate Cox analysis of the relationship between the clinical features, risk score, and OS of CRC patients: age (P<0.001), stage (P<0.001), T (P<0.001), N (P<0.001), M (P<0.001) and risk score (P<0.001) significantly affect the prognosis of CRC patients. Figure 11G reflects the multivariate Cox analysis of the relationship between the clinical features, risk score, and OS of CRC patients: age (P<0.001), T (P=0.019) and risk score (P<0.001) are independent prognostic risk factors for CRC.

Construction and Validation of the Nomogram
We used factors such as age, stage, T, M, N, and risk score to construct a nomogram to predict the survival rate of CRC patients more conveniently (Figure 11H). According to the nomogram, the scores of CRC patients are calculated and then added to obtain the total score, thereby predicting the survival probability at 1 year and 3 years, which is beneficial for guiding clinical decision-making. The closer the calibration curve is to the diagonal, the more accurate the prediction result. The calibration curves of the nomogram show that the nomogram has good accuracy in predicting survival rates at 1 and 3 years (Figure 11I, J). The 1-year (AUC = 0.711) and 3-year (AUC = 0.712) ROC curves also show that the forecasting ability of the nomogram is accurate (Figure 11K).

Analysis of drug sensitivity
We selected 162 NMB-related genes through the co-expression network (fdrFilter=0.00000001, logFCfilter=10) and performed Spearman correlation analysis with small molecule/drug sensitivity (IC50) to explore the correlation between the DEGs and drug sensitivity. The results showed that NMB was significantly related to the drug resistance of many chemotherapeutic drugs and tumor-targeted drugs, including 5-Fluorouracil, Methotrexate, Belinostat, CUDC-101, vorinostat, and so on (Figure 12).

Discussion
Although cancer deaths have continued to decline since 1991 [4], cancer continues to play an important role in influencing human morbidity and mortality [1]. Colorectal cancer is the most common tumor of the digestive system, and the diagnosis of colorectal cancer currently relies mainly on colonoscopy [11]. However, this diagnostic method has certain limitations [12]. Early diagnosis of cancer is key to improving overall survival, reducing disease-free progression, and reducing the risk of recurrence. Christopher E Barbieri's research shows that precision medicine has the greatest potential to affect the health of patients when reliable predictive biomarkers and new therapeutic targets are defined to truly improve patient prognosis [13]. In addition, Fang-Ze Wei's research shows that CLCA1 can be used as a prognostic marker of CRC and is related to immune infiltration; it may be a potential therapeutic target for CRC to improve the prognosis of patients [14]. Wei Xu's research also shows that circulating lncRNA SNHG11 is related to the early diagnosis and prognosis of colorectal cancer [15]. These studies have proven that biomarkers play a key role in the early identification of colorectal cancer, helping to predict disease progression and response to treatment [16]. Therefore, it is necessary to research colorectal cancer biomarkers. Matusiak et al.
used RT-qPCR to investigate the expression of the NMB gene in colon cancer cell lines (Caco-2 and HT-29 cells), as well as normal colon epithelial cells (NCM-460); the results showed that the expression level of NMB in colon cancer cell lines was significantly higher than that in colon epithelial cells. In addition, the results of immunohistochemistry also showed that the expression of NMB in colon cancer cell lines was higher than that in normal colon epithelium [9]. However, no studies have been conducted on the diagnostic and prognostic value of NMB in colorectal cancer. Therefore, the potential role of NMB in CRC is the focus of this study. In our study, we downloaded CRC transcriptome data (612 samples, including 44 normal samples and 568 tumor samples) and clinical information related to the survival time of 711 CRC patients from the TCGA database. Scatter plots and paired plots were drawn using the survival, beeswarm and limma packages, and the results showed that the expression level of NMB in colorectal cancer was significantly higher than that in normal tissues. In addition, we confirmed the differential expression of NMB between colorectal cancer and normal tissues through GEPIA. Then, the K-M curve was drawn to analyze the relationship between NMB expression and OS, and the ROC curve was drawn to analyze the predictive value of NMB expression for colorectal cancer prognosis. The results showed that high expression of NMB was significantly associated with poor overall survival. We performed a meta-analysis on the survival analysis results of the TCGA database and eight datasets from the GEO database. Because I² < 50% and P > 0.05 (I² = 1%, P = 0.43), we chose a fixed-effect model. The results of the meta-analysis showed that NMB was indeed a high-risk gene for colorectal cancer (HR=1.05, 95% CI: 1.01-1.09). To investigate the relationship between NMB expression and clinicopathological features, histograms were drawn and logistic regression analysis was performed. The results showed that the expression level of NMB in colorectal cancer was only related to age and had no significant association with other clinicopathological features. We then used Cox proportional hazards regression analyses to conclude that NMB is an important independent predictor of poor overall survival in CRC. The differential expression of NMB can provide a new perspective for the process of studying CRC and serve as a meaningful diagnostic biomarker for CRC. Through the LinkedOmics platform, we identified the co-expressed genes of NMB for further study. By ORA and GSEA analysis of these co-expressed genes, we gained a further understanding of the underlying regulatory mechanisms of NMB. ORA showed that the co-expressed genes were involved in a variety of cancer-related pathways, such as Hepatocellular carcinoma cell cycle, EGF/EGFR Signaling Pathway [26], VEGFA-VEGFR2 Signaling Pathway [27], TGF-beta Signaling Pathway [24], and Negative regulation of NOTCH4 signaling. There is already evidence that morphine promotes tumorigenesis and cetuximab resistance through EGFR signal activation in human colorectal cancer [28]. Dong Chul Kim's research shows that Notch-4 has a significant correlation with P2Y2R, and P2Y2R plays an important role in tumor progression and metastasis [29]. By using GSEA to perform three types of GO enrichment analysis, our study identified regulatory mechanisms related to cancer,
such as tRNA metabolic process, regulation of GTPase activity, regulation of vasculature development, cell-cell adhesion via plasma-membrane adhesion molecules, Ras protein signal transduction, guanyl-nucleotide exchange factor activity, protein serine/threonine kinase activity, and phospholipid binding. Related studies have reported that the tRNA-yW Synthesizing Protein 2 (TYW2) undergoes promoter hypermethylation-associated transcriptional silencing in human cancer, particularly in colorectal tumors [30]. The research data of Tezcan G, Garanina EE et al. showed the role of Rab5 in the activation of inflammasomes, suggesting that this GTPase may be a potential therapeutic target for inhibiting inflammation in CRC [31]. The study of Shun Li et al. found that the vascular system in advanced tumors is more abundant and further enhances the proliferation of cancer cells [32]. An existing study confirmed that cell-cell adhesion leads to increased cell migration and invasion [33]. The study by Urosevic J et al. reported that genes regulated by RAS-ERK1/2 signaling mediate the recurrence of CRC [34]. As a guanine nucleotide exchange factor (GEF), GEF16 was confirmed by Bei Yu and others to have the ability to stimulate in vitro proliferation and migration in colorectal cancer cells [35]. The study by Wing-Hung Chan et al. reported that receptor tyrosine kinase fusion is an important alternative driving factor for serological pathways in the development of colorectal cancer [36]. Daojun Hu reported that Annexin A2, as a calcium-dependent phospholipid-binding protein, can be used as a non-invasive and promising biomarker for the diagnosis of CRC, and the combined detection with CEA has good clinical application value in the diagnosis of CRC [37]. KEGG pathway analysis using GSEA showed that Proteoglycans in cancer, ECM-receptor interaction [38], the Ras signaling pathway [39], and the JAK-STAT signaling pathway [20] were enriched. Taken together, these results indicate that NMB is involved in tumor-related KEGG and GO pathways. To further study the relationship between NMB expression and the prognosis of colorectal cancer, we investigated the relationship between NMB and immune infiltration and immune markers. We tested the difference and correlation between the expression of NMB and immune cells, and then intersected the results of the two tests. The results showed that the expression of NMB was significantly correlated with T cells CD8, T cells CD4 memory resting, and Macrophages M0. Finally, we analyzed the relationship between NMB expression and immune marker sets with TIMER. It turns out that CD3E, HLA-DPB1, CD3D, CD79A, TGFB1, CD2, HLA-DPA1, ITGAX, HLA-DRA, CD86, CCR7, CD19, NRP1, HAVCR2, CTLA4, and TBX21 showed a significant correlation with NMB expression in colon cancer (P < 0.001; Cor value ≥ 0.40), and in rectal cancer, CD3E, HAVCR2, IL10, CD1C, CEACAM8, CCR7, STAT3, CTLA4, ITGAM, HLA-DPB1, FOXP3, CCL2, KIR3DL1, and STAT1 showed a significant correlation with NMB expression (P < 0.001; Cor value ≥ 0.40). The research of Yang Xiong et al. showed that CCR7 promotes lymph node metastasis of urinary bladder cancer (UBC) through lymphangiogenic effects, and that the interaction of CCL21/CCR7 promotes the migration and invasion of UBC cells through the MEK/ERK1/2 signaling pathway [40]. These correlations may indicate the potential mechanism of NMB in regulating immune cell function in colorectal cancer. We can calculate the risk score of each patient based on the prognostic signature.
The study found that the risk score is an independent risk factor affecting prognosis and can be used to predict OS in CRC. High-risk patients have a poorer prognosis than low-risk patients. Therefore, we speculate that the 4 DEGs that constitute the prognostic signature are involved in the progression of CRC. Through ROC curve analysis of the OS of CRC patients, we found that this prognostic signature has a good predictive value for CRC (AUC = 0.711) and can be used to predict the prognosis of CRC patients. To facilitate clinical application and better predict the prognosis of patients with CRC, we constructed a nomogram to predict the survival rate of patients at 1 and 3 years. Judging from the calibration curve and the ROC curve, the nomogram has a good predictive effect. At present, studies have confirmed that abnormal gene expression is related to chemotherapy resistance [41,42]. We used the 162 DEGs to conduct a study in the GSCA database, and the results showed that the high expression of NMB may be related to resistance to CRC chemotherapy and targeted therapy drugs, such as 5-Fluorouracil [43], Methotrexate [44], Belinostat [45], CUDC-101 [46], vorinostat [45], and so on. Therefore, NMB can provide a new perspective on the research process of CRC and serve as a meaningful diagnostic biomarker for CRC. In conclusion, we identified NMB as a prognostic marker for CRC that is associated with immune infiltration. It may be a potential therapeutic target for CRC. To better predict the survival rate of patients for clinical application, we have constructed a nomogram with a higher predictive value. Our bioinformatics studies based on public databases provide directions for in vitro trials, improve the diagnosis rate of colorectal cancer, and provide new strategies for gene-targeted therapy to better prevent and treat colorectal cancer.

Declarations
Ethics approval and consent to participate: Not applicable
Consent for publication: Not applicable
Availability of data and materials: The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article.
Competing interests: The authors declare that they have no competing interests.

Figure 1 Overall design of this study.
A merged enrichment plot from gene set enrichment analysis (GSEA) including enrichment score and gene sets; 14 cancer-related pathways are shown here.
Analysis of drug sensitivity associated with NMB. Red shows a positive correlation, with higher gene expression associated with greater drug sensitivity, while blue shows the opposite.
H^2-regularity for a two-dimensional transmission problem with geometric constraint

The H^2-regularity of variational solutions to a two-dimensional transmission problem with geometric constraint is investigated, in particular when part of the interface becomes part of the outer boundary of the domain due to the saturation of the geometric constraint. In such a situation, the domain includes some non-Lipschitz subdomains with cusp points, but it is shown that this feature does not lead to a regularity breakdown. Moreover, continuous dependence of the solutions with respect to the domain is established.

Introduction
The H^2-regularity of variational solutions to a two-dimensional transmission problem with geometric constraint is investigated, in particular when part of the interface becomes part of the outer boundary of the domain due to the geometric constraint, a situation in which the domain includes some non-Lipschitz subdomains with cusp points. Such a regularity is required in particular to guarantee that the variational solutions satisfy the strong formulation of the transmission problem. H^2-regularity is, however, not true in general and known to depend heavily on the geometry and smoothness of the domain and the interfaces. (An extended version of this manuscript with the same title is available at https://arxiv.org/pdf/2103.07301.pdf. Some proofs being similar to [7] are only sketched herein, but detailed proofs are supplied in the extended version for the sake of completeness.) In fact, when interfaces intersect the outer boundary of the domain, regularity of variational solutions to transmission problems in non-smooth domains is a challenging issue, even for transversal intersections, see [1-3, 5, 10, 11, 13] and the references therein.

[Figure: Geometry of Ω(w) for a state w ∈ S̄ with non-empty coincidence set.]

Motivated by the mathematical study of microelectromechanical systems (MEMS), we identify herein a class of two-dimensional domains possibly featuring cusps for which H^2-regularity is true. We actually derive H^2-estimates which hold uniformly with respect to suitable perturbations of the underlying domain. We point out that such quantitative estimates are not contained in the above mentioned literature, but they turn out to be instrumental for a thorough study of MEMS models [9]. To set up the geometric framework, let D := (−L, L) be a finite interval of R, L > 0, and let H > 0 and d > 0 be two positive parameters. Given a function u ∈ C(D, [−H, ∞)) with u(±L) = 0, we define the subdomain Ω(u) of D × (−H, ∞), its two subdomains Ω_1(u) and Ω_2(u), the interface Σ(u), and the transmission problem (1.2) for the electrostatic potential ψ_u, involving positive constants σ_1 ≠ σ_2; n_Σ(u) denotes the unit normal vector field to Σ(u) (pointing into Ω_2(u)). In (1.2c), h_u is a suitable function reflecting the boundary behavior of ψ_u, see Section 2 for details. In addition, ⟦·⟧ denotes the (possible) jump across the interface Σ(u), whenever meaningful for a function f : Ω(u) → R.
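For orientation, a standard form of such a transmission problem, written in the notation above (Ω(u) = Ω_1(u) ∪ Σ(u) ∪ Ω_2(u), piecewise constant permittivity σ, boundary datum h_u), is sketched below; this is an illustrative reconstruction of the type of system meant by (1.2), not a verbatim quotation of the paper's equations, and in particular the sign convention for the jump ⟦·⟧ is an assumption.

```latex
% Sketch of a transmission problem of the kind discussed above (cf. (1.2a)-(1.2c)).
% \llbracket / \rrbracket require the stmaryrd package.
\begin{align*}
  \operatorname{div}\bigl(\sigma \nabla \psi_u\bigr) &= 0
      && \text{in } \Omega(u) \setminus \Sigma(u),\\
  \llbracket \psi_u \rrbracket = 0, \qquad
  \llbracket \sigma\, \nabla\psi_u \cdot n_{\Sigma(u)} \rrbracket &= 0
      && \text{on } \Sigma(u),\\
  \psi_u &= h_u && \text{on } \partial\Omega(u),
\end{align*}
% with sigma = sigma_1 on Omega_1(u) and sigma = sigma_2 on Omega_2(u), and the jump
% taken here (as an assumption) as the trace from Omega_2(u) minus that from Omega_1(u).
```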
Let us already mention that there are several features of the specific geometry of (u) which may hinder the H 2 -regularity of the solution ψ u to (1.2). Indeed, on the one hand, the interface (u) always intersects with the boundary ∂ (u) of (u) and it follows from [10] that this sole property prevents the H 2 -regularity of ψ u , unless σ and the angles between (u) and ∂ (u) at the intersection points satisfy some additional conditions. On the other hand, (u) and 2 (u) are at best Lipschitz domains, while 1 (u) may consist of non-Lipschitz domains with cusp points. The particular geometry (u) = 1 (u) ∪ 2 (u) ∪ (u), in which the boundary value problem (1.2) is set, is encountered in the investigation of an idealized electrostatically actuated MEMS as already pointed out and described in detail in [6,14]. Such a device consists of an elastic plate of thickness d which is fixed at its boundary {±L} × (0, d) and suspended above a rigid conducting ground plate located at z = −H . The elastic plate is made up of a dielectric material and deformed by a Coulomb force induced by holding the ground plate and the top of the elastic plate at different electrostatic potentials. In this context, u represents the vertical deflection of the bottom of the elastic plate, so that the elastic plate is given by 2 (u), while 1 (u) denotes the free space between the elastic plate and the ground plate. An important feature of the model is that the elastic plate cannot penetrate the ground plate, resulting in the geometric constraint u ≥ −H . Still, a contact between the elastic plate and the ground plate -corresponding to a non-empty coincidence set C(u)is explicitly allowed. The dielectric properties of 1 (u) and 2 (u) are characterized by positive constants σ 1 and σ 2 , respectively. The electrostatic potential ψ u is then supposed to satisfy (1.2) and is completely determined by the deflection u. The state of the MEMS device is thus described by the deflection u, and equilibrium configurations of the device are obtained as critical points of the total energy which is the sum of the mechanical and electrostatic energies, the former being a functional of u while the latter is the Dirichlet integral of ψ u . Owing to the nonlocal dependence of ψ u on u, minimizing the total energy and deriving the associated Euler-Lagrange equation demand quite precise information on the regularity of the electrostatic potential ψ u for an arbitrary, but fixed function u and its continuous dependence thereon. This first step of provisioning the required information is the main purpose of the present research. In the companion paper [9], we use the results obtained herein to analyze the minimizing problem leading to the determination of u and compute the associated Euler-Lagrange equation. Since the regularity of the variational solution ψ u to (1.2) is intimately connected with the regularity of the boundaries of (u), 1 (u), and 2 (u), let us first mention that (u) and 2 (u) are always Lipschitz domains and that the measures of the angles at their vertices do not exceed π, a feature which complies with the H 2 -regularity of ψ u away from the interface (u) [4]. This property is shared by 1 (u) when the coincidence set C(u) is empty, see Fig. 1, so that it is expected that ψ| i (u) belongs to H 2 ( i (u)), i = 1, 2, in that case. However, when C(u) is non-empty, the open set 1 (u) is no longer connected and the boundary of its connected components is no longer Lipschitz, but features cusp points. 
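The electrostatic energy referred to in this passage, described as the Dirichlet integral of ψ_u, can be written out explicitly. The formula below is a sketch consistent with that description and with the piecewise constant permittivities σ_1, σ_2 on Ω_1(u), Ω_2(u); it is not a quotation of the paper's definition, and the way this term enters the total energy (in particular its sign) is left unspecified here.

```latex
% Electrostatic energy as a weighted Dirichlet integral of the potential psi_u:
E_e(u) \;=\; \frac{1}{2} \int_{\Omega(u)} \sigma\, \bigl|\nabla \psi_u\bigr|^{2}\, \mathrm{d}(x,z),
\qquad
\sigma \;=\; \sigma_1\, \mathbf{1}_{\Omega_1(u)} \;+\; \sigma_2\, \mathbf{1}_{\Omega_2(u)} .
```

Critical points of the total energy (the mechanical functional of u combined with this electrostatic contribution) over admissible deflections then lead to the Euler-Lagrange equation whose derivation requires the H^2-regularity and continuous dependence results established in the paper.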
Moreover, there is an interplay between the transmission conditions (1.2b) and the boundary condition (1.2c) when C(u) = ∅. Whether ψ| i (u) still belongs to H 2 ( i (u)), i = 1, 2, in this situation is thus an interesting question, that we answer positively in our first result. For the precise statement, we introduce the functional setting we shall work with in the sequel. Specifically, we set . Clearly, the coincidence set C(u) is empty if and only if u ∈ S. In addition, the situation already alluded to, where C(u) is non-empty and 1 (u) is a disconnected open set in R 2 with a non-Lipschitz boundary, corresponds to functions u ∈S\S. Also, we include the constraint ± σ ∂ x u(±L) ≤ 0 in the definition of S andS to guarantee that the way (u) and ∂ (u) intersect does not prevent the H 2 -regularity of ψ u in smooth situations (i.e. u ∈ S ∩W 2 ∞ (D)), see [10]. (a) For each u ∈S, there is a unique variational solution ψ u ∈ h u + H 1 0 ( (u)) to (1.2). Moreover, ψ u,1 := ψ u | 1 (u) ∈ H 2 ( 1 (u)) and ψ u,2 := ψ u | 2 (u) ∈ H 2 ( 2 (u)), and ψ u is a strong solution to the transmission problem (1.2). (b) Given κ > 0, there is c(κ) > 0 such that, for every u ∈S satisfying u H 2 (D) ≤ κ, It is worth emphasizing that, for i ∈ {1, 2}, the restriction of ψ u to i (u) belongs to H 2 ( i (u)) for all u ∈S. In particular, there is no regularity breakdown when the coincidence set C(u) is non-empty. Moreover, the H 2 -regularity of ψ u is uniformly valid when u ranges in a bounded subset ofS. A similar observation is made in [7] for a different geometric setting when one of the two subsets does not depend on the function u. Identifying other non-smooth geometries for which H 2 -regularity of the variational solution to a transmission problem depends in a somewhat uniform way on some specific features of the domain is an interesting issue, which is worth a forthcoming investigation. Remark 1.2 When the upper part 2 (v) is clamped at its lateral boundaries in the sense that Theorem 1.1 applies whatever the values of σ 1 and σ 2 . Theorem 1.1 is an immediate consequence of Proposition 4.9 below. Its proof begins with quantitative H 2 -estimates on ψ u depending only on u H 2 (D) for sufficiently smooth functions in S, the H 2 -regularity of ψ u being guaranteed by [10] in that case. Since the class of functions for which these estimates are valid is dense inS, we complete the proof with a compactness argument, the main difficulty to be faced being the dependence of (u) on u. More precisely, we begin with a variational approach to (1.2) and first show in Section 3 by classical arguments that, given u ∈S, the variational solution ψ u to (1.2) corresponds to the minimizer on h u + H 1 0 ( (u)) of the associated Dirichlet energy Thanks to this characterization, we use -convergence tools to show the H 1 -stability of ψ u with respect to u in Sect. 3.2. Section 4 is devoted to the study of the H 2 -regularity of ψ u which we first establish in Sect. 4.1 for smooth functions u ∈ S ∩ W 2 ∞ (D) (thus having an empty coincidence set), relying on the analysis performed in [10]. It is worth mentioning that the constraint involving σ in the definition of S comes into play here. For u ∈ S ∩ W 2 ∞ (D), we next derive quantitative H 2 -estimates on ψ u which only depend on u H 2 (D) as stated in Theorem 1.1 (b), see Sect. 4.2. 
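The characterization of ψ_u as a minimizer, mentioned above, has the usual Euler-Lagrange (weak) form. The following is a hedged sketch in the notation Ω(u), σ, h_u, not the paper's displayed formula.

```latex
% psi_u minimizes the Dirichlet energy over the affine space h_u + H^1_0(Omega(u)),
% and is therefore characterized by the variational identity
\psi_u \in h_u + H^1_0(\Omega(u)), \qquad
\int_{\Omega(u)} \sigma\, \nabla\psi_u \cdot \nabla\varphi \;\mathrm{d}(x,z) \;=\; 0
\quad \text{for all } \varphi \in H^1_0(\Omega(u)).
```

It is the minimization form that makes the Γ-convergence argument for H^1-stability available, since Γ-convergence of the energies transfers to convergence of their minimizers.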
The building block is an identity in the spirit of [4, Lemma 4.3.1.2] allowing us to interchange derivatives with respect to x and z in some integrals involving second-order derivatives, its proof being provided in Appendix 1. We then combine these estimates with the already proved H 1 -stability of variational solutions to (1.2) and use a compactness argument to extend the H 2 -regularity of ψ u to arbitrary functions u ∈S in Sect. 4.3. In this step, special care is required to cope with the variation of the functional spaces with u. In fact, as a side product of the proof of Theorem 1.1, we obtain qualitative information on the continuous dependence of ψ u with respect to u, which we collect in the next result. In addition, if i ∈ {1, 2} and U i is an open subset of i (u) such thatŪ i is a compact subset of i (u), then Also, for any p ∈ [1, ∞), Clearly, the quantity M introduced in Theorem 1.3 is finite due to (1.3) and the continuous embedding of H 1 (D) in C(D). Remark 1.4 An interesting issue is the extension of the above results to a three-dimensional setting, where D is a bounded domain of R 2 instead of an interval. There are, however, at least two difficulties to overcome, which are both of geometric nature. On the one hand, the coincidence set C(u) defined in (1.1) is no longer a countable union of open intervals when D is a two-dimensional domain and it might have a much more complicated structure. The former property plays an essential role in the proof of Proposition 4.9 (a) below. On the other hand, the -convergence argument involved in the proof of Proposition 3.3 strongly makes use of the two-dimensional geometry of (u). In fact, the literature on regularity of solutions to transmission problems in non-smooth three-dimensional domains when the interfaces intersect the outer boundary seems to be rather sparse and restricted to specific geometries. We refer to [1,3,5,11,13] for results in that direction. Throughout the paper, c and (c k ) k≥1 denote positive constants depending only on L, H , d, σ 1 , and σ 2 . The dependence upon additional parameters will be indicated explicitly. The boundary values We state the precise assumptions on the function h v occurring in (1.2c). Roughly speaking, we assume that it is the trace on ∂ (v) of a function h v ∈ H 1 ( (v)) which is such that h| i (v) belongs to H 2 ( i (v)) for i = 1, 2 and satisfies the transmission conditions (1.2b), as well as suitable boundedness and continuity properties with respect to v. Specifically, for every v ∈S, let and suppose that h v satisfies the transmission conditions Moreover, given v ∈S and a sequence (v n ) n≥1 inS satisfying Observe that the convergence of (v n ) n≥1 , the continuous embedding of From now on, we impose the conditions (2.1) throughout. We finish this short section by providing an example of h v satisfying the imposed conditions (2.1). Then (2.1a)-(2.1e) are satisfied. In addition, In the context of a MEMS device alluded to in the introduction, these additional properties mean that the ground plate and the top of the elastic plate are kept at constant potential. For instance, ζ(r ) := V min{1, (r − 1) 2 /d 2 } for r > 1 and ζ ≡ 0 on (−∞, 1] will do. Variational solution to (1.2) In this section we investigate the properties of the variational solution ψ v to (1.2) for v ∈S and, in particular, its H 1 -stability. 
A variational approach to (1.2) Given v ∈S we introduce the set of admissible potentials The variational solution ψ v to the transmission problem (1.2) is then the minimizer of the functional J (v) on the set A(v): In addition, readily follows from the direct method of calculus of variations due to the lower semicontinuity and coercivity of J (v) on A(v), the latter being ensured by the assumption σ ≥ min{σ 1 , σ 2 } > 0 and Poincaré's inequality. The uniqueness of ψ v is guaranteed by the strict convexity of For further use, we report the following version of Poincaré's inequality for functions in Hence, after integration with respect to (x, z) over (v), from which we deduce the stated inequality. H 1 -stability of à v The purpose of this section is to study the continuity properties of the solution ψ v to (3.2) with respect to v. More precisely, we aim at establishing the following result. 4) and set 5) which is finite by (3.4 ) and the continuous embedding of H 1 (D) in C(D). Then To prove Proposition 3.3, we make use of a -convergence approach and argue as in [7, Section 3.2] with minor changes. We thus omit the proof here and refer to the extended version of this paper [8] for details. H 2 -regularity In the previous section we introduced the variational solution In Sect. 4.3 we extend these estimates to the general case v ∈S by means of a compactness argument. H 2 -regularity for Assuming that v is smoother with an empty coincidence set, see Fig. 1, the existence of a strong solution ψ v to (1.2) is a consequence of the analysis performed in [10]. and the transmission problem Besides [10], the proof of Proposition 4.1 requires the following auxiliary result. .2) Proof We set e x = (1, 0) and e z = (0, 1). Given θ ∈ C ∞ c (v) and j ∈ {x, z} we note that Proof of Proposition 4. 1 We check that the transmission problem (4.1) fits into the framework of [10]. Since v ∈ S ∩ W 2 ∞ (D) and v(±L) = 0, the boundaries of 1 (v) and 2 (v) are W 2 ∞ -smooth curvilinear polygons and the interface (v) meets the boundary ∂ (v) of (v) at the vertices A ± := (±L, 0). Moreover, at the vertex A ± , the measures ω ±,1 and ω ±,2 of the angles between −e z and (1, ∓∂ x v(±L)) and between (1, ∓∂ x v(±L)) and e z , respectively, satisfy ω ±,1 + ω ±,2 = π, as well as by definition of S. According to the analysis performed in [10], these conditions guarantee that the variational solution ψ v to (3.2) provided by Lemma 3.1 satisfies ) for i = 1, 2 and solves the transmission problem (1.2) in a strong sense. Next, owing to the just established H 2 -regularity of ψ v,1 and ψ v,2 , we may differentiate with respect to x the transmission condition ψ v (x, v(x)) = 0, x ∈ D, and find that The stated H 1 -regularity of ∂ x ψ v + ∂ x v∂ z ψ v then follows from Lemma 4.2 and the boundedness of ∂ x v and ∂ 2 x v. In the same vein, due to (1.2b), the regularity of v, and the identity H 2 -Estimates on The H 2 -regularity of ψ v being guaranteed by Proposition 4.1 for v ∈ S ∩ W 2 ∞ (D), the next step is to show that this property extends to any v ∈S. To this end, we shall now derive quantitative H 2 -estimates on ψ v , paying special attention to their dependence upon the regularity of v. As in [7], it turns out to be more convenient to study a non-homogeneous transmission problem with homogeneous Dirichlet boundary conditions instead of (4.1). where ψ v ∈ H 1 ( (v)) is the unique solution to (4.1) provided by Proposition 4.1. 
Since ) for i = 1, 2, we readily infer from (2.1a) and (4.3) that We omit in the following the dependence of χ on v for ease of notation. According to (2.1a), (2.1b), and Proposition 4.1, χ solves the transmission problem For that purpose, we transform (4.5) to a transmission problem on the rectangle R := D × (0, 1 + d). More precisely, we introduce the transformation mapping 1 (v) onto the rectangle R 1 := D × (0, 1), and the transformation mapping 2 (v) onto the rectangle R 2 := D × (1, 1 + d). The interface separating R 1 and R 2 is It is worth pointing out here that Then, (4.4) implies For further use, we also introducê and derive the following fundamental identity for , which provides a connection between some integrals involving products of second-order derivatives of and is in the spirit of [ Consequently, since (∂ x 1 , ∂ x 2 ) lies in H 1 (R 1 ) × H 1 (R 2 ) by (4.8), we may argue as in the proof of Lemma 4.2 and deduce from (4.9) that Lemma 4.3 Given Moreover, by (4.8), Similarly, setting we derive from (4.8) that G i := G| R i ∈ H 1 (R i ) for i = 1, 2, while (4.5b), (4.6), (4.7), and (4.8) imply that, for x ∈ D, that is, G = 0 on 0 , and we argue as in the proof of Lemma 4.2 to conclude that In addition, by (4.8), Owing to (4.10), (4.11), and the H 1 -regularity of F and G, we are in a position to apply Lemma A.1 (see Appendix 1) with (4.12) Using the definitions of F and G, the identity (4.12) reads Noticing that the first terms on both sides of the above identity are the same and that the assertion follows, recalling that ∂ xσ = 0 in R 2 . We now translate the outcome of Lemma 4.3 in terms of the solution χ to (4.5). Lemma 4.5 Let Proof Let us first recall the regularity of stated in (4.8) which validates the subsequent computations. Using the transformations T 1 and T 2 introduced in (4.6) and (4.7), respectively, we obtain We use Lemma 4.3 to express the first integral on the right-hand side and get We then compute separately the integrals over R i , i = 1, 2, and begin with the contribution of R 1 . We complete the square to get Thanks to the identities and the property ∂ η 1 (±L, η) = 0 for η ∈ (0, 1) stemming from (4.8), we may perform integration by parts in the last two integrals on the right-hand side of the previous identity and obtain Transforming the above identity back to 1 (v) yields (4.14) Next, arguing in a similar way, Transforming this formula back to 2 (v) yields and we deduce from (4.13), (4.14), (4.15), and the above identity that It remains to simplify the last two integrals on the right-hand side of (4.16). To this end, we first recall that the regularity of χ allows us to differentiate with respect to x the transmission condition χ = 0 on (v) to deduce that while the second transmission condition in (4.5b) reads In particular, (4.17) and (4.18) imply that, on (v), Therefore, Hence, Consequently, (4.16) and (4.19) entail as claimed. In order to estimate the boundary and the transmission terms in Lemma 4.5, we first report the following trace estimates. Lemma 4.6 Given κ > 0 and α ∈ (0, 1/2], there is c(α, κ) > 0 such that, for any v ∈S satisfying v H 2 (D) ≤ κ and θ ∈ H 1 ( 2 (v)), Proof Let θ ∈ H 1 ( 2 (v)). 
Using the transformation T 2 defined in (4.7) which maps 2 and so that the continuous embedding of H 2 (D) in W 1 ∞ (D) and the assumed bound on v readily imply that (4.21) By complex interpolation, from which we deduce that Since α > 0, the trace maps H α+1/2 (R 2 ) continuously on H α (D × {1}), and we thus infer from (4.20) and (4.21) that Based on Lemma 4.6 we are in a position to estimate the boundary and transmission terms in the identity provided by Lemma 4.5. (4.23) Proof To prove (4.22), let us first note that H ζ −1/2 (D) embeds continuously into L 4 (D). We use the Cauchy-Schwarz inequality and Lemma 4.6 with α = ζ − 1/2 and deduce H 1 ( 2 (v)) . As for (4.23) we obtain analogously and (4.25) At this point, we use (4.17) and (4.18) to show that Consequently, so that Owing to (4.25) and the above inequality, we may then argue as in the proof of (4.24) to conclude that , as claimed in (4.23). We now gather the previous findings to deduce the following crucial H 2 -estimate on the solution ψ v to (4.1) for v ∈ S ∩ W 2 ∞ (D), which only depends on the H 2 (D)-norm of v (but not on its W 2 ∞ (D)-norm). There is a constant c 0 (κ) > 0 such that the solution ψ v to (4.1) satisfies and Since σ is constant on 1 (v) and on 2 (v), it readily follows from (4.5a) that we infer from Lemma 4.5 and the above two formulas that Using Lemma 4.7 with ζ = 7/8, along with the identity , we further obtain Hence, thanks to Young's inequality, Recalling that by (4.5) and that min{σ 1 , σ 2 } > 0, we conclude that (4.27) Owing to the continuous embedding of H 2 (D) in C(D), combining (4.27) and Lemma 3.2 leads us to the estimate . The bound (4.26a) then readily follows from the assumptions (2.1a) and (2.1c). Finally, (4.26a), together with (2.1a) and (2.1c), yields (4.26b). . 3 Geometry of (w) for a state w ∈S with non-empty and disconnected coincidence set regularity of v assumed in the previous sections and also allow for a non-empty coincidence set. H 2 -regularity and and is a strong solution to the transmission problem (4.1). Moreover, there is c 1 (κ) > 0 such that The proof involves three steps: we first establish Proposition 4.9 (b) under the additional assumption Building upon this result, we take advantage of the density of S ∩ W 2 ∞ (D) inS and of the estimates derived in Proposition 4.8 to verify Proposition 4.9 (a) by a compactness argument. Combining the previous steps leads us finally to a complete proof of Proposition 4.9 (b). We thus start with the proof of Proposition 4.9 (b) when the solutions (ψ v n ) n≥1 to (4.1) associated with the sequence (v n ) n≥1 satisfies the above additional bound. We state this result as a separate lemma for definiteness. Lemma 4.10 Let κ > 0 and v ∈S be such that v H 2 (D) ≤ κ and consider a sequence (v n ) n≥1 inS satisfying (4.29). Assume further that, for each n ≥ 1, (ψ v n ,1 , ψ v n ,2 ) belongs to H 2 ( 1 (v n )) × H 2 ( 2 (v n )) and that there is μ > 0 such that The proof is very close to that of [7, Proposition 3.13 & Corollary 3.14], so that we omit the details here and refer to the extended version of this paper [8] instead. Proof of Proposition 4.9 (a) Let v ∈S be such that v H 2 (D) ≤ κ. We may choose a sequence Owing to (4.32) and the regularity property v n ∈ S ∩ W 2 ∞ (D), n ≥ 1, Proposition 3.3 guarantees that (ψ v n ,1 , ψ v n ,2 ) belongs to H 2 ( 1 (v n )) × H 2 ( 2 (v n )) and (ψ v n ) n≥1 satisfies (4.30) with μ = c 0 (2κ). 
We then infer from Lemma 4.10 that (ψ v,1 , ψ v,2 ) belongs to Combining the above bound with (2.1d) and Lemma 4.10 gives (4.28). Checking that ψ v is a strong solution to (4.1) is then done as in [7,Corollary 3.14], see also the extended version of this paper [8] for a complete proof. We supplement the H 2 -weak continuity of ψ v with respect to v reported in Proposition 4.9 with the continuity of the traces of ∇ψ v,2 on the upper and lower boundaries of 2 (v). Proposition 4.11 Let κ > 0 and v ∈S be such that v H 2 (D) ≤ κ and consider a sequence (v n ) n≥1 inS satisfying (4.29). Then, for p ∈ [1, ∞), ∇ψ v n ,2 (·, v n ) → ∇ψ v,2 (·, v) in L p (D, R 2 ) , (4.33) ∇ψ v n ,2 (·, v n + d) → ∇ψ v,2 (·, v + d) in L p (D, R 2 ) , (4.34) and ∇ψ v,2 (·, v) L p (D,R 2 ) + ∇ψ v,2 (·, v + d) L p (D,R 2 ) ≤ c( p, κ) . Next, after integrating by parts, Since V (x, 0) = V (x, 1 + d) = 0 for x ∈ D by (A.1a) and the second and fourth terms cancel each other out, we obtain
The Relationship between Parasite Fitness and Host Condition in an Insect - Virus System Research in host-parasite evolutionary ecology has demonstrated that environmental variation plays a large role in mediating the outcome of parasite infection. For example, crowding or low food availability can reduce host condition and make them more vulnerable to parasite infection. This observation that poor-condition hosts often suffer more from parasite infection compared to healthy hosts has led to the assumption that parasite productivity is higher in poor-condition hosts. However, the ubiquity of this negative relationship between host condition and parasite fitness is unknown. Moreover, examining the effect of environmental variation on parasite fitness has been largely overlooked in the host-parasite literature. Here we investigate the relationship between parasite fitness and host condition by using a laboratory experiment with the cabbage looper Trichoplusia ni and its viral pathogen, AcMNPV, and by surveying published host-parasite literature. Our experiments demonstrated that virus productivity was positively correlated with host food availability and the literature survey revealed both positive and negative relationships between host condition and parasite fitness. Together these data demonstrate that contrary to previous assumptions, parasite fitness can be positively or negatively correlated with host fitness. We discuss the significance of these findings for host-parasite population biology. Introduction Parasites play a significant role in the ecology and evolution of their hosts. For example, parasites can regulate host population dynamics [1][2][3], drive the maintenance of host sexual reproduction [4][5][6], and shape the evolution of sexually dimorphic traits [7]. Environmental variation can play a large role in mediating the immediate outcome of parasite infection, as hosts that are reared in crowded conditions or with limited food can suffer greater morbidity or mortality from parasitism compared to hosts in better health [8][9][10][11][12][13][14]. Far less is known about how stressful conditions for the host such as crowding or food limitation affect the fitness of the parasites. Examining this question is a subtle but significant departure from most host-parasite studies, where the focus is primarily on host performance. Understanding how environmental factors affect parasite fitness might result in more accurate predictions regarding the number of parasite propagules available for subsequent infection. This information can in turn result in more accurate predictions regarding both the likelihood of infection, and the severity of infection. How might variation in the host's environment affect parasite fitness? For parasites that depend solely on their hosts for resources and shelter, a poor environment for the host may translate into a poor environment for the parasite. For example, parasites inhabiting low-quality hosts may have less to eat (both quantitatively and qualitatively), which may reduce parasite production [15,16]. Conversely, hosts in poor condition may have fewer resources to allocate to immune functions or to other defenses against parasites [17,18] thus leaving parasite growth and or reproduction less inhibited by attack from host defenses. 
As a sidebar, we note here that in general, lifetime parasite fitness is typically defined as the parasite basic reproductive ratio, R 0 , but because of the multiple components that make up R 0 [19][20][21][22], many studies instead use parasite productivity as a measure of parasite fitness (e.g. [16,[22][23][24] but see [25] for measures of lifetime parasite fitness). Parasite productivity is a reasonable proxy for R 0 if productivity is correlated with the number of transmission propagules produced, and if the latter is positively correlated with the likelihood of infecting a susceptible host (e.g. [26,27]). Here we use the term 'potential parasite fitness' (PPF) because we do not directly measure parasite R 0 . Rather, we measure components of parasite fitness that are typically positively correlated with R 0 . In this study we ask whether parasite productivity is positively correlated with host food availability (a proxy for host condition) in the virus Autographa californica multiple nucleopolyhedrovirus (AcMNPV) and one of its natural hosts, the cabbage looper moth (Trichoplusia ni, Hübner, Lepidoptera: Noctuidae). Parasite biology AcMNPV is the type species of the genus Alphabaculovirus in the family Baculoviridae [28]. Baculoviruses are DNA viruses that primarily infect Lepidoptera [29,30]. Caterpillars typically become infected upon ingesting virus occlusion bodies (OBs), which are proteinaceous structures that contain virions (virus particles) [30]. Virions released by OBs spread throughout the larval body, and eventually the bulk of host tissue is converted into OBs [30][31][32]. At the end of a successful infection, the larva dies and OBs are released into the environment. AcMNPV has a wide host range and can infect species of at least 15 families of Lepidoptera [30]. Host biology Trichoplusia ni are typically found in the subtropics worldwide [33] and are also common pests of greenhouse vegetables and agricultural cole crops at higher latitudes [34]. AcMNPV has been considered as a possible biological control agent of T. ni [35,36] because it infects T. ni in the wild and has high virulence. This virus-host system is thus ideal for addressing questions related to PPF because the laboratory results could be applicable in nature, as well as to other lepidopteran hosts. Insect collections and colony maintenance Cabbage loopers were collected from commercial greenhouses in the lower mainland of British Columbia, Canada and maintained continuously in the laboratory at the University of British Columbia for 10 years (~50 generations). AcMNPV was originally isolated from naturally infected T. ni in the early 2000s. The virus was used in various laboratory experiments and was purified and stored at -20°C when not in use [37]. To maintain T. ni colonies, neonates were reared in groups of 25 in 200 mL Styrofoam cups filled with 25 mL of wheat-germ based diet [38]. Pupae were transferred to emergence cages. Adults mated in these cages and females laid their eggs on a paper towel lining of the mating cage. Larval rearing cups and adult mating cages were maintained at 26 ± 1°C under a 16:8 light:dark cycle. Egg-impregnated paper towels were stored at 5°C until eggs were needed. Trichoplusia ni eggs readily hatch at room temperature. Experimental design We conducted two experiments to examine the relationship between parasite potential fitness and host condition in this host-parasite system.
In both experiments we infected 4th instar larvae with virus, but the larvae used in experiment 1 were of lower initial condition than those used in experiment 2. We conducted these two experiments to gain a preliminary understanding of how host condition at the time of infection affects both host and parasite overall response to infection. Table 1 lists the differences between the two experiments. The experiments were conducted at two different times because of logistical constraints. For each experiment, 120 larvae were each assigned to one of three food regimes: low (4-5 hours access to food/day), medium (12 hours food/day), or high (continuous access to food). Larvae were reared in 25 mL cups and the food source was a wheat germ-based diet modified from [38]. Larvae in experiment 1 were assigned to their food treatment after infection and larvae in experiment 2 were assigned to their food treatment before infection (Table 1). One day after moulting into 4th instar, 90 of the 120 larvae were each given one 0.125 cm3 piece of diet dosed with 5 µL of a 1000 OB/µL virus suspension. Preliminary data have shown this infection method to be sufficient to infect ≥95% of larvae. After 24-hour access to the virus-dosed diet, larvae were assigned to their food treatment (experiment 1), or returned to their initial food treatment (experiment 2). The remaining thirty larvae were each fed a 0.125 cm3 piece of diet dosed with 5 µL of distilled water. These uninfected larvae were randomly and evenly distributed into the three food treatments (i.e. 10 uninfected larvae per food treatment, experiment 1), or returned to their original food treatment (experiment 2). Data collected and statistical analyses Infected larvae were maintained on low, medium or high food treatments until death. One day prior to death, when larvae were rendered immobile, bloated and discoloured by virus infection, larvae were weighed and transferred to 1.5 mL eppendorf tubes. After death, the tube containing the virus-killed larva was filled with distilled water so that the total volume (dead larva plus water) equaled 1 mL. The entire sample was macerated and total OB number was quantified using a hemocytometer. Virus OBs were counted in each of ten 0.2 × 0.2 mm squares. The average number of OBs per ten squares was then multiplied by 4 × 10^6 to obtain the total OBs per larva. We collected data on days to death, weight at death, and total OBs per larva. We use virus OB number as our measure of PPF. We used ANOVA to examine whether food treatment had a statistically significant effect on larval weight at death, days to death, and on virus OB number. We transformed both OB number (log(OB number + 1)) and larval weight (log weight) to meet assumptions of ANOVA. To better understand the functional relationship between virus productivity and host size, we used ANCOVA to examine whether food treatment mediated the relationship between OB number and larval weight (dependent variable: log(OB + 1); explanatory variable: food treatment; covariate: log(weight at death); interaction: food × log(weight at death)). All statistical analyses were conducted in R version 3.0.2 (R Core Team 2013). Because the two experiments were conducted at different times and thus larvae could have been exposed to different environmental conditions in the laboratory, all statistics were run separately for the two experiments. Literature survey We used the Web of Science to search for papers that experimentally addressed whether parasite fitness (e.g.
parasite growth rate, reproduction, development, transmission potential) was affected by host condition (e.g. food quality or quantity). We did not include parasitoids in our search. We examined correlational or observational studies between host quality and potential parasite fitness separately from experimental papers. We do not claim to have found all relevant published studies: our goal was primarily to develop a broad understanding of whether general patterns exist between host and parasite fitness. Infection rate and overall sample size None of the control, uninfected cabbage loopers died of viral infection. Because the goal of this study was to examine the effect of host food levels on parasite fitness, these uninfected control larvae were not included in statistical analyses. Larvae that did not develop full viral infections were also not included in the analysis. In experiment 1, three larvae that had been dosed with virus did not produce any virus OBs (1 larva from the low food treatment; 2 from the medium food treatment). Also, for one or two larvae from each food treatment we missed the death date, or were unable to collect weight data because the larva had burst before its intact post-death weight could be recorded. Across the three food levels, data for days to death were collected from 81 larvae, and data for OB number and weight at death were collected from 83 larvae. In experiment 2, no virus OBs were produced in 18 larvae (n = 13, 1, 4 in the low, medium and high food treatments respectively). Because almost half of the larvae (13/30) in the low food treatment did not develop a typical virus infection, we took a closer look at these unsuccessful infections. It appears that not only did these larvae not develop a proper virus infection, but they also barely grew at all. Unsuccessfully infected larvae died at a much lighter weight than infected larvae (F1,26 = 13.78, p = 0.001), but there was no difference in days to death (Kruskal-Wallis χ2 = 1.78, p = 0.18). It is unclear why so many of the larvae in the low food treatment did not become infected with virus or why they did not grow in general. Overall, the infection rate was 57%, 97% and 87% for the low, medium and high food treatments respectively. In addition, two infected larvae from the low food treatment burst before they were weighed, so we could not collect weight data for those individuals. Thus, across the three food levels, data for OB number and days to death were collected from 72 larvae, and weight at death was collected from 70 larvae. Host weight at death, time to death, and virus OB production Host weight at death and virus OB production increased with increasing food availability in both experiments (Fig. 1a, d; Fig. 1c, f; Table 2). In experiment 1, larvae that were fed medium and high food lived longer than those given low food (Fig. 1b; Table 2). Food treatment did not affect days to death in experiment 2 (Fig. 1e; Table 2). Relationship between virus production, host weight and food treatment We examined the slope of the relationship between virus production and host weight for each food treatment. In both experiments 1 and 2, the slope of this relationship was shallowest for the low food treatment (Fig. 2a, b; Table 2). Literature survey We found 21 studies that demonstrated an increase in PPF with increasing host condition.
We also found five studies where PPF decreased with increasing host condition, seven studies where PPF increased or decreased with host condition (depending on the parasite trait), and one study in which there was no change in parasite potential fitness (Table 3). Of these 34 papers, 17 documented host-parasite interactions in invertebrate hosts, 12 in vertebrate hosts, four in plant hosts and one paper investigated parasites of protists. The types of parasites investigated included viruses, bacteria, fungi, tapeworms, trematodes, nematodes, protozoans, insects, cowbirds and mistletoes. The parasite fitness traits quantified in these studies are summarized in Table 3. Although they were not included in Table 3, we also found seven studies that used correlational or observational approaches to examine the relationship between parasite fitness traits and host condition. These papers included parasite fitness data for vertebrate hosts (stickleback/cestodes [15], voles/trypanosomes [39], doves/lice [40], rodent/cestode [41]; small mammals/protozoans and helminths [42]), and for a plant-mistletoe host-parasite system [43,44]. The results of these correlational studies showed no clear relationships between parasite fitness and host condition. Empirical results Our experiments revealed a strong positive relationship between virus productivity and host food availability. These results suggest that virus potential fitness likely benefits from increased resource availability to hosts in this host-parasite system. We have also shown that the rate of virus OB production was lowest in hosts given the lowest access to food (Fig. 2a, b; Table 2). This result implies that in poorly-fed hosts, the virus is less efficient at converting host tissue to virus tissue. It is unclear why this might be the case; perhaps a stressed, low-condition larva translates into a low-quality or low-quantity resource for the virus. In a laboratory experiment with the western tent caterpillar, low food availability appeared to reduce the susceptibility of western tent caterpillars to NPV infection [45]. The authors suggested that this result was related to the immune function of the host when food deprived, or to the ability of the virus to replicate for some other reason. The overall positive relationship between virus productivity and host food availability across the three food treatments could also be linked to the relationship between larval mass and larval volume; in other words, virus OB production may be constrained by the volume of the insect. Previous literature [46] examined the relationship of larval weight to volume in Heliconius cydno (Lepidoptera: Nymphalidae) and Trirhabda germinata (Coleoptera: Chrysomelidae) and found it to be linear on a log-log scale. The slope of the relationship was 1.03 for T. germinata and 0.95 for H. cydno. Overall, this linear log-log relationship is similar to the pattern observed in our experiments, and may suggest that virus production may be bounded by the rate at which larval volume increases with larval mass. We conducted two experiments with slightly different starting conditions in order to gain a preliminary understanding of how host condition at the time of infection might affect overall host condition and parasite fitness. Because of logistical constraints, the two experiments were conducted at different times, so we make comparisons between the two experiments with some caution. Data for final host weight suggest that larvae in experiment 1 had lower overall condition than those in experiment 2 (Fig.
1a, d). In fact, the final weight of larvae in the lowest food treatment in experiment 2 overlapped with the final weight of larvae in the highest food treatment in experiment 1. Thus, transferring larvae from group rearing cups to individual cups at the third instar stage had a considerable effect on the overall size of the larvae, and on overall virus production. Unfortunately, because of the small time window available for larval infections at the early 4th instar stage, we did not collect data on initial larval weight, so we do not have data on how much growth took place during the experiments. However, we still feel confident in the overall conclusion that increases in host condition are beneficial to PPF in this host-parasite system. With respect to 'days to death', infected larvae that had greater access to food also lived longer in Experiment 1, but in Experiment 2 food availability did not affect survival. The range of days to death observed here (4.5-6.5 days; experiments combined) is within the range documented by other experiments with T. ni and AcMNPV [47][48][49], and the results for experiment 2 approach the upper end of those seen in comparable studies. The data suggest that across the two experiments hosts that were fed more also took longer to die, but again we say this with caution because the experiments were conducted at two different times. Although we were unable to find published studies that similarly experimentally examined virus yield in response to host food availability, other authors have demonstrated that virus OB production was influenced by the type of plant fed to the host [50,51]. Virus yield was highest in larvae of the western tent caterpillar (Malacosoma californicum pluviale) that were fed alder plants, versus wild rose or apple [50]. Similarly, virus yield was highest when winter moth (Operophtera brumata) larvae were fed oak, versus Sitka spruce or heather [51]. Together these data suggest that viral fitness is potentially affected by both larval food quantity and quality.
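As a concrete illustration of the ANOVA/ANCOVA described in the "Data collected and statistical analyses" section above (log-transformed OB counts modelled on food treatment, log weight at death, and their interaction), here is a minimal Python sketch using statsmodels; the original analysis was done in R, and the data below are simulated placeholders rather than the study's measurements.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated stand-in for the experimental data: one row per virus-killed larva,
# with its food treatment, weight at death (mg), and occlusion-body (OB) count.
rng = np.random.default_rng(1)
n = 83
food = rng.choice(["low", "medium", "high"], size=n)
mean_weight = {"low": 120, "medium": 220, "high": 320}
weight_at_death = np.array([rng.normal(mean_weight[f], 30) for f in food]).clip(20)
ob_count = weight_at_death * rng.lognormal(mean=11, sigma=0.4, size=n)

df = pd.DataFrame({
    "food": food,
    "log_weight": np.log(weight_at_death),
    "log_ob": np.log(ob_count + 1),   # log(OB + 1), as in the analysis described above
})

# ANOVA: does food treatment affect log OB number?
anova_fit = smf.ols("log_ob ~ food", data=df).fit()
print(sm.stats.anova_lm(anova_fit, typ=2))

# ANCOVA: food treatment, log weight at death, and their interaction, i.e. does
# food treatment change the slope of OB production versus host weight?
ancova_fit = smf.ols("log_ob ~ food * log_weight", data=df).fit()
print(sm.stats.anova_lm(ancova_fit, typ=2))
```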
Our empirical data and preliminary literature survey have demonstrated that poor condition hosts may have lower parasite productivity than higher condition hosts; thus in some host-parasite combinations, the 'vicious circle' may not lead to a continuous increase in parasite propagule pressure. Poor condition hosts may still suffer more from parasites than hosts in better condition, but they may end up contributing fewer parasites to the overall parasite population pool. If environmental conditions are poor across a large landscape (e.g. widespread drought), this may result in large numbers of poor-condition hosts, and for some taxa, in a decrease in parasite population size, rather than an increase. We propose that the next step to a better understanding the relationship between host and parasite fitness is to examine in greater detail whether groups of taxa exhibit similarities with respect to the effect of environmentally-mediated variation in host condition on parasite fitness. Analytical or simulation models can then incorporate these general patterns to make predictions regarding the overall effect of variation in host condition on hostparasite dynamics (e.g. Daphnia/bacteria: [53]; Lepidopteran/ virus: [26,54,55]. Given that the world is a heterogeneous place, environmentallymediated variation in host condition is likely to be ubiquitous in nature. However, the outcome of parasite infection is not only affected by food availability or quality, as documented here, but also by factors such as variation in temperature [56][57][58][59] and salinity [60]. The goal moving forward is to determine whether broad patterns exist in how hosts and parasites respond to variation in these biotic and abiotic factors, and to use these patterns to inform how environmental heterogeneity affects hostparasite interactions at the population and community levels.
Hybrid coupling of finite element and boundary element methods using Nitsche's method and the Calderon projection In this paper we discuss a hybridised method for FEM-BEM coupling. The coupling from both sides uses a Nitsche type approach to couple to the trace variable. This leads to a formulation that is robust and flexible with respect to approximation spaces and can easily be combined as a building block with other hybridised methods. Energy error norm estimates and the convergence of Jacobi iterations are proved, and the performance of the method is illustrated on some computational examples. Introduction The coupling of finite element (FEM) and boundary element (BEM) methods is the most widely used approach for solving multi-physical problems on an unbounded domain. It allows one to take advantage of both methods. On the one hand, the BEM reduces the dimension of the problem by using the boundary integral equation, hence it is commonly used in exterior unbounded domains. On the other hand, the FEM is known for its robustness and universal applicability, even for problems of inhomogeneous or non-linear nature. The first coupled procedure was introduced by Zienkiewicz, Kelly and Bettess [ZKB77]. It has been analysed by Brezzi, Johnson and Nédélec [BJN78], [BJ79] and [JN80] for problems in unbounded domains. It is often referred to as the Johnson-Nédélec coupling. An extension to higher order equations was considered by Wendland [Wen86]. The convergence analysis requires compactness of the double layer potential, which can be obtained on a smooth boundary. Furthermore, even for a symmetric discretisation scheme, the coupling method produces a system of equations with a non-symmetric coefficient matrix. In order to avoid these disadvantages, a symmetric coupling of FEM and BEM was devised by Costabel [Cos87] and Han [Han90]. The independence of the compactness condition was obtained by using both equations of the Calderón system, contrary to the previously introduced methods that employ only one of the two equations of the Calderón system. Some years later, Sayas [Say09] showed that the weaker assumption of a Lipschitz coupling interface is sufficient for the Johnson-Nédélec coupling. His analysis has since been simplified by Steinbach [Ste11]. More recent developments have focused on the coupling of BEM with mixed FEM. In [BCS96] and [MVMP96] the authors analysed symmetric coupling of BEM and mixed FEM that uses Raviart-Thomas elements. Further work on coupling BEM with mixed FEM with such elements as Brezzi-Douglas-Marini or Brezzi-Douglas-Marini-Fortin spaces was analysed by Carstensen and Funken [CF00]. Gatica, Heuer and Sayas [GS06] and [GHS10] were the first to introduce the coupling of BEM and discontinuous Galerkin (DG) methods, in order to exploit the possibility to easily use high order approximation in the latter. Another coupling of interior penalty DG methods with BEM was presented by Of, Rodin, Steinbach and Taus [ORST12]. A general approach using the unified hybridisation technique was presented by Cockburn and Sayas [CS12]. The class of FEM considered includes the mixed, the DG and the hybridisable discontinuous Galerkin (HDG) methods. Further collaboration of these authors with Guzmán led to new convergence results published in [CGS12]. In this paper, we present the coupling of FEM and BEM using weak imposition of coupling conditions. Nitsche's method [Nit71] is widely used in the context of FEM for imposing boundary conditions.
In addition, methods based on Nitsche's method have been successfully utilized for BEM domain decomposition problems in [GHH09] and [CH12], and more recently for weakly imposing boundary conditions for BEM in [BBS19]. Merging these two approaches for FEM and BEM we weakly impose both coupling conditions in a hybridised formulation on the boundary. The hybridisation is made by introducing a trace variable and imposing the coupling in the form of a Nitsche type Dirichlet condition on the two systems. The use of Nitsche's method allows us to use the Dirichlet trace as the hybrid variable, ensuring continuity by a consistent penalty term. The test function partner of the trace variable, then acts to ensure continuity of fluxes. The global system can be constructed using arbitrary approximation orders for the two sub systems and the trace variable and the sub problem can be solved independently. The stability of the method poses no constraint on the approximation spaces and mesh refinement does not require special treatment as in the case of Johnson-Nédélec coupling. This means that the two systems can have independent meshes, that both have to be integrated only against the trace variable. We here consider the standard continuous FEM, but the formulation is by and large agnostic to the choice of FEM used in the bulk and the method can be applied with discontinuous FEM as well, such as DG, HDG [CGL09], or HHO [DPEL14] using a hybridised coupling on the interior domain boundary. In the case of using discontinuous FEM, our formulation can be interpreted as a hybridised interior penalty formulation of the class of methods discussed in [CS12]. Finally we note that, thanks to the use of Nitsche type mortaring, the method proposed herein can be used in the framework for unfitted hybridised methods introduced in [BEH + 19]. In that case a surface mesh is required for the definition of the BEM method, but the FEM approximation on the interior domain can be computed on an unfitted bulk discretisation. As many existing approaches of coupling FEM and BEM, we use Finite Element Tearing And Interconnecting (FETI) and Boundary Element Tearing And Interconnecting (BETI) type of methods [LS05] to solve the reduced system for the hybrid variable. FETI is formulated using a Schur complement formulation, while BETI is usually formulated in terms of Steklov-Poincaré operators. Although Nitsche's method is an established framework for domain decomposition for finite elements methods such as FETI, it was not recognised by BETI community. In this paper, we demonstrate how the hybrid Nitsche approach can be integrated into the BETI framework. The rest of the paper is organized as follows. We introduce the model problem in this section. In Section 2, we present on continuous level the symmetric coupling of BEM and FEM formulation known from [Cos87] and [Han90]. The discrete formulation including weakly imposed coupling condition is introduced and analysed in Section 3. Although the formulation obtained is not symmetric, we comment of how symmetry can be obtained. We discuss iterative domain decomposition in the model case of a simple relaxed Jacobi algorithm in Section 4 and prove its convergence. In Section 5, we present some numerical results and we conclude with some remarks in Section 6. Model problem Let us consider the unbounded domain Ω = R 3 . 
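The boundary integral operators that appear throughout the rest of the paper are all built from the free-space Green's function of the Laplacian; for later reference, and as standard textbook material (cf. [Ste08]) rather than a statement specific to this paper, it reads as follows, together with the single and double layer potentials of densities on a Lipschitz surface Γ (introduced below).

```latex
% Fundamental solution of the Laplacian in R^3, and its two-dimensional analogue:
G(x,y) = \frac{1}{4\pi\,|x-y|} \quad (x \neq y,\ \text{in } \mathbb{R}^3), \qquad
G(x,y) = -\frac{1}{2\pi}\,\log|x-y| \quad (\text{in } \mathbb{R}^2).

% Single layer and double layer potentials of densities lambda and v on Gamma:
(\mathcal{V}\lambda)(x) = \int_{\Gamma} G(x,y)\,\lambda(y)\,\mathrm{d}s_y, \qquad
(\mathcal{K}v)(x) = \int_{\Gamma} \frac{\partial G}{\partial n_y}(x,y)\,v(y)\,\mathrm{d}s_y,
\qquad x \in \mathbb{R}^3 \setminus \Gamma .
```

Taking Dirichlet and Neumann traces of these potentials yields the boundary operators V, K, K' and W and the Calderón projectors used below; the sign and normalisation conventions here are the usual ones and are assumed to match those of the paper.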
We divide Ω into a bounded internal part Ω − and an unbounded external part Ω + with common Lipschitz boundary Γ, with n the outer unit normal vector of the domain Ω − on Γ. We let ∂ n u := ∂u ∂n denote the outward normal derivative, f ∈ L 2 (Ω) be a function with support in Ω − and introduce a function ε ∈ L ∞ (Ω), ≥ 0. Then we can formulate our model problem as follows The function ε is introduced to make the interior problem heterogeneous and hence unsuitable for treatment using the boundary element method alone. Remark 1. It is straightforward to extend the discussion to the case with a smoothly varying diffusion coefficient in Ω − which has a jump over Γ. Lemma 1 (Duality pairing relation). For any λ ∈ H − 1 2 (Γ) and v ∈ H 1 2 (Γ), the following relation holds Proof. It is obvious for v = 0 and immediately by the definition of the dual norm We start with the variational formulation of the internal problem. Applying integration by parts for first equation of (1.1) for every v ∈ H 1 0 (Ω − ) we have We define the Green's function for the Laplace operator in R 3 as follows In this paper, we focus on the problem in R 3 . A similar analysis can be used for problems in R 2 , in which case this definition should be replaced by G(x, y) := log |x−y| 2π . Following the standard approach (see, e.g. [Ste08, Chapter 6]), we introduce single layer and double layer operators V : H − 1 2 (Γ) → H 1 (Ω + ) and K : H where x ∈ Ω + \ Γ and n y is an outer unit normal vector (for Ω i , i ∈ {−, +}) in the point y. Following [Ste08, Chapter 1], we define the Dirichlet and Neumann traces where H 1 (∆, Ω i ) := v ∈ H 1 (Ω i ) : ∆v ∈ L 2 (Ω i ) , for i ∈ {−, +}, and n x is an outer (for Ω − ) normal vector to Γ in the point x. The following results will be useful in what follows. We use {·} Γ to denote an average of the interior and exterior traces of a function. Then, applying the trace mappings yields to the single layer, double layer, adjoint double layer potentials and hypersingular boundary integral operator For the solution u of the problem (1.1), we have the following boundary integral equations on Γ denotes two Calderón projectors defined as follows From the relation (2.5) for external traces we can construct the following exterior Dirichlet-to-Neumann operator Obviously, it makes sense only if the inverse of the operator V exists. Using Dirichlet-to-Neumann operator (2.6) we introduce a new variable λ = γ + N u = ∂ n u + as The classical symmetric coupling that satisfies the transmission conditions of (1.1) is as follow Well-posedness of the continuous problem The following results are well known (see [Cos87] or [Han90]), but for reader's convenience we present them in the case of problem (1.1). Let us propose a more compact formulation of (2.8). We finished by applying the trace inequality (2.4) to terms including · Proof. Using coercivity of V (see [Ste08, Theorem 6.22]) and coercivity of W (see [Ste08, Theorem 6.24]) we obtain wherev denotes the average over Γ of v and c ε = min(1, ε). This shows (2.12) when ε > 0. For the case = 0 we need a Poincaré inequality of the form is defined by the second equation of (2.8). We immediately see that if this is true then there exists α > 0 such that Then we need to study We argue by contradiction. Assume that ϕ(v) = 0. Then u 0 is the trace of a solution to the homogeneous Neumann problem in Ω − . 
However by the first line of the left relation of equation (2.5) there holds for all such traces (2.14) Hence the functional is defined by However since the operator on the left hand side, defined by, V is injective it follows that ϕ(v) = 0, which leads to a contradiction. This concludes the proof. The existence and uniqueness of the solution of problem (2.9) is achieved by using the Lax-Milgram theorem. The discrete problem We assume that Ω − is a polyhedral domain. The boundary Γ may be decomposed in a set of n planar surfaces {Γ j } n j=1 . It will be convenient to use following broken Sobolev spaces over the polyhedral boundary Γ. For s > 1 define Let T h be a triangulation of Ω − made of tetrahedrons. For each triangulation T h , E h denotes the set of its facets. In addition, for each of element K ∈ T h , h K := diam(K), and h := max K∈T h h K . Let G i , i = 1, 2 denote two different surface triangulation of the boundary Γ. For notation convenience we here assume that the trace mesh of T h and the G i all have the similar local mesh size. The following result will be useful in what follows. Lemma 5 (Trace inequality). There exists C max > 0, independent of h K , such that for all K ∈ T h and polynomial function v in K the following discrete trace inequality holds To discretise the problem (2.8) over the triangulation one can choose either continuous or discontinuous finite elements. For simplicity of the analysis we choose the following spaces Using the above spaces we propose the hybrid discrete formulation of the problem (2.8) The stabilisation parameter τ > 0 has to be chosen appropriately. The formulation of bilinear form a h is well known for example from [Egg09]. As we said before it is possible to use the discontinuous finite element method for example symmetric interior penalty hybrid discontinuous Galerkin method presented in [ES10]. Remark 2 (Impedance boundary condition). The hybrid weakly imposed Dirichlet and Neumann boundary conditions is related to an impedance boundary condition of the type with γ = h τ . This can be seen considering terms associated with v h in above definition of the bilinear forms. Symmetric formulation Despite using the symmetric Nitsche method, our whole system is not symmetric. This is a consequence of the lack of symmetry of the boundary element method with weak imposition. We can use the Steklov-Poincaré operator to eliminate the flux variable, so that the non-symmetric method above is transformed into a symmetric reduced system as we show below. The following equations are associated with bilinear form b h from (3.16) reads Similar to the continuous formulation we use the Dirichlet-to-Neumann operator (2.6) to obtain Injecting this relation into the second equation leads to the formulation of the new symmetric bilinear form To show the well-posedness of (3.16) we need the ellipticity of the bilinear forms a h and b h Lemma 8 (Coercivity). Assume that positive constant τ is large enough. Then, there exists positive constant α such that for all Proof. Let us start with bilinear form a h . First we assume that ε ≥ ε min > 0 Using Cauchy-Schwarz and trace inequalities (3.15), followed by Young's inequality, we arrive at We finish by applying the equivalence of the norms (3.20) under the assumption that τ > 2C 2 max . In the case of bilinear form b h , by using the results from Lemma 4 we obtain, withw h = Observe that when ε > 0 we may bound where we applied the trace inequality (2.4) in the last step. 
The right hand side is controlled by the lower bounds on a h and b h above. Once again, we finish by applying the equivalence of the norms (3.21). In case ε = 0 we need to show that a Poincaré inequality holds, similar to (2.13), this time on the form To this end, since coercivity holds up to a constant, we may assume that w − h =w h = w + h =w h and proceed verbatim as in the continuous case, since in that case the continuous and discrete expressions corresponding to (2.14) are the same. The existence and uniqueness of the solution of problem (3.16) is achieved by using the Lax-Milgram theorem. In addition, the proposed method is consistent as the following result shows. Error analysis In this section we present the error estimates for the method. These estimates are proved using the following norm The first step is the following version of Cea's lemma. Now using Lemma 7, we get continuity of Then, using the triangle inequality we see that and it follows from (3.28), Lemma 9 and (3.29) that Thus, we get (3.27) with C := 1 + β α . Proof. The result is a consequence of (3.27) and approximation. Applying triangle inequality and trace inequality [BS08, Theorem 1.6.6] followed by Young's inequality, we obtain For the boundary part, by applying triangle inequality, we obtain We conclude the proof by applying Lemma 10. Iterative solution For the solution of the linear system we will iterate on the Schur complement for the trace variable, solving independently in the two sub domains. To justify this split approach we here show that a simple relaxed Jacobi iteration on the two systems will converge. The condition number of the Schur complement can be analysed using the arguments of [BEH + 19, Section 4]. 1. Given u n solve for u n+1 and λ n+1 by solving the linear system 2. Given u n+1 and λ n+1 , solve for the new trace variable u n+1 , for σ > 0, To prove that the iterative algorithm converges we only need to show that if f = 0, u n+1 , u n+1 and λ n+1 all go to zero as n → ∞. We add and subtract u n+1 in the first equation and add the second to obtain Test this equation with u n+1 , λ n+1 , u n+1 and use coercivity to obtain Here we used the well known formula (4.31) Considering the terms on the right hand side and using trace inequality (3.15) we see that Using the duality pairing between H 1 2 and H − 1 2 followed by the global inverse inequality u n+1 − u n H 1 2 (Γ) [SS11,Theorem 4.4.3]) and Young's inequality we have Using the once again the telescoping property (4.31) we see that for σ > τ −1 α −1 max C max , C −2 t , τ + 2, the right hand sides can all be absorbed in the left hand side to yield It follows that as N → ∞ u n+1 , u n+1 and λ n+1 all go to zero, since the sum of the left hand side has to be bounded by the constant of the right hand side. Numerical experiments In our experiment tests we consider ε = 1 and V h × Λ l h × M m h with j = k = m = 1. The value l varies depending on the geometry of domains considered. We let the trace meshes G 1 and G 2 coincide with the trace mesh of T h on Γ. For our experiments we use two numerical softwares: FEniCS [ABH + 15] and Bempp [SBA + 15]. We use the solution of interior and exterior Dirichlet boundary value problems to construct a Schur complement system solving the following equations The solution u on Γ of the Schur complement is obtained using the nested conjugate gradient method (CG) [HS52]. 
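To make the splitting concrete, the following is a minimal algebraic sketch of the nested Schur-complement solve just described: the trace unknown on Γ is obtained by an outer CG iteration whose operator application requires one inner interior solve and one inner exterior solve. All matrices below (interior block, exterior block, coupling blocks, load vector) are random symmetric positive definite stand-ins rather than the bilinear forms a_h and b_h; in an actual implementation they would be assembled with FEniCS and Bempp.

```python
# Algebraic sketch of the nested Schur-complement solve for the trace variable.
# All matrices are random SPD stand-ins, not the assembled FEM/BEM blocks.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(1)

def spd(n):
    """Random symmetric positive definite stand-in matrix."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

n_int, n_ext, n_tr = 60, 40, 25
A_int = spd(n_int)                            # interior (Nitsche-FEM) block
A_ext = spd(n_ext)                            # exterior (BEM) block
C_int = rng.standard_normal((n_int, n_tr))    # coupling: trace -> interior rhs
C_ext = rng.standard_normal((n_ext, n_tr))    # coupling: trace -> exterior rhs
f_int = rng.standard_normal(n_int)            # interior load vector

def apply_schur(x):
    """S x = C_int^T A_int^{-1} C_int x + C_ext^T A_ext^{-1} C_ext x (two inner solves)."""
    y_int = np.linalg.solve(A_int, C_int @ x)   # inner interior Dirichlet solve
    y_ext = np.linalg.solve(A_ext, C_ext @ x)   # inner exterior Dirichlet solve
    return C_int.T @ y_int + C_ext.T @ y_ext

S = LinearOperator((n_tr, n_tr), matvec=apply_schur, dtype=np.float64)
g = C_int.T @ np.linalg.solve(A_int, f_int)     # reduced right-hand side

u_trace, info = cg(S, g)                        # outer CG on the trace variable
assert info == 0, "outer CG did not converge"

# With the trace known, the two sub-problems are recovered independently.
u_int = np.linalg.solve(A_int, f_int - C_int @ u_trace)
lam_ext = np.linalg.solve(A_ext, -C_ext @ u_trace)
```

The point of the sketch is only the structure: the outer iteration never needs the two sub-system matrices together, so the interior and exterior problems can be meshed, assembled and solved independently.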
Although one can use direct solvers to solve the interior and exterior Dirichlet boundary value problems, we here use iterative solvers to apply preconditioners. The interior Dirichlet boundary value problem that is a symmetric system associated with bilinear form a h (3.16) is solved by using FEniCS and CG without and with algebraic multigrid preconditioner. The discrete exterior problem associated with bilinear form b h (3.16) is not symmetric, however as we shown in Section 3.1, we can apply the Steklov-Poincaré operator to the flux variable and transform the equations into a symmetric system. For clarity of the code, we here simply used the generalized minimal residual method (GMRES) [SS86] without or with mass matrix preconditioner to solve in Bempp the external Dirichlet boundary value problem. The tolerance of the iterative solvers is chosen to be not greater the 10 −8 . A Jupyter notebook demonstrating the functionality used in this paper will be made available at www.bempp.com. Choice of parameter τ Thanks to Lemma 8 we know that the stabilisation parameter τ in the discrete problem (3.16) must be large enough to assure the coercivity. We start with an experiment showing how the value of the parameter τ influences the convergence and number of iterations. We consider Ω − as a unit sphere with boundary Γ. We define u − (x, y, z) = 1 2π sin π(x 2 + y 2 + z 2 ) + 1 2π cos π(x 2 + y 2 + z 2 ) + 2π + 1 2π , u + (x, y, z) = 1 x 2 + y 2 + z 2 . It is easy to check that for the unit sphere domain Ω − the above elementary functions are the solution of our problem (1.1). Figure 1 shows the error values for different values of τ and j = k = m = l = 1. In this case, Γ is smooth, and so W 1 h = Λ 1 h . In Figure 1a, we plot in log-log scale the error of the interior solution u − − u − h L 2 (Ω − ) and on Figure 1b the error of the exterior solutions u + − u + h L 2 (Γ) + λ + − λ + h L 2 (Γ) for h < 2 −1 (solid line with circles), h < 2 −2 (dash-dotted line with diamonds), and h < 2 −3.5 (dashed line with squares). It can be seen from the Figures 1 that both errors stop decreasing when τ is around 10. Furthermore, for τ > 10 the iterations increase with growing τ , hence we fix τ = 10 for the next experiments. Spherical subdomain Let Ω − once again be the unit sphere, Γ its boundary and consider the same exact solution as above. Figure 2 shows the convergence, CG iteration counts and solving time when τ = 10 and k = l = 1. In Figure 2a In Figure 2b, we plot in log-log scale the number of iterations taken by CG to solve the non-preconditioned system associated with exterior problem (dashed line), compared with the preconditioned system (solid line). In addition, Figure 2c shows the time required by solvers of interior and exterior systems. The interior system is solved by CG with or without algebraic multigrid preconditioner and the exterior system is solved by GMRES with or without mass preconditioner. Preconditioning reduces both the iteration count and the CPU time needed by the solver. Cubical subdomain Let Ω − = (0, 1) 3 be a cube and we solve the problem (1.1) with f = 1. We choose τ = 10 and j = k = m = l + 1 = 1, where Λ 0 h is the space of piece-wise constants per element in the trace space. Figure 3a shows the convergence when τ = 10 and j = k = m = l + 1 = 1. In this case, the In Figure 3, we plot as well in log-log scale the number of iterations and solving time taken by CG to solve the non-preconditioned system (dashed line), compared with the preconditioned system (solid line). 
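As a self-contained illustration of why the preconditioned solves above need fewer iterations, the following sketch compares plain and AMG-preconditioned CG on a model Poisson matrix. It uses pyamg and a gallery matrix purely as stand-ins; the experiments reported in this section use the solvers built into FEniCS and Bempp instead.

```python
# Effect of an algebraic multigrid preconditioner on the CG iteration count,
# shown on a stand-in 2D Poisson matrix (not the assembled interior system).
import numpy as np
import pyamg
from scipy.sparse.linalg import cg

A = pyamg.gallery.poisson((60, 60), format='csr')   # stand-in SPD stiffness matrix
b = np.ones(A.shape[0])

def count_iters(M=None):
    it = [0]
    def cb(_):
        it[0] += 1
    x, info = cg(A, b, M=M, maxiter=5000, callback=cb)
    assert info == 0
    return it[0]

ml = pyamg.smoothed_aggregation_solver(A)            # build the AMG hierarchy
M_amg = ml.aspreconditioner(cycle='V')               # one V-cycle as preconditioner

print("plain CG iterations:          ", count_iters())
print("AMG-preconditioned iterations:", count_iters(M_amg))
```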
Once again preconditioning brings improvement in terms of iteration counts and time taken to solve the problem. Conclusions We have analyzed and demonstrated the effectiveness of Nitsche type methods for coupling finite element and boundary element formulations. Our approach gives flexibility to choose a continuous or discontinuous finite element space in the FEM solver, hence the interior problem can be solved essentially using any method that allows for the hybridised Nitsche method for interdomain coupling. We are also free to choose the trace variable minimising the coupling degrees of freedom. In this paper, we focus on the technical aspects and analysis to allow for flexibility within our framework. A work demonstrating the applicability for large problems using parallel approach is in preparation. The method can be extended to other models such as the Helmholtz equations. In this case it is known [SW11] that the use of impedance interface conditions is advantageous and such an approach can be mimicked in the present framework by letting the stabilisation constant have nonzero imaginary part and depend on the wave number. Formulations of the presented FEM/BEM coupling method to the Helmholtz and Maxwell problems are currently in preparation. For these cases however more effective operator preconditioning techniques for exterior problem are essential, especially for high frequency problems. Despite that, we expect that their implementation will be similar to the presented Laplace case.
RF-Alphabet: Cross Domain Alphabet Recognition System Based on RFID Differential Threshold Similarity Calculation Model Gesture recognition can help people with a speech impairment to communicate and promote the development of Human-Computer Interaction (HCI) technology. With the development of wireless technology, passive gesture recognition based on RFID has become a research hotspot. In this paper, we propose a low-cost, non-invasive and scalable gesture recognition technology, and successfully implement the RF-alphabet, a gesture recognition system for complex, fine-grained, domain-independent 26 English letters; the RF-alphabet has three major advantages: first, this paper achieves complete capture of complex, fine-grained gesture data by designing a dual-tag, dual-antenna layout. Secondly, to overcome the disadvantages of the large training sets and long training times of traditional deep learning. We design and combine the Difference threshold similarity calculation prediction model to extract digital signal features to achieve real-time feature analysis of gesture signals. Finally, the RF alphabet solves the problem of confusing the signal characteristics of letters. Confused letters are distinguished by comparing the phase values of feature points. The RF-alphabet ends up with an average accuracy of 90.28% and 89.7% in different domains for new users and new environments, respectively, by performing feature analysis on similar signals. The real-time, robustness, and scalability of the RF-alphabet are proven. Introduction With the rapid development of gesture recognition technology and wireless technology, human gesture recognition has received increasing attention from academia and industry [1]. Researchers have adopted different approaches to achieve fine-grained human perception [2,3]. Human gesture perception technology is an emerging branch of human perception technology, and the applications involved in it have a significant impact on our daily life: e.g., virtual reality (VR), smart home, smart city, 5G communication, and sign language recognition technology [4][5][6][7][8][9]. Especially in the context of the current epidemicridden environment, RFID-based gesture recognition can be zero-contact, non-invasive, and can be applied to many places. For example, it can be used in hospitals, libraries, supermarkets, museums and other public places to avoid germ infection by gesture recognition of the human body. Unlike traditional keyboard input and voice input, gestures give users a better experience in noisy environments. RFID tags are everywhere and commonly used for bus cards, car keys, pass cards, etc. One main reason for the widespread usage is the simplicity and extremely low cost of the RFID tags (each tag costs 5-10 cents USD). With the development of 5G communication technology, it is possible to interact with specific smart devices by gesture input in daily family life. Traditional solutions for gesture recognition are performed by users wearing specific wearable sensors or cameras [10][11][12][13][14][15][16][17]. Although these traditional solutions have high recognition rates, wearable sensor-based approaches tend to place an additional burden on the human body and wearing these devices outside the home is often accompanied by inconvenience and forgetfulness. The Kinect-based multi-level gesture recognition proposed by Feng et al. 
[18] uses component concurrent features and sequentially organized features to extract semantic units from frame segmentation and the entire gesture sequence features, and then classify motion, position and shape components sequentially. Thus, a relatively high single-learning gesture recognition performance is obtained. Although the accuracy of gesture recognition using camera-based gestures is high, it is sensitive to light and often does not provide high recognition in dark environments. It may even violate the user's privacy [19]. In this paper, we explore a flexible, easy-to-deploy, non-intrusive gesture recognition mechanism. For wireless signals, when a user is performing specific gestures, these specific gesture transformations affect the RF signal (amplitude or phase) of the wireless channel. By analyzing these channel transformations, data processing and feature extraction, the recognition of gestures can be achieved. The literature [20] uses dynamic time warping (DTW) based pattern matching to achieve gesture classification. However, these traditional methods require an extensive data acquisition process and data preprocessing as well as a long time data training process. The performance of gesture recognition depends largely on the choice of feature extraction algorithm or the fit of the neural network. The model-based method has the advantages of fewer data sets, short training time and high scalability. Based on this idea, we designed a device-free, complex, fine-grained, domain-independent gesture recognition system, the RF-alphabet. As shown in Figure 1, a dual-antenna, dual-tag layout is used. Users draw 26 letters of the alphabet on the back of the tags. The RF-alphabet can map the captured signals to specific gestures. In the experiment, volunteers draw 26 letters in three different locations (dormitory, conference room, and classroom). In this paper, we explore a flexible, easy-to-deploy, non-intrusive gesture recognition mechanism. For wireless signals, when a user is performing specific gestures, these specific gesture transformations affect the RF signal (amplitude or phase) of the wireless channel. By analyzing these channel transformations, data processing and feature extraction, the recognition of gestures can be achieved. The literature [20] uses dynamic time warping (DTW) based pattern matching to achieve gesture classification. However, these traditional methods require an extensive data acquisition process and data preprocessing as well as a long time data training process. The performance of gesture recognition depends largely on the choice of feature extraction algorithm or the fit of the neural network. The model-based method has the advantages of fewer data sets, short training time and high scalability. Based on this idea, we designed a device-free, complex, fine-grained, domain-independent gesture recognition system, the RF-alphabet. As shown in Figure 1, a dual-antenna, dual-tag layout is used. Users draw 26 letters of the alphabet on the back of the tags. The RF-alphabet can map the captured signals to specific gestures. In the experiment, volunteers draw 26 letters in three different locations (dormitory, conference room, and classroom). There are a number of problems and challenges encountered in designing the RF-alphabet system for traditional gesture recognition. First, in drawing the 26 gesture letters, the user dynamically draws the gesture letters behind the tag, which will design complexity, diversity, and fine-grained signal transformations. 
Current commercial RFID readers provide a limited spatial resolution of the signals (RSS and phase), so capturing these fine-grained transformations becomes one of the challenges. Second, gesture recognition involves complex spatiotemporal signal transformations. Among the 26 English letters there exist some letters with very similar gesture signals, and RFID readers are often unable to sensitively perceive the transformations of such fine-grained gesture signals. Moreover, practical experimental sites are often accompanied by multipath effects [21], where the signals are received along with additional noise. Extracting the fine-grained gesture signal for recognition therefore becomes another challenge. Finally, when a user is in one environment, the gesture recognition accuracy may be high, but when a different user performs the gesture in a different environment, the accuracy tends to drop sharply [22]. Achieving domain-independent gesture recognition thus becomes a further challenge, and the properties of RF signals also increase the complexity of domain-specific feature extraction. Given the above problems and the limited RF information provided by current commercial RFID readers, we look for solutions in the original signal and in the model design. As shown in Figure 1, in our experiments we overcome the drawbacks of traditional multi-tag arrays, which are often accompanied by coupling effects. We use a specific antenna-tag layout with two antennas and two tags to successfully capture complex and fine-grained gesture signals. For gesture signals involving complex spatiotemporal transformations and multipath noise, we achieve reliable RF signal sensing after performing data smoothing, subtraction operations, data normalization, and data expansion on the raw data. We convert the original phase data, which seems to be irregular, into a 100 × 100 pixel picture. By designing and combining the models, we successfully converted the complex RF signals into recognizable waveband signals.
Eventually, higher-level phase features were extracted to achieve feature extraction of temporal and spatial modes, which in turn accurately realized the gesture recognition of 26 English letters. In the experiment, we conducted a large number of experiments in three different scenarios (dormitory, conference room, and classroom) and invited three volunteers (two men and one woman) to collect a sample set of about 8000 to evaluate the model. The contributions of this paper are as follows: (1) RF-alphabet is a device-free, dualtag-dual-antenna layout domain-independent 26 letters gesture recognition system. It overcomes the drawbacks of traditional gesture recognition using multi-tag arrays. (2) The RF-alphabet adopts a model-based design approach, which successfully achieves a lowcost, easy-to-deploy, and short training time model design. It overcomes the drawbacks of using traditional deep learning methods that require large data sets, long training time, and long experimental sample set collection time. (3) The RF-alphabet achieves cross-domain 26 letters gesture recognition on commercial RFID readers, which still has high accuracy for different domains due to the specific model design. After extensive experiments, The RFalphabet shows that it has a flexible, easy-to-deploy, and highly scalable gesture recognition system. The average accuracy rate for the 26 letters was 92.43%. The rest of the paper is organized as follows: Section 2 presents an overview of the research, Section 3 describes the preliminary work, Section 4 details the design of the 26-letter gesture recognition system, Section 5 discusses the implementation and evaluation of the experimental approach, Section 6 describes the research experiments and future research perspectives, and Section 7 summarizes the future research and the research summary. Related Work Current gesture recognition is divided into three main categories: gesture recognition based on wearable sensors, gesture recognition based on computer vision, and gesture recognition based on wireless technology. Among them, wearable-based gesture recognition technologies use wearable or nested sensing devices to capture hand or finger transformations. For example, inertial sensors are used to recognize eating gestures. Gesture transformation of the signal [23]. A glove designed using a combination of accelerometer and gyroscope technology is used to track the gesture signals of seat transitions by wearing the glove. Although the adoption of wearable sensing devices is common in real life, people often tend to forget to wear related devices when they go out or put an extra burden on their bodies when wearing these devices. Computer vision-based gesture recognition and human recognition systems use cameras or optical sensors to recognize gesture movements or human actions. In the context of the current deep learning environment, a large number of researchers have used gesture data trained by neural networks with high accuracy for testing specific actions performed by the human body. In the literature [24], Kinect is used to capture user gesture information and the captured data are used as pre-processed data for neural networks for feature training, which in turn are used to perform gesture recognition. In [21], an RGB camera is used for user gesture recognition. Although the gesture recognition obtained by computer vision-based methods has a high correct rate, these systems tend to be photosensitive and susceptible to lighting conditions. 
There is also the possibility of violating the user's privacy. The non-intrusive nature of RF-alphabet is able to solve the drawbacks of the above methods well. Gesture recognition based on wireless technology has received increasing attention from a large number of researchers because wireless devices have the advantages of being non-intrusive, easy to deploy, and highly scalable. Other wireless technologies such as WIFI [25], RFID, ultrasonic, and radar have been widely used for gesture recognition [22]. Yang et al. proposed the BVP (body-coordinate velocity profile) model for feature capture in independent domains to make full use of the channel information of WiFi for human activity sensing [26]. Wang et al. used acoustic signals to accurately identify gesture signals in the millimeter range and recognize them [27]. FingerPass uses channel state information (CSI) in WiFi to orderly authenticate users with finger gestures in smart homes, achieving high accuracy and low response latency. The real-time system involved in [28] Zhang et al.'s SMARS model using a commercial WiFi chipset is able to detect user sleep and assess sleep quality. Ubiquitous, non-intrusive and non-contact daily sleep detection was achieved. The RF-alphabet is able to sense complex and fine-grained alphabetic gesture information through a unique dual-antenna dual-tag layout and extract digital signal features through a Euclidean distance similarity prediction model, avoiding the disadvantages of traditional gesture recognition that requires a large number of experimental data sets and realizing high-precision and real-time gesture recognition. The RF-alphabet we designed also absorbs the advantages of wireless technology and applies it to human gesture recognition. Preliminary In this section, we introduce the RFID principle while considering the antenna layout and constructing the dual antenna-tag layout used in our experiments. RFID Principle A typical RFID system consists of a transmitter, receiver, microprocessor, antenna, and tag. The transmitter, receiver and microprocessor are generally combined together to become a reader; in RFID technology in the studio, the reader sends a signal, and after the connection of the antenna, the tag receives the signal and feeds back internal load information, and then back to the reader via the antenna, the reader identifies the information and then transmits the results to the host computer running in the background. The signal received by the reader, the backscattered signal S(t), which can be expressed as [29]: where a(t) and θ(t) are the amplitude and phase of the backscattered signal, respectively. j is an imaginary unit, and d is the distance between the antenna and the tag. Since the tag receives the signal from the antenna and generates a backscattered signal, the actual propagation distance should be 2d. θ 0 is the initial offset caused by the device and contains the phase shift caused by the reader, the tag and the antenna. λ is the wavelength of the RF signal. When the user crosses the detection area, the signal will propagate along three directions, including the direct path, the reflection path of the obstacle and the dynamic reflection path when the user moves. The backscattered signal S(t) is the superposition of dynamic and static reflection signals, the static reflection signal is the composite signal consisting of the direct path and the reflection path of the obstacle, and the dynamic reflection signal is the reflection signal when the user moves. 
When the user moves between the antenna and the tag, assuming a total of n reflective paths coming from the user, the received signal S(t) can be expressed as

S(t) = S_s(t) + S_d(t),  with  S_d(t) = ∑_n a_n e^(−j 2π d_n(t)/λ),

where S_s(t) is the static reflection signal, S_d(t) is the dynamic reflection signal when the user is moving, a_n is the amplitude of the user at the n-th path, and d_n(t) is the propagation distance at time t under path n. We use phase information as the feature of the different gestures performed by the user. The gesture drawing of letters in the alphabet is fine-grained, and the traditional RSS information is not able to effectively distinguish such fine-grained gesture information, so we use phase information to extract the fine-grained gesture features.
Single Antenna-Tag Layout
Because there are similar waveforms (a and u, n and h) in the letters of the English alphabet, it is often difficult to distinguish the gesture signal waveforms when using the single antenna-single tag layout, which leads to recognition errors. At the beginning of the experiment, a pre-experiment was carried out by means of a single tag and single antenna combination. The results are shown in Figure 2: the phase waveforms of these similar letters are relatively similar, and they are very difficult to recognize. Because it is necessary to analyze and recognize all the letters of the alphabet, if the single antenna and single tag layout is used for data acquisition, it will be difficult to distinguish similar letters as the number of recognized letters increases.
Dual Antenna-Dual Tag Layout
It is difficult to distinguish similar letters in the layout of a single antenna and single tag. Through our observation, we find that the layout of a double antenna and double tag can effectively capture the drawn gestures. The user's gesture is tracked in all directions by the antennas in front of and to the right of the user. At the same time, the gesture features of the two antenna-tag arrays are fused to recognize the letter. Figure 3 shows the waveform captured by the side antenna.
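The following is a small simulation of this effect under the backscatter phase relation implied by the definitions above, i.e. a wrapped phase of the form θ(t) = (2π · 2d(t)/λ + θ_0) mod 2π. The antenna positions, the offset θ_0 and the two hand strokes are invented for illustration only; the carrier frequency matches the reader settings reported later in the implementation section. Two strokes that differ mainly sideways produce nearly identical phase sequences at the front antenna but clearly different ones at the side antenna.

```python
# Simulated wrapped backscatter phase seen by a front and a side antenna while
# the hand traces two made-up strokes. Positions, offset and strokes are invented.
import numpy as np

C = 3e8
F = 920.875e6                      # reader carrier frequency
LAM = C / F                        # wavelength, roughly 0.33 m
THETA0 = 0.7                       # device-dependent phase offset (arbitrary)

front_ant = np.array([0.0, 1.0, 0.0])   # hypothetical antenna positions (metres)
side_ant = np.array([1.0, 0.0, 0.0])

def wrapped_phase(track, antenna):
    """Backscatter phase for a hand trajectory seen from one antenna."""
    d = np.linalg.norm(track - antenna, axis=1)          # antenna-hand distance
    return np.mod(2.0 * np.pi * 2.0 * d / LAM + THETA0, 2.0 * np.pi)

t = np.linspace(0.0, 1.0, 200)
# Two made-up strokes that differ mainly in the sideways (x) direction.
stroke_1 = np.stack([0.05 * np.sin(2 * np.pi * t), 0.3 * t, 0.1 * t], axis=1)
stroke_2 = np.stack([0.05 * np.sin(4 * np.pi * t), 0.3 * t, 0.1 * t], axis=1)

def max_phase_gap(p, q):
    """Largest pointwise phase difference, computed on the unit circle."""
    return np.abs(np.angle(np.exp(1j * (p - q)))).max()

print("front antenna gap:", max_phase_gap(wrapped_phase(stroke_1, front_ant),
                                           wrapped_phase(stroke_2, front_ant)))
print("side antenna gap: ", max_phase_gap(wrapped_phase(stroke_1, side_ant),
                                           wrapped_phase(stroke_2, side_ant)))
```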
For the similar letter pair a and u, it can be seen that even when the waveforms at the front antenna array are similar, the waveform captured by the side antenna array can effectively distinguish the two letters. This further verifies the feasibility of the dual antenna-dual tag array.
Confused Letter Recognition
During the experiment, we found that the combination of double tags and double antennas largely solves the problem of similar signal waveforms for similar letters. However, there are still two groups of English letters that are difficult to distinguish, namely a and d, and h and n, which we call confused letters. Figures 4 and 5 show the phase waveforms of these two groups of letters after data preprocessing and normalization, respectively. Since the drawing gestures within each group are very similar, the phase waveforms collected by the two antennas differ little. For this case, the distinction can be made by comparing the feature point phase magnitudes of the confused letters.
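A minimal sketch of this fallback rule is given below: when two letters produce nearly identical waveform similarity, the decision is made by comparing the phase values at the detected feature points. The feature values and the template dictionary are invented placeholders, not measured data.

```python
# Disambiguating "confused" letters by comparing phase values at feature points.
# All numbers below are invented placeholders for illustration.
import numpy as np

def resolve_confused(sample_feats, template_feats):
    """Pick the template whose feature-point phase values are closest on average."""
    best, best_gap = None, np.inf
    for letter, feats in template_feats.items():
        k = min(len(sample_feats), len(feats))
        gap = np.mean(np.abs(np.asarray(sample_feats[:k]) - np.asarray(feats[:k])))
        if gap < best_gap:
            best, best_gap = letter, gap
    return best

# Invented feature-point phases (radians) for a sample and two confused templates.
sample = [1.10, 2.35, 0.80, 1.95]
templates = {"a": [1.05, 2.40, 0.85, 1.90], "d": [1.60, 2.90, 1.30, 2.45]}
print(resolve_confused(sample, templates))   # -> 'a'
```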
Signal Pre-Processing Module
Since the collected raw phase is often accompanied by noise in the environment and is influenced by the different antennas and tags, it is not suitable to import the raw signal into the model as training data, so we first import the signal into the pre-processing module to improve the recognition ability of the signal.
1. Noise removal. The experimental deployment consists of two 9 dBi UHF gain antennas and two identical RFID UHF electronic tags. In the experiment, the placement of the antennas and the distance between the tags often cause different effects; these effects include noise in the environment, the impact of the antennas themselves, and the multipath effect between tag and tag and between tag and antenna. The multipath effect cannot be eliminated, so we make appropriate adjustments to the experimental deployment by increasing the distance between tags and between tags and antennas, and by trying different placement positions (as detailed in the experimental section), while ensuring the reception of the gesture signal, in order to minimize its impact.
2. Data normalization. In RFID systems, due to the nature of the tags and the different locations of human gesture movements, the collected raw phases may have different orders of magnitude, which lowers the comparability between data and thus reduces the accuracy of gesture recognition. The data normalization operation rescales the original data so that they are of the same order of magnitude, which solves the comparability between data metrics and at the same time improves the convergence rate and the accuracy of the model. Specifically, we use the min-max normalization method to linearly transform the original data into the output X_1:

X_1 = (X − min) / (max − min),

where X is the data after the subtraction operation, max is the maximum value of the current sample data, and min is the minimum value of the current sample data.
3. Data smoothing. In gesture recognition experiments, the problem of excessive noise in the raw data is often encountered, such as for the raw data "a" output from Tag1 shown in Figure 6. We can observe that the output "a" is too jittery, which makes it difficult to distinguish accurately in some cases, so we perform a simple smoothing process on the data. In this paper, we use the Savitzky-Golay filter to smooth the data because it is well suited to RFID data in which data variation is dominant. Specifically, we set the width of the filter window to m = 2n + 1, take the prediction points x = (t − n, ..., t − 1, t, t + 1, ..., t + n), and fit the data within the window using a polynomial of order k − 1; the fitting equation is shown in (6).
For the above system of equations to have a solution, we require 2n + 1 ≥ k. The fitting parameter A is determined by least squares, which gives (7). We simplify the above matrix to obtain (8). The subscripts in the above equations indicate the respective dimensions; e.g., A_{k×1} denotes a parameter with k rows and 1 column. We can find A_{k×1} by the least squares method as

A_{k×1} = (X^T X)^{−1} X^T Y,

where the superscript T denotes transpose. Then the predicted or filtered value of the model Y is

Ŷ = X A = X (X^T X)^{−1} X^T Y =: B Y.

Finally, we obtain the relationship matrix B between the filtered and the observed values. In our experiments, we set the window length to 15 and the order to 3. After deriving the matrix B, we can quickly convert the observations to filtered values, thus achieving data smoothing.
Gesture Detection
The vast majority of the gesture phase signals of the 26 different letters show significant differences, and the few gestures with greater similarity can be narrowed down by the dual-antenna, dual-tag setup of the experiment. Therefore, we use the phase signal as the only criterion for gesture detection. After obtaining the pre-processed signals, we feed them into the gesture detection module to further differentiate and recognize them. The gesture detection module is divided into two parts: differential threshold estimation, and the Dtsc model (similarity calculation + classification).
A. Differential threshold method. The differential thresholding method is usually used as a fast algorithm for ECG QRS wave detection and is essentially an algorithm for processing the signal. The main principle is that the rising or falling slope of the ECG waveform differs significantly from that of other waveforms, so the location of the R-wave can be detected from the derivative of the ECG signal sequence with respect to time. In our experiments, we use the same method as in [30] to process the phase signal: we take the zero crossings of the first-order derivative and the extreme points of the second-order derivative as our R-wave positions (i.e., feature points), and then determine our threshold by the second-order difference

Δy = y_{x+2} − 2 y_{x+1} + y_x,

where y_x denotes the phase value at timestamp x.
B. Dtsc model (Difference threshold similarity calculation). For different alphabetic gesture information, the signal waveforms are often different, and the absolute differences in individual numerical characteristics can be reflected by the Euclidean distance. For model selection, we use a combination of similarity calculation and threshold classification. From the above differential thresholding method we obtain the feature point set used to represent each letter, (x_1, y_1), (x_2, y_2), (x_3, y_3), (x_4, y_4), ...; the mean value is computed for the feature point set of each letter, and the first eight points are selected as the training feature point set because the feature point sets obtained for different letters have different lengths.
Similarity calculation. For the similarity algorithm, we use the Euclidean distance to measure the similarity of different volunteers under the same gesture. Although Pearson's correlation coefficient can distinguish independent and continuous phases, and small differences between different figures can be derived from the correlation coefficient, the Euclidean distance provides great convenience for our subsequent classification problem.
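The following is a minimal sketch of the pre-processing and Dtsc feature pipeline described above: min-max normalisation, Savitzky-Golay smoothing with window length 15 and order 3, feature points selected from the second-order difference, and Euclidean similarity against per-letter templates (the exact similarity formula and threshold rule are given next). The threshold value, the toy input signal and the template dictionary are placeholders rather than the paper's fitted values.

```python
# Sketch of the pre-processing + Dtsc feature pipeline with placeholder data.
import numpy as np
from scipy.signal import savgol_filter

def preprocess(phase):
    """Min-max normalise, then smooth with a Savitzky-Golay filter (window 15, order 3)."""
    x = np.asarray(phase, dtype=float)
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)
    return savgol_filter(x, window_length=15, polyorder=3)

def feature_points(y, n_points=8, threshold=0.003):
    """Keep values where the second-order difference exceeds a (placeholder) threshold."""
    d2 = y[2:] - 2 * y[1:-1] + y[:-2]        # delta_y = y_{x+2} - 2 y_{x+1} + y_x
    idx = np.flatnonzero(np.abs(d2) > threshold)
    return y[idx[:n_points]]                  # first eight feature values

def classify(sample, templates):
    """Return the letter whose template feature set has the smallest Euclidean distance."""
    feats = feature_points(preprocess(sample))
    dists = {letter: np.linalg.norm(feats - tpl[:len(feats)])
             for letter, tpl in templates.items()}
    return min(dists, key=dists.get)

# Toy usage with invented data: a noisy sine as the "drawn letter" and two templates.
rng = np.random.default_rng(0)
raw = np.sin(np.linspace(0, 4 * np.pi, 120)) + 0.02 * rng.standard_normal(120)
templates = {"a": feature_points(preprocess(raw)),
             "u": feature_points(preprocess(-raw))}
print(classify(raw, templates))   # -> 'a'
```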
As shown in Figure 7, the similarity between two different gestures is calculated using the Euclidean formula, where dist(A, B) denotes the distance between two points in three-dimensional space. Let (X_i, Y_i) be the corresponding characteristic points of a volunteer under the same letter; taking the Euclidean distance yields

ρ = sqrt( Σ_i (X_i − Y_i)^2 ).

The smaller ρ is, the greater the correlation between the letter drawn by the volunteer and the original letter.
Threshold classification. For the classification problem, we use the threshold values derived from the difference thresholding method as our classification criteria. First, all the values derived from the recognition degree calculation are sorted; then the threshold values are compared with the sets of feature points of all letters for similarity calculation, and "n" (the number of samples for each letter) data are selected in turn by comparison to achieve gesture detection and recognition.
Implementation
Hardware: The experimental equipment used is commercially available and does not require any modification. We use the H47 UHF tag in our experiments because of its high sensitivity, strong anti-interference ability, stable reading and writing ability and cheap price, a mere 5-10 cents. The experimental parameters are shown in Table 1. The hardware consists of four parts: an Impinj R420 RFID reader (Seattle, WA, USA; shown in Figure 8) operating at 920.875 MHz, two RFID UHF circularly polarized antennas (9 dBi) (shown in Figure 9), two 4 cm × 4 cm H47 UHF tags (shown in Figure 10), and a Lenovo R7000p computer (Quarry Bay, Hong Kong).
The RFID reader is connected to the laptop via an Ethernet cable and the neural model we have designed is implemented using Python. Data: We invited eight volunteers (six males and two females) to collect a sample set of approximately 3500 samples, of which 2340 samples were used to fit out our data model and the rest were used to evaluate the model. Environment: We designed three experimental scenarios: for scenario A, as shown in Figure 11, the user drew gestures in a conference room with a length of 8.5 m and a width of 7.2 m. Only a small amount of electronic devices and metal interference existed in the conference room. For scenario B, as shown in Figure 12, the user conducts experi- Software facilities: we run our model on a Lenovo laptop equipped with a 2.5 GHz AMDR7 and 16 G memory for data acquisition and pre-processing. The RFID reader is connected to the laptop via an Ethernet cable and the neural model we have designed is implemented using Python. Data: We invited eight volunteers (six males and two females) to collect a sample set of approximately 3500 samples, of which 2340 samples were used to fit out our data model and the rest were used to evaluate the model. Environment: We designed three experimental scenarios: for scenario A, as shown in Figure 11, the user drew gestures in a conference room with a length of 8.5 m and a width of 7.2 m. Only a small amount of electronic devices and metal interference existed in the conference room. For scenario B, as shown in Figure 12, the user conducts experi- Software facilities: we run our model on a Lenovo laptop equipped with a 2.5 GHz AMDR7 and 16 G memory for data acquisition and pre-processing. The RFID reader is connected to the laptop via an Ethernet cable and the neural model we have designed is implemented using Python. Data: We invited eight volunteers (six males and two females) to collect a sample set of approximately 3500 samples, of which 2340 samples were used to fit out our data model and the rest were used to evaluate the model. Environment: We designed three experimental scenarios: for scenario A, as shown in Figure 11, the user drew gestures in a conference room with a length of 8.5 m and a width of 7.2 m. Only a small amount of electronic devices and metal interference existed in the conference room. For scenario B, as shown in Figure 12, the user conducts experiments in a classroom with a length of 10 m and a width of 7 m, where there are several groups of tables and chairs, and the material of the tables and chairs is mainly metal and wood, so there is a lot of metal interference in this experimental scene. For scenario C, as shown in Figure 13, the user conducts gesture drawing in a dormitory with a length of 8.5 m and a width of 4.5 m, and there are many electronic devices and metal brackets in the dormitory. 
Accuracy of Different English Letters
In this section, we first evaluate the accuracy of each English letter. We selected 780 samples (30 copies of each English letter) from the collected dataset for the RF-alphabet; the experimental results are shown in Figures 14 and 15. Two groups of letters, a and d, and n and h, are more similar because of the front and side drawing gestures of the letters within each group: the collected phase waveforms differ little, and the difference only lies in the size of the phase values of some feature points, so their accuracy is lower, at 83.33%. The average accuracy of the other letters reaches 94.08%, and the overall average accuracy is 92.43%.
Figure 15. Accuracy of the letters "n-z".
Accuracy under Different Users
In order to verify the accuracy of the RF-alphabet under different users, we invited five volunteers to draw A-Z in scene A, and each volunteer drew a total of 35 times (the number of times each volunteer drew each of the 26 letters differed); the experimental results are shown in Figure 16. The accuracies of volunteers 1, 3 and 4 were 91.42%, 94.28% and 91.42%. The accuracy rates of volunteers 2 and 5 were 88.57% and 85.71%, respectively, due to the presence of multiple confused letters among the 30 letters drawn by volunteers 2 and 5 (user 2 drew a three times, d two times and n two times; volunteer 5 drew a four times, h three times and n one time), but the system still provided an average accuracy rate of 90.28%.
Accuracy under Different Environments
To verify the accuracy of the RF-alphabet in different environments, we invited one volunteer to draw the letters A-Z in three environments, A, B and C. To control the variables, we asked the volunteer to draw each letter twice, for a total of 52 drawings in each environment. Figure 17 shows the accuracy of the letters drawn by this volunteer in the three different environments, with the highest accuracy of 92.3% in scene A, followed by scene B with 90.38%, and the lowest accuracy of 86.53% in scene C. This is due to the presence of a large amount of metal and electronic device interference in scene C, which interferes with the collected phase, especially for the confused letters, but the system can still provide an average accuracy of 89.7%.
Figure 17. Accuracy in different environments.
RF-Alphabet Recognition of Arabic Numerals
The recognition performance may vary for different linguistic information.
RF-Alphabet Recognition of Arabic Numerals

The recognition performance may vary for different linguistic information. To verify the portability of the RF-alphabet, we conducted a validation experiment on Arabic numerals. We invited a volunteer to draw the Arabic numerals 0-9; the volunteer drew each number 35 times in three different experimental environments, as shown in Figure 18. The average recognition rates of the user's gestures in the three environments were 94.2%, 91.4%, and 88.5%, respectively. It can be seen that the RF-alphabet can effectively recognize not only English letters but also Arabic numerals.

Comparison with the Latest Methods

Finally, we compared the RF-alphabet with the latest gesture recognition algorithms; the comparison results are shown in Table 2. The RF-alphabet achieves higher accuracy than RF-FreeGR [29] and FingerPass [25] in different environments and with different users, and it uses fewer tags than RF-FreeGR while achieving better accuracy. Although FingerPass uses WiFi instead of RFID, which reduces hardware dependency, it relies on a deep learning-based algorithm that requires more training data and is less efficient than the model-based RF-alphabet.
Discussion

The RF-alphabet quickly classifies and recognizes alphabetic gesture information with the Dtsc model. However, the RF-alphabet has some limitations in practical applications. First, the RF-alphabet achieves gesture recognition of single letters, but not of consecutive English words. The difficulty of gesture recognition for consecutive English words lies in the segmentation and detection when transitioning between different letters. In the future, we intend to use data segmentation and threshold detection techniques to split words into individual letters and splice the split letters together to achieve gesture recognition of consecutive words. Secondly, the RF-alphabet requires more gesture data from different users to accurately recognize the drawn letters. In the future, we intend to use reinforcement learning to maximize the recognition of letter gestures during interaction with different users. We also consider using generative adversarial networks (GANs) to retain information related to specific letter gestures during the adversarial learning process, improving the generalization ability of the model so that data expansion and cross-domain recognition can be achieved and the user's gestures can be recognized accurately.

Summary

The RF-alphabet proposed in this paper is an RFID-based, device-free, domain-independent gesture recognition system for the alphabet. It is capable of recognizing all letters in the alphabet. Our proposed dual-antenna, dual-tag layout is able to capture alphabet gestures in all directions, overcoming the limitations of traditional tag arrays. The captured RFID gesture data are de-noised and analyzed, and data normalization and data smoothing are performed. Moreover, this paper designs the Dtsc model to classify and identify the pre-processed data by combining the differential thresholding method and similarity calculation. Extensive experiments show that the RF-alphabet is able to recognize complex, fine-grained, domain-independent alphabetic gestures, with overall accuracy rates of 90.28% across different users and 89.7% across sites with different environments. The system we designed is significantly superior to existing solutions, and we believe that the RF-alphabet can facilitate RFID-based gesture recognition for human-computer interaction.

Conflicts of Interest: The authors declare no conflict of interest.
2023-01-17T17:04:12.703Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "792a92e6d7b21706acfb0b5de7cd7a96dbfbde36", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/23/2/920/pdf?version=1673589053", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3b039fa4a7ed0d04a52f8532639d056b8acf0772", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
225861239
pes2o/s2orc
v3-fos-license
Shoring Up the US Safety Net in the Era of Coronavirus Disease 2019

Samyukta Mullangi, MD, MBA; Janine Knudsen, MD; Dave A. Chokshi, MD, MSc

Our country's safety-net health system, including public hospitals and community health centers that are often the singular lifeline for many of the 100 million patients who have Medicaid or no insurance, is doubly hit by the coronavirus disease 2019 (COVID-19) pandemic. In just 2 months, US jobless claims have exceeded 30 million, and the coronavirus has proven to be a disproportionate burden in low-income neighborhoods and communities of color. These realities imply that the safety net must rapidly scale to absorb new patients and attend to needs directly and indirectly associated with COVID-19.

These are sobering facts, particularly when considering the financially tenuous position of most safety-net systems. However, the structure of these essential health systems may prove to be particularly well primed to rise to the challenges of COVID-19, with sufficient resources. Three features of the US safety net allow it to be resilient and prepared for this turbulent time.

First, both because of their financial capacity and their mission-driven culture, safety-net health systems are built to deliver care more aligned with modern, value-based models rather than follow traditional fee-for-service incentives. 1 Financially, safety-net systems operate with narrow to negative margins and rely heavily on capitated payments. Culturally, clinicians are more used to working with limited resources and attuned to avoiding unnecessary care and prioritizing high-value, patient-centered interventions. Consequently, safety-net systems aim to shift patient care to the least expensive site of care, compared with peer institutions, which seek high volumes and expensive procedures.

One visible example borne from this ethos is the safety-net system's early foray into using technology to bridge gaps in access to specialty care. The eReferral or eConsult systems use simple technology to asynchronously access specialist expertise, attenuating the wait for a consult from months to days. 2 These systems, pioneered at San Francisco General Hospital, proliferated even when no billing codes existed for such visits. During this pandemic, safety-net health systems, such as the Los Angeles County Department of Health Services and NYC Health + Hospitals, that adopted eConsult systems early have been able to build on such existing infrastructure with greater agility than their peers.
Second, safety-net health systems have historically shown a greater orientation to sharing innovation, rather than viewing it as a source of competitive advantage. Many health systems around the country, for example, have been focused on identifying patients with high needs and high costs, often relying on risk prediction models based on claims data. However, safety-net systems often do not have the resources for such expensive, commercial risk models. NYC Health + Hospitals' leadership discovered an added challenge: patients without insurance do not generate claims data, meaning they would be excluded from existing models. Embracing the notion that necessity is the mother of invention, they developed their own payer-agnostic predictive model-one not reliant on claims data-and made it publicly available. 3 Peer safety-net health systems, including Los Angeles County and Denver Health, have now incorporated this methodology into the development of similar models for their populations.

A similar spirit of collaboration exists across public health departments and community-based organizations. California forged countywide collaboratives through the Whole Person Care initiative (either led by county health departments or hospitals and clinics) to address medical and social needs of Medicaid beneficiaries with the greatest vulnerabilities, such as people experiencing homelessness. During the pandemic, several counties found this interagency infrastructure vital to COVID-19 response. For instance, Whole Person Care partnerships helped provide operational support for isolation hotels and stood up temporary housing for individuals who have recently been incarcerated. 4

Third, safety-net health systems' reliance on public financing to offset uncompensated care generally presents major challenges to their financial viability. Initial stages of innovation are often dependent on grant funding because of a paucity of capital available for investment. For example, analyses have found that telehealth adoption at federally qualified health centers was contingent on public and private grants, hindering long-term planning and scalability. Despite its uncertainty, public financing offers one important silver lining: funding flexibility.

Most health systems are reliant on traditional fee-for-service payments and operate with decentralized budgetary systems, impeding their ability to make global investments. In contrast, when sufficient resources are made available, safety-net health systems may be more likely to invest in population health programs that pay long-term dividends across departments and care settings.

For example, while national adoption of community health worker programs, which have repeatedly shown a positive return on investment, has been slow, the safety-net system has taken to these with greater enthusiasm. 5 The Indian Health Service has relied on the community health worker model for decades to reach patients in marginalized groups and connect them with care. 6 These programs are increasingly viewed as a viable structure on which to build contact-tracing initiatives necessary to suppress transmission of COVID-19 and ensure a safe reopening of the US.
The COVID-19 pandemic has been a devastating crisis and unmasked the fragility of the US health care system in several ways. While safety-net health systems are naturally well positioned to contribute to this response, they require greater support. In the near term, the health care safety net would benefit from a fairer allocation of stimulus dollars from disaster relief funds designated for hospitals. The formulae used by the Trump administration to determine relief money allocations from the Coronavirus Aid, Relief, and Economic Security (CARES) Act left safety-net hospitals shorthanded, favoring hospitals with greater private insurance revenue. Thankfully, this imbalance is being addressed in the latest tranches of funding, with a recent announcement from the US Department of Health and Human Services about an additional $15 billion in relief monies earmarked for eligible Medicaid physicians, dentists, behavioral health providers, and assisted living facilities and home services providers, and a further $10 billion for safety-net hospitals.

In the longer term, safety-net health systems could be at the vanguard of a broader trajectory of transformation that prioritizes population health over rent-seeking behavior in health care. This pandemic has upended traditional health-system strategies that rely on maximizing patient volume and prioritizing in-person care. Going forward, systems could be oriented around flexible budgets that allow for multiple modalities of service delivery and further extending care into patients' neighborhoods and homes. Braiding and blending public funds could further cinch collaboration across sectors, particularly with public health departments and community-based organizations, around the common goal of improving the health of US residents with low incomes.

Public financing helps ensure that safety-net health systems serve the patients with the greatest vulnerabilities. Yet the COVID-19 era has shown how this mission has spillover benefits for the rest of society. Uncontrolled outbreaks in vulnerable neighborhoods can easily spread beyond them. The essential workers underpinning the economy rely on clinics and hospitals that accept Medicaid or serve patients with no insurance. Safety-net health systems provide financial protection not only to such patients but also to neighboring hospitals that would otherwise take on their uncompensated care burden. For all of these reasons, shoring up our safety-net system should be seen less as charity and more as an investment in health for all.
2020-06-26T13:06:05.066Z
2020-06-01T00:00:00.000
{ "year": 2020, "sha1": "7e858e0c462518d4a5c6df2a3a22205ed43648c2", "oa_license": "CCBY", "oa_url": "https://jamanetwork.com/journals/jama-health-forum/articlepdf/2767380/mullangi_2020_is_200066_1617992165.50369.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "d64671662b060b59a5f956e1aa9b1b0f837aebd4", "s2fieldsofstudy": [ "Medicine", "Political Science", "Economics" ], "extfieldsofstudy": [ "Political Science" ] }
264784908
pes2o/s2orc
v3-fos-license
A High-Performance InGaAs Vertical Electron–Hole Bilayer Tunnel Field Effect Transistor with P+-Pocket and InAlAs-Block To give consideration to both chip density and device performance, an In0.53Ga0.47As vertical electron–hole bilayer tunnel field effect transistor (EHBTFET) with a P+-pocket and an In0.52Al0.48As-block (VPB-EHBTFET) is introduced and systematically studied by TCAD simulation. The introduction of the P+-pocket can reduce the line tunneling distance, thereby enhancing the on-state current. This can also effectively address the challenge of forming a hole inversion layer in an undoped InGaAs channel during device fabrication. Moreover, the point tunneling can be significantly suppressed by the In0.52Al0.48As-block, resulting in a substantial decrease in the off-state current. By optimizing the width and doping concentration of the P+-pocket as well as the length and width of the In0.52Al0.48As-block, VPB-EHBTFET can obtain an off-state current of 1.83 × 10−19 A/μm, on-state current of 1.04 × 10−4 A/μm, and an average subthreshold swing of 5.5 mV/dec. Compared with traditional InGaAs vertical EHBTFET, the proposed VPB-EHBTFET has a three orders of magnitude decrease in the off-state current, about six times increase in the on-state current, 81.8% reduction in the average subthreshold swing, and stronger inhibitory ability on the drain-induced barrier-lowering effect (7.5 mV/V); these benefits enhance the practical application of EHBTFETs. Introduction Due to the emergence of the cloud, big data and real-time data transmission have become the main trends in the development of information technology, and they require integrated circuits to have ultra-low power dissipation.However, as the core of current integrated circuits, MOSFETs suffer from the increasing static power consumption with the decrease in feature size, inhibiting the development of integrated circuits.The decrease of subthreshold swing (SS) is an effective approach to deal with this issue.Limited by the injection mechanism of thermal emission, the SS of MOSFETs cannot be lower than 60 mV/dec; thus, there is an urgent need to develop steep SS (<60 mV/dec) devices to satisfy the cloud applications. 
The tunnel field effect transistor (TFET) [1][2][3], as a steep SS device, has received widespread attention because of its CMOS process compatibility and low standby power consumption.However, the on-state current (I on ) of TFETs is too low for reasonable performance, which makes most research focus on overcoming this drawback.Therefore, TFETs with new structures or operation mechanisms have appeared in large numbers, such as two-dimensional material TFETs [4][5][6], negative capacitance TFETs [7][8][9], heterojunction TFETs [10][11][12], nanowire TFETs [13][14][15], line tunneling (L-tunneling) TFETs [16][17][18][19][20][21][22][23][24][25][26][27][28][29], etc.Comprehensive analysis shows that expanding the tunneling area based on the L-tunneling mechanism is a very effective approach to improving I on .The electron-hole bilayer TFET (EHBTFET) [19][20][21][22][23][24][25][26][27][28][29] is a new type of L-tunneling TFET that was first proposed by Lattanzio [19] and has been developed in recent years because of its novel tunneling mechanism.Different from the L-tunneling TFETs with the L/U/T-type gate structure [16][17][18] that create the L-tunneling by overlapping the gate and heavily doped source region, EHBTFETs can generate the L-tunneling perpendicular to the channel in the electron-hole bilayer formed by the gate engineering or bias-induced method.So far, research has mainly focused on the transverse EHBTFETs, from which it has been found that the vertical L-tunneling not only boosts I on but also leads to high off-state current (I off ).To solve this issue, many methods, such as counterdoping in the gate underlap region [23], partial light doping in the source and drain regions [24], heterogate structure [25], and implantation of a dielectric barrier layer in the gate underlap region [26] have been adopted, and I off has been suppressed to a certain extent.However, it should be noted that the I on of the transverse EHBTFETs is directly proportional to the gate overlap area, which means that the increase of I on is at the expense of chip density.To simultaneously improve device performance and chip density, new EHBTFETs need to be investigated. 
Vertical EHBTFETs [27][28][29] can improve I on by expanding the tunneling area in the vertical direction without sacrificing the chip density, which attracts researchers' attention.To further improve I on , III-V materials are usually adopted in vertical EHBTFETs due to their smaller electron effective mass and band gap (E g ) [30], but which can also exacerbate the deterioration of I off .Our previous research [29] demonstrates that the impact of the point tunneling (P-tunneling) between the gate underlap region and the drain on the I off is significant, which cannot be effectively attenuated through the approaches used in transverse EHBTFETs.Although our proposed DGNP-EHBTFET in Reference [29] can solve this problem well, I on can only be maintained without degradation and cannot be improved.To obtain better off-and on-state device performance, an improved vertical EHBTFET is proposed in this paper, namely, an In 0.53 Ga 0.47 As EHBTFET with a P + -pocket and an In 0.52 Al 0.48 As-block (VPB-EHBTFET).Since the line tunneling of EHBTFETs depends on the concentration of the electron-hole bilayer in the channel, the P + -pocket can be used to generate the hole layer in real operation.This is due to that for high-K/InGaAs interface, a large number of interface states near the valence band in InGaAs will inhibit the formation of the hole layer in the InGaAs channel without any acceptor doping.Although some methods, such as the use of Al 2 O 3 /HfO 2 bilayer gate oxide [31], the postdeposition annealing process [32], and the gate-last process [33], can effectively reduce the interface trap density for the HfO 2 /InGaAs interface, P-type doping performed in the right-side channel may be the best method for creating the hole layer in real operation.More importantly, the P + -pocket can reduce the tunneling distance of the L-tunneling, which enhances the electron tunneling and achieves the goal of improving the on-state current.In 0.52 Al 0.48 As possesses greater E g and carrier effective mass and matches the lattice of In 0.53 Ga 0.47 As.Due to its excellent material properties, placing In 0.52 Al 0.48 As in the gate underlap region near the drain can inhibit the point tunneling in this region.Moreover, it can also effectively avoid potential performance degradation caused by the lattice mismatch in the device fabrication.Heretofore, since investigations on suppressing off-state leakage and improving on-state performance for vertical EHBTFETs are relatively few, revealing the physical mechanism of the proposed VPB-EHBTFET is necessary, thereby providing theoretical guidance for device manufacturing. The content of this paper focuses on the following aspects.Device structures and corresponding parameters, key manufacturing processes, and physical models for simulations are introduced in Section 2. Section 3 examines in detail the performance comparison between conventional and improved EHBTFETs and the effects of P + -pocket and InAlAsblock on the proposed VPB-EHBTFET.Finally, a concise summary of the current studies is provided in Section 4. 
Device Structures and Simulation Methods

For comparison, we bring in two other EHBTFETs, namely, (1) a traditional vertical EHBTFET (V-EHBTFET) and (2) a vertical EHBTFET with a P + -pocket (VP-EHBTFET). Cross-sectional views and the corresponding device parameters of the three EHBTFETs are given in Figure 1 and Table 1, respectively. The three EHBTFETs differ only in the channel. The bulk material of all three is In 0.53 Ga 0.47 As, but there is an In 0.52 Al 0.48 As-block in the channel of VPB-EHBTFET that the other two devices lack. Moreover, only the channel near the right gate (RG) of VP-EHBTFET and VPB-EHBTFET possesses the P + -pocket, with a doping concentration of 6 × 10 19 cm −3 . For these three EHBTFETs, since the L-tunneling occurs in the electron-hole bilayer of the gate overlap region (GO region) (see the yellow arrows in Figure 1c), an appropriate carrier concentration is required in this region. According to the charge plasma concept [34], chromium (work function = 4.5 eV) is employed as the left gate (LG) to induce a two-dimensional electron gas layer, and gold (work function = 5.3 eV) is adopted as the RG to create a two-dimensional hole gas layer [35]. To induce a uniform carrier distribution, the width of the bulk material is set to 10 nm. Furthermore, some key material parameters used in the simulations are taken from reference [10].

The proposed VPB-EHBTFET can be fabricated with the most advanced process technology currently available: the In 0.53 Ga 0.47 As epitaxial layer with the In 0.52 Al 0.48 As-block can be grown vertically on an InP substrate by molecular beam epitaxy, and the P + -pocket is created by ion implantation. Inductively coupled plasma etching is employed to etch the dielectric and epitaxial layer materials, and atomic layer deposition can be adopted to deposit the dielectrics and metal electrodes. The key process step is the fabrication of the P + -pocket and the In 0.52 Al 0.48 As-block, which
is closely related to the accurate control of the doping depth and concentration in the ion implantation, as well as the precise design of the mask pattern.

All device simulations are performed using the Silvaco Atlas 2-D numerical simulation platform. In the simulations, the density gradient model is included to take the quantum confinement effect into account. Additionally, the models included in references [10,29] are considered, such as the non-local band-to-band tunneling (BTBT) model, the Lombardi mobility model, and the strained two-band zincblende model. To compare the devices under the same conditions, the influence of traps is not considered in the simulations.

Performance Comparison between V-EHBTFET, VP-EHBTFET, and VPB-EHBTFET

Referring to our previous research on EHBTFETs [29], it is known that there are two tunneling mechanisms in this kind of device, namely, the P-tunneling and the L-tunneling, parallel and perpendicular to the channel, respectively. The P-tunneling basically occurs in the gate underlap region near the source or drain (named the GUS region or GUD region, respectively), while the L-tunneling occurs in the GO region. Both tunneling mechanisms are closely related to the tunneling probability P tun (Equation (1) of [29]), whose two key factors, the tunneling distance (λ) and the energy range used for tunneling (∆ϕ), can be extracted from the energy band. Therefore, to interpret the tunneling mechanism in detail, the energy bands in the off-state [gate voltage (V gs ) = 0 V, drain voltage (V ds ) = 0.5 V] for the three EHBTFETs are calculated along A-A', B-B', and C-C' (gray dotted lines in Figure 1b) and plotted in Figure 2a-c, respectively.

As shown in Figure 2a, although a ∆ϕ appears in the GUD region, the P-tunneling in the left-side channel of the three EHBTFETs is suppressed due to the very long λ. It is observed from Figure 2b that the P-tunneling in the right-side channel also occurs in the GUD region. Since the introduction of a P + -pocket can enlarge ∆ϕ and decrease λ, the P-tunneling in the off-state for VP-EHBTFET is stronger than that for V-EHBTFET, according to Equation (1). Although VPB-EHBTFET also possesses the P + -pocket like VP-EHBTFET, its P-tunneling can be better suppressed compared with the other two EHBTFETs. This is because there is an In 0.52 Al 0.48 As-block with a wider E g in the GUD region of VPB-EHBTFET, which significantly degrades P tun . Figure 2c shows the energy band profiles in the GO region, from which it is found that the L-tunneling does not take place in the off-state because no ∆ϕ exists. Based on the analyses in Figure 2a-c, it is concluded that, in the off-state, the P-tunneling in the right-side channel is dominant, and the proposed VPB-EHBTFET has the weakest tunneling capacity. I off is an important parameter for assessing the off-state performance of devices and can be extracted from the transfer characteristic curve (i.e., the I ds -V gs curve). To verify the above mechanism analysis, the I ds -V gs curves of the three EHBTFETs are computed and shown in Figure 2d. As observed from the figure, the I off of VPB-EHBTFET is as low as 1.83 × 10 −19 A/µm. Compared with the I off of V-EHBTFET and VP-EHBTFET (2.29 × 10 −16 A/µm and 8.37 × 10 −11 A/µm, respectively), that of VPB-EHBTFET is reduced by approximately three and eight orders of magnitude, respectively. It follows that VPB-EHBTFET has the best off-state performance, which is consistent with the tunneling mechanism analyzed above.
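The qualitative dependence on λ, ∆ϕ, and E g invoked above and in the following subsections can be summarized with the WKB-type expression for the band-to-band tunneling probability that is standard in the TFET literature. The form below is that generic expression; it is only assumed to correspond to Equation (1) of [29], and m*, q, and ℏ denote the effective tunneling mass, the elementary charge, and the reduced Planck constant.

```latex
% Standard WKB-type band-to-band tunneling probability (assumed generic form,
% not copied from Equation (1) of [29]): P_tun grows as \lambda shrinks or
% \Delta\phi widens, and falls for a larger band gap E_g.
P_{\mathrm{tun}} \;\approx\; \exp\!\left(-\,\frac{4\,\lambda\,\sqrt{2m^{*}}\,E_{g}^{3/2}}
                                               {3\,q\,\hbar\,\left(E_{g}+\Delta\phi\right)}\right)
```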
I on , a key performance indicator in the on-state (V gs = 1 V and V ds = 0.5 V in the simulations), can also be extracted from Figure 2d. The I on of VPB-EHBTFET approaches 1.04 × 10 −4 A/µm, which is basically the same as that of VP-EHBTFET but about 5.7 times higher than that of V-EHBTFET (1.84 × 10 −5 A/µm). Here, we again explain the differences from the perspective of the energy band. The energy band profiles in the on-state for the three EHBTFETs are extracted and plotted in Figure 3.
From Figure 3a,b, the P-tunneling in the left-side channel occurs in the GUS region, which is different from the off-state case, but that in the right-side channel still occurs in the GUD region. Based on the previous analysis, the effect of the P + -pocket and the In 0.52 Al 0.48 As-block on the P-tunneling in the on-state is the same as in the off-state. Combining the P-tunneling in the left- and right-side channels, it can be inferred that the adoption of the P + -pocket is conducive to the enhancement of the P-tunneling in the on-state. Furthermore, it is observed from Figure 3c that the energy bands in the GO region of VP-EHBTFET and VPB-EHBTFET bend upward due to the existence of the P + -pocket, which reduces the λ of the L-tunneling and eventually enhances their L-tunneling. Since the L-tunneling occurs in the whole GO region (50 nm in length), while the P-tunneling region only extends over a few nanometers near the gate, and the λ of the L-tunneling is much smaller than that of the P-tunneling, the L-tunneling dominates in the on-state. As a result, the tunneling ability of VP-EHBTFET and VPB-EHBTFET is basically the same in the on-state and stronger than that of V-EHBTFET, eventually resulting in better on-state performance in VP-EHBTFET and VPB-EHBTFET.

To better understand the tunneling mechanism of the devices, it can be further investigated from the point of view of the non-local electron BTBT (e-BTBT) rate. According to the results of the energy band analysis, only the off-state e-BTBT rate in the right-side channel and the on-state one in the GO region of V-EHBTFET, VP-EHBTFET, and VPB-EHBTFET are extracted and displayed in Figure 4a,b, respectively. It is found from Figure 4a that the peak values of the off-state e-BTBT rate for the three EHBTFETs occur in the GUD region and show the following trend: VP-EHBTFET >> V-EHBTFET >> VPB-EHBTFET, which directly reflects that the enhancement of the off-state P-tunneling by the P + -pocket can be suppressed significantly by the In 0.52 Al 0.48 As-block. The smaller the off-state e-BTBT rate of VPB-EHBTFET, the better its off-state performance. Figure 4b shows that the peak value of the on-state e-BTBT rate in the GO region of VP-EHBTFET and VPB-EHBTFET is the same, but it is one order of magnitude higher than that of V-EHBTFET. Thus, it is confirmed again that the P + -pocket helps improve the on-state L-tunneling, which gives the proposed VPB-EHBTFET on-state performance as good as that of VP-EHBTFET.
Furthermore, other important performance parameters, such as I on /I off , the threshold voltage (V th ), the average subthreshold swing (SS avg ), the point subthreshold swing (point SS), and the drain-induced barrier lowering (DIBL), are calculated based on the I ds -V gs curves. Due to the high I on and the minimum I off of the proposed VPB-EHBTFET, it obtains the maximum I on /I off of 5.7 × 10 14 . V th usually refers to the V gs corresponding to the midpoint of the transition zone, where the drain current (I ds ) changes sharply with V gs in the I ds -V gs curve. Referring to previous publications [2,10], the V gs corresponding to I ds = 1 × 10 −7 A/µm is taken as V th in this paper. As shown in Figure 2d, the V th of the proposed VPB-EHBTFET is as low as 0.06 V, which is the same as that of VP-EHBTFET. Moreover, the V th values of VPB-EHBTFET and VP-EHBTFET are lower than that of V-EHBTFET (0.26 V). This is because the introduction of the P + -pocket reduces the λ of the L-tunneling, allowing VP- and VPB-EHBTFETs to be turned on at a lower V gs . SS avg is expressed as SS avg = (V th − V off )/(log I Vth − log I Voff ), where V off is the V gs at which I ds begins to increase. In view of the minimum I off (i.e., I Voff ) caused by the In 0.52 Al 0.48 As-block, an SS avg of 5.5 mV/dec can be obtained in VPB-EHBTFET, which is reduced by 81.8% and 75.1% compared with that of V-EHBTFET and VP-EHBTFET (30.2 mV/dec and 22.1 mV/dec), respectively. Figure 5a shows the point SS values of the three EHBTFETs, where the point SS is calculated as dV gs /d(log I ds ). It is found that VPB-EHBTFET possesses a steeper point SS at each I ds ; in particular, when I ds < 10 −8 A/µm, the point SS is around 1 mV/dec and remains basically unchanged, guaranteeing the steepest SS avg in VPB-EHBTFET. DIBL can be used to characterize the shift of V th in devices and is usually defined as ∆V th /∆V ds . To obtain the DIBL values of the three EHBTFETs, the I ds -V gs curves are calculated at V ds = 0.1 V and 0.5 V, respectively, and plotted in Figure 5b. Since the V th values of the proposed VPB-EHBTFET and VP-EHBTFET change negligibly under different V ds values, a low DIBL value of 7.5 mV/V can be achieved in both EHBTFETs. This is because the existence of the P + -pocket enhances the built-in electric field in the GO region, thereby reducing the influence of the applied electric field on the L-tunneling. However, the DIBL value cannot be calculated for V-EHBTFET because this device is still turned off at V ds = 0.1 V. Thus, it can be seen that the adoption of the P + -pocket can effectively suppress the DIBL effect. For a clear comparison, the parameters discussed above for the three EHBTFETs are summarized in Table 2.
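As a concrete illustration of how these figures of merit are obtained from a transfer characteristic, the following minimal Python sketch applies the constant-current V th definition (I ds = 1 × 10 −7 A/µm), the SS avg and point-SS formulas, and the DIBL definition quoted above. The function names, the synthetic I ds -V gs curve, and the two V th values used for the DIBL arithmetic are illustrative placeholders, not the simulation results of this work.

```python
# Minimal sketch: extracting Vth, SSavg, point SS, and DIBL from an Ids-Vgs sweep.
# The transfer curve below is synthetic and the DIBL inputs are hypothetical.
import numpy as np

def vth_constant_current(vgs, ids, i_crit=1e-7):
    """Vgs at which Ids reaches i_crit (constant-current method)."""
    return float(np.interp(np.log10(i_crit), np.log10(ids), vgs))

def ss_avg(vgs, ids, i_crit=1e-7):
    """Average SS in mV/dec between the first (off) bias point and the constant-current Vth."""
    v_th = vth_constant_current(vgs, ids, i_crit)
    v_off, i_off = float(vgs[0]), float(ids[0])  # simplification: treat the first bias as Voff
    return 1e3 * (v_th - v_off) / (np.log10(i_crit) - np.log10(i_off))

def point_ss(vgs, ids):
    """Point SS in mV/dec, i.e. dVgs/d(log10 Ids) along the sweep."""
    return 1e3 * np.gradient(vgs) / np.gradient(np.log10(ids))

# Synthetic, idealised transfer curve used only to exercise the functions.
vgs = np.linspace(0.0, 1.0, 201)
ids = 1e-4 / (1.0 + np.exp(-(vgs - 0.5) / 0.02))

print("Vth          =", round(vth_constant_current(vgs, ids), 3), "V")
print("SSavg        =", round(ss_avg(vgs, ids), 1), "mV/dec")
print("min point SS =", round(float(point_ss(vgs, ids).min()), 1), "mV/dec")

# DIBL = dVth/dVds; the two Vth values are hypothetical placeholders standing in
# for curves simulated at Vds = 0.1 V and Vds = 0.5 V.
vth_at_vds_0p1, vth_at_vds_0p5 = 0.063, 0.060
print("DIBL         =", round(1e3 * (vth_at_vds_0p1 - vth_at_vds_0p5) / (0.5 - 0.1), 1), "mV/V")
```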
Effect of P + -Pocket on VPB-EHBTFET

Here, the doping concentration and doping width (C P and W P ) of the P + -pocket in the proposed VPB-EHBTFET are investigated to optimize the device performance. Figure 6a shows the I on and I off values under different C P values. It is found from the figure that I off is very low and remains of the same order of magnitude when C P < 8 × 10 19 cm −3 , but it increases sharply with a further increase in C P . I on increases with C P . Both of these trends can be explained by the energy band profiles. Based on the conclusion of the performance comparison between the conventional and improved EHBTFETs, it is known that I off and I on depend on the off-state P-tunneling in the GUD region of the right-side channel and the on-state L-tunneling in the GO region, respectively. Therefore, the off-state energy bands of the P-tunneling in the right-side channel under different C P values are calculated first, but the results demonstrate that the effect of the change of C P on the P-tunneling is negligible. This is because the existence of the In 0.52 Al 0.48 As-block makes the λ of the P-tunneling essentially identical (about 50 nm) under different C P values, thereby preventing the tunneling of electrons in this region. Then, the off-state energy bands of the L-tunneling in the GO region under different C P values are also calculated and plotted in Figure 6b. With the increase in C P , the energy band gradually bends upward, and a ∆ϕ is generated when C P > 6 × 10 19 cm −3 . The existence of ∆ϕ provides a condition for the off-state L-tunneling, which results in a rapid increase in I off . Moreover, to interpret the change trend of I on , the on-state energy bands of the L-tunneling in the GO region are examined and shown in Figure 6c. With the increase in C P , λ decreases and ∆ϕ increases, so that I on takes on a monotonically increasing trend according to Equation (1). Figure 6d shows the SS avg and I on /I off of VPB-EHBTFET with different C P values. With the increase in C P , SS avg decreases slowly first and then increases rapidly, while I on /I off shows the opposite trend. When C P = 6 × 10 19 cm −3 , SS avg and I on /I off approach their minimum and maximum values, respectively. By compromising among the performance parameters, 6 × 10 19 cm −3 is found to be the optimal C P .
Further, the effect of the doping width W P on the device performance is investigated. Figure 7a shows the I off and I on values under different W P values. With the increase in W P , I off increases first, then decreases, and finally increases again. To interpret this trend, we extract the e-BTBT rates that reflect the tunneling ability of the P-tunneling and L-tunneling in the off-state and plot them in Figure 7b. It is observed from the figure that the off-state e-BTBT rates of the P-tunneling and L-tunneling are very low when W P ≤ 1 nm, which indicates that both kinds of tunneling are suppressed in this condition, so a very small I off can be obtained. When 1 nm < W P < 5 nm, the off-state L-tunneling is turned on and dominates, which makes I off increase sharply. This is due to the fact that the λ of the L-tunneling reduces with the increase in W P . With a further increase in W P , the decrease in electrons on the left side of the GO region lifts the energy band in this region; thus, the ∆ϕ of the L-tunneling starts to reduce at W P = 4 nm and disappears at W P = 5 nm, thereby resulting in a decrease in I off . When W P ≥ 5 nm, the L-tunneling is turned off and the P-tunneling is dominant, leading to I off increasing with W P again. This is because, with the increase in W P , the P-tunneling junction in the left-side channel becomes steeper, which increases the e-BTBT rate of the P-tunneling. I on depends on the on-state
L-tunneling in the GO region. Due to the mutual constraint between λ and ∆ϕ, the e-BTBT rate of the L-tunneling exhibits a trend of increasing first and then decreasing with the increase in W P , which results in the same change trend in I on (see Figure 7a). As shown in Figure 7c, the optimal SS avg and I on /I off can be obtained at W P = 5 nm. Comprehensive analysis shows that the optimal W P is 5 nm.
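The parameter optimizations in this and the following subsections follow the same pattern: sweep a design parameter, read off I on and I off (and SS avg ) for each value, and pick the best compromise. A minimal sketch of that post-processing is shown below; the numbers in it are placeholders for illustration, not the simulated values reported in this work.

```python
# Minimal sketch of the design-sweep post-processing described above.
# The (Ion, Ioff) pairs are placeholder values, not the reported simulation results.
candidates = {
    # WP [nm]: (Ion [A/um], Ioff [A/um])
    1: (6.0e-5, 3.0e-19),
    3: (9.0e-5, 5.0e-15),
    5: (1.0e-4, 1.8e-19),
    7: (8.0e-5, 4.0e-18),
}

ratios = {wp: ion / ioff for wp, (ion, ioff) in candidates.items()}
best_wp = max(ratios, key=ratios.get)

for wp, r in sorted(ratios.items()):
    print(f"WP = {wp} nm: Ion/Ioff = {r:.2e}")
print(f"best WP by Ion/Ioff: {best_wp} nm")
```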
Effect of InAlAs-Block on VPB-EHBTFET

Here, the width and length (W B and L B ) of the In 0.52 Al 0.48 As-block in the proposed VPB-EHBTFET are examined to optimize the device performance. Figure 8a shows the I off and I on values under different W B values. I off gradually decreases with the increase in W B but basically remains stable when W B > 7 nm, which can be explained by the contour plots of the non-local e-BTBT rate shown in Figure 8b. It is found that both the distribution range and the magnitude of the non-local e-BTBT rate reduce with the increase in W B , which demonstrates that the increasing W B helps to inhibit the P-tunneling on the right side of the GUD region caused by the P + -pocket, thereby lowering I off . When W B = 7 nm, the non-local e-BTBT rate falls sharply due to complete suppression of this type of P-tunneling, resulting in a reduction of I off by several orders of magnitude. As W B goes beyond 7 nm, the In 0.52 Al 0.48 As-block begins to suppress the P-tunneling on the left side of the GUD region, further reducing the non-local e-BTBT rate. Since the λ of the P-tunneling in the left-side channel is very long, the electron tunneling in this region is insignificant. As a result, I off is not sensitive to the change in W B . Moreover, Figure 8a shows that only when W B > 7 nm does I on begin to decrease. This is because, although the In 0.52 Al 0.48 As-block does not affect the L-tunneling in the GO region, when W B > 7 nm it prevents the tunneling electrons in the left-side channel from drifting to the drain region. As shown in Figure 8c, with the increase in W B , SS avg decreases and I on /I off increases, but both trends reverse when W B > 7 nm. Thus, the best choice of W B is 7 nm.
Next, the influence of L B on the device performance is investigated. It should be noted that all results are obtained with the channel length unchanged. Figure 9a shows the I off and I on values under different L B values. I off decreases first with the increase in L B but remains basically stable when L B > 25 nm. Since the off-state P-tunneling in the right-side channel is affected by the In 0.52 Al 0.48 As-block, its energy band profiles can be calculated to interpret the trend of I off . It is observed from Figure 9b that there are two kinds of P-tunneling between the P + -pocket and the GUD region: (1) electrons tunnel from the valence band of the P + -pocket into the conduction band of In 0.53 Ga 0.47 As (CBS) (named PCT-tunneling), and (2) electrons tunnel from the valence band of the P + -pocket into the conduction band of In 0.52 Al 0.48 As (CBB) (named PBT-tunneling). Due to the wider E g of In 0.52 Al 0.48 As and the longer λ in the tunneling junction, the effect of the PBT-tunneling on I off is negligible. When L B = 0 nm, only the PCT-tunneling exists in the tunneling junction, and its ∆ϕ is the widest; thus, the maximum I off is generated. With the increase in L B , the ∆ϕ of the PCT-tunneling and the PBT-tunneling (i.e., ∆ϕ 1 and ∆ϕ 2 in Figure 9b) decreases and increases, respectively, and disappears and saturates, respectively, when L B ≥ 25 nm. It follows that the PCT-tunneling is gradually suppressed with the increase in L B and is eventually completely replaced by the PBT-tunneling when L B ≥ 25 nm, thereby resulting in the change trend of I off shown in Figure 9a. Furthermore, Figure 9a shows that I on is immune to the change in L B , because the L-tunneling, which is dominant in the on-state, is not affected by L B .
Figure 9c shows that SS avg and I on /I off exhibit the same and the opposite change trend as I off , respectively. This is because both parameters mainly depend on I off , which is closely related to L B . This analysis demonstrates that good device performance can be achieved when L B is not less than 25 nm.
To verify the benefit of introducing the In 0.52 Al 0.48 As-block, the device performance of the proposed VPB-EHBTFET under lattice mismatch was investigated; the corresponding simulation results are shown in Figure 10. In the simulations, the composition of InGaAs is unchanged, while the Al composition (i.e., x) of the In 1−x Al x As-block changes from 0.2 to 0.8. Considering the strain caused by the lattice mismatch, a strained two-band zincblende model is included in the simulations. As shown in Figure 10a, I on is insensitive to x, but I off decreases first with the increase in x and then remains essentially unchanged when x ≥ 0.48, both of which can be interpreted by the tunneling mechanism. Based on the previous analyses, the In 1−x Al x As-block located in the right-side GUD region mainly controls the off-state P-tunneling. With the increase in x, the E g of In 1−x Al x As increases. According to Equation (1), the increase in E g helps to reduce P tun , eventually reducing I off . When x ≥ 0.48, electrons can hardly tunnel into the drain across the In 1−x Al x As-block due to the long λ caused by the wide E g , so I off remains essentially stable. It is found from Figure 10b that when x < 0.48, I on /I off increases and SS avg decreases with the increase in x; as x increases further, both parameters tend to saturate. This analysis shows that good device performance of the VPB-EHBTFET can be achieved when x ≥ 0.48. However, epitaxial layers with mismatched lattices are prone to defects during growth, which can affect the performance and lifespan of devices. Therefore, In 0.52 Al 0.48 As with x = 0.48 is the optimal choice because it is lattice-matched to In 0.53 Ga 0.47 As.
Conclusions
To sum up, an In 0.53 Ga 0.47 As vertical EHBTFET with a P + -pocket and an In 0.52 Al 0.48 As-block (VPB-EHBTFET) is introduced and investigated by TCAD simulation. Numerical simulations indicate that the adoption of a P + -pocket and an In 0.52 Al 0.48 As-block allows the VPB-EHBTFET to simultaneously achieve good on-state and off-state performance. The corresponding indicator parameters are an I on of 1.04 × 10 −4 A/µm, an I off of 1.83 × 10 −19 A/µm, an I on /I off of 5.7 × 10 14 , an SS avg of 5.5 mV/dec, and a DIBL of 7.5 mV/V. Examining the influence of the P + -pocket on the VPB-EHBTFET shows that C P controls the device performance by affecting the L-tunneling, while W P can simultaneously regulate the λ and ∆ϕ of the point tunneling and line tunneling to obtain the optimal on-state performance. Considering the changes in W B and L B , it is concluded that both mainly control I off through the suppression of the P-tunneling in the right-side channel. Moreover, after investigating the effect of the Al composition of the In 1−x Al x As-block on the device performance and considering the potential performance degradation caused by lattice mismatch during device manufacturing, the In 0.52 Al 0.48 As-block that matches the lattice of In 0.53 Ga 0.47 As is the optimal choice.
Figure 2. Off-state energy band profiles for EHBTFETs along (a) A-A', (b) B-B', and (c) C-C', respectively; (d) I ds -V gs curves of EHBTFETs.
Figure 3. On-state energy bands for EHBTFETs along (a) A-A', (b) B-B', and (c) C-C', respectively.
Figure 4. Nonlocal e-BTBT rates for EHBTFETs along (a) B-B' in the off-state and (b) C-C' in the on-state.
Figure 5. (a) The correlation between point SS and drain current of EHBTFETs; and (b) I ds -V gs curves of EHBTFETs at V ds = 0.1 V and V ds = 0.5 V, respectively.
Figure 6. (a) The variation of I on and I off with C P for VPB-EHBTFET; (b) off-state and (c) on-state energy band profiles along C-C' for VPB-EHBTFET; and (d) the variation of I on /I off and SS avg with C P for VPB-EHBTFET.
Figure 7. (a) The variation of I on and I off with W P for VPB-EHBTFET; (b) the peak value of the nonlocal e-BTBT rate under different W P s for the P-tunneling in the left-side channel and the L-tunneling in the GO region, in the off-state; and (c) the variation of I on /I off and SS avg with W P for VPB-EHBTFET.
Figure 8. (a) The variation of I on and I off with W B for VPB-EHBTFET; (b) the contour plots of the nonlocal e-BTBT rate under different W B s for VPB-EHBTFET in the off-state; and (c) the variation of I on /I off and SS avg with W B for VPB-EHBTFET.
Figure 9. (a) The variation of I on and I off with L B for VPB-EHBTFET; (b) energy band profiles of VPB-EHBTFET under different L B s in the off-state; and (c) the variation of I on /I off and SS avg with L B for VPB-EHBTFET.
Figure 10. (a) I on and I off ; (b) I on /I off and SS avg , for VPB-EHBTFET under different Al compositions.
Table 1. Device structure parameters used in simulations.
Composite Liquid Biofuels for Power Plants and Engines: Review
The problems of environmental pollution caused by the operation of power plants and engines motivate researchers to develop new biofuels. The environmental aspect of composite biofuels appears to have great potential because of the carbon neutrality of plant raw materials. This study analyzes recent advances in the production of biofuels and their application. The research findings on the properties of promising plant raw materials and their derivatives have been systematized. The most important stages (spraying, ignition, and combustion) of using biofuels and mixtures based on them in internal combustion engines have been analyzed. A separate section reviews the findings on the environmental aspect of using new fuel compositions. Most studies show great prospects for involving bio-components in the development of composite fuels. The real issue is to adapt existing engines and plants to non-conventional fuel mixtures. Another big problem is the increased viscosity and density of biofuels and oils, as well as the ambiguous effect of additives on burnout completeness and emissions. The impact of the new kinds of fuels on the condition of components and parts of engines, corrosion, and wear remains understudied. The interrelation of industrial process stages (from feedstock to an engine and a plant) has not been closely examined for composite liquid fuels. It is important to organize the available data and develop unified and adaptive technologies. Within the framework of this review, scientific approaches to solving the above problems were considered and systematized.
Introduction
There is currently a steady trend towards increased consumption of liquid hydrocarbon fuels in internal combustion engines of cars and propulsion systems of air transport [1]. The most widely used types of liquid hydrocarbons in the fuel sector are kerosene, Diesel fuel, and gasoline. Kerosene is primarily used as a jet fuel for planes and rockets (aviation kerosene) and as an additive (up to 20%) to Diesel fuel at low operating temperatures to prevent freezing without significantly worsening performance characteristics [2]. Aviation kerosene in aircraft is not only a motor fuel in turboprops and turbojets but also serves as a refrigerant in heat exchangers (fuel-to-air heat exchangers) and as a lubricant in fuel and engine systems. Similar to kerosene, Diesel fuel also has a wide range of applications [3][4][5]. It is used as a motor fuel for different kinds of land and water transport, as well as for special-purpose machines. Rail transport (diesel locomotives and diesel multiple units), maritime transport (mainly motor ships), road transport (buses, trucks, and amphibious all-terrain vehicles), and different special-purpose machines (tractors, combines, etc.) rely heavily on Diesel fuel. Apart from its application in transport, Diesel fuel is also used in electric generators. Electric motor units can provide an alternative to internal combustion engines, yet they have their limitations as well. The central problem of electric motors is that they are powered by electricity produced primarily through the combustion of fossil fuels, such as coal, fuel oil, and natural gas. According to the IEA World Energy Outlook [8], over 80% of global power is generated by burning fossil fuels.
Significantly, electrical power production and transportation to charging stations entail losses associated with the efficiency of power facilities (boilers, turbines, etc.), electric generators, transmission lines, and the charging stations themselves. Moreover, the growth of the electric transport industry makes it necessary to recycle dead batteries, and there are still no mature solutions addressing this issue. As reported in [9,10], only 5% of all lithium-ion batteries are now recycled; the remaining 95% go to landfill sites for long-term storage. It is also important to consider the current level of infrastructure for the widespread adoption of electric vehicles into everyday life. Although the network of charging stations for electric vehicles is growing around the world, a significant proportion of cars, especially trucks, in the largest automotive markets (China, India, and South America) still rely on internal combustion engines [7,11], as they do not require a substantial upgrade of the whole national infrastructure or considerable government expenditure. An environmentally friendly alternative to conventional motor fuels can be offered by biofuels produced from plant-derived components [12]. Such fuels are carbon-neutral because the raw materials (trees, shrubs, etc.) used to produce them absorb CO 2 during growth, which compensates for carbon oxide emissions into the atmosphere during their combustion (Figure 1). Apart from carbon neutrality, these fuels have other benefits. For instance, no significant upgrade of fuel feeding systems at existing filling stations is required to use them. Another advantage is the high speed of refuelling, whereas a full charge of an electric vehicle may take from half an hour to several hours.
The purpose of this review paper is to systematize the main trends in the production and application of composite liquid fuels derived from plant components, identify the advantages and disadvantages of these technologies, and find promising development paths for this field. The difference of this work from other previously published reviews is that the emphasis here is on the diversity of the feedstock used and the types of biofuels obtained from it, suitable for a wide range of applications.
Liquid Biofuels: Feedstocks, Production Methods and Properties
There is a growing body of literature that recognizes the importance of using liquid biofuels. Preferable components for such fuels are vegetable oils or products of their thermochemical conversion [15,16], because this feedstock is almost carbon-neutral [17]. Apart from vegetable oils, acyclic hydrocarbons with the general formula C n H 2n+2 or their derivatives [18][19][20], alcohols [19,21], waste oils [22,23], biomass [24,25], municipal waste [26], etc. can also be used as feedstock. Not only liquid components but also solid waste from different industries (agriculture, MSW, and others) can be used for the synthesis of biofuels. These components are virgin raw materials for the preparation of pyrolysis or gasification products that can later be used as additives to biofuels or as a component for the synthesis of biofuels in pyrolytic cracking and hydrocracking units. Table 1 presents promising solid waste that can be used for the synthesis of biofuels. The most common method of biofuel production is the transesterification of initial components to produce certain esters. Thermochemical conversion and transesterification are used to produce new components; therefore, their subsequent application requires research into their properties and environmental, energy, and performance parameters. Such characteristics as ash content, specific heat of combustion, ignition and flash temperature, and elemental composition are enormously important for fuel applications. Density, viscosity, and crystallization temperature are also important, as these properties make it possible to predict the behavior of fuel mixtures during transportation, storage, and actual combustion in a furnace. Most studies on biofuels focus on the preparation of components. The synthesis often involves esterification and transesterification [46], pyrolytic cracking [47], hydrocracking [48], and hydrothermal liquefaction [49] (Figure 2). Additional steps or catalysts are often used to improve the production of biofuels. For example, a number of works [50,51] report that ionic liquids can reduce energy consumption throughout the process of biofuel production. In [50], 3 of the 5 investigated ionic liquids displayed good catalytic activity and resulted in conversions higher than 77%; the chosen ionic liquid, 1-methylimidazolium hydrogen sulfate, led to the highest conversion in the screening step.
The pretreatment of dry and ensiled hemp with steam for the production of ethanol was studied in [52]. The process efficiency was assessed in terms of sugar recovery and polysaccharide conversion when using enzymatic hydrolysis. It was established that impregnation with 2% SO 2 followed by steam pretreatment at 210 °C for 5 min resulted in the highest yield of glucose. Another effective method of biofuel synthesis is pyrolytic cracking using different catalysts [53][54][55]. Amini et al. [54] utilized waste cooking oil (WCO) consisting of triglycerides and contaminated derivatives from the frying process. Dolomite was used as a catalyst that accelerated the production of aromatic hydrocarbons during the chemical reactions. Amini et al. [54] used two-step pyrolytic cracking to obtain products such as gasoline, kerosene, and Diesel fuel; the resulting biofuels met the standards of the certified fuel [54]. Le-Phuc et al. [55] studied the conversion of high acid value (AV) waste cooking oils (WCOs) into biofuels through cracking over spent fluid catalytic cracking (SFCC) catalysts. WCOs were processed in a fluid catalytic cracking lab-scale unit at 450-520 °C. The authors note that this approach potentially minimizes the costs associated with buying catalysts and managing spent catalysts while maximizing the conversion of used vegetable oil into a clean fuel with almost zero AV. In addition, rare earth elements from spent FCC catalysts, after being used for WCO processing, can be recovered as a high-purity RE mixture. Therefore, this procedure has a high potential as a practical alternative that is both economically and environmentally beneficial. Thermochemical conversion of solid agricultural waste is the most promising method in regions with limited availability of fossil fuels. These methods have a number of benefits: a positive economic effect (reduced dependence on conventional energy sources, competitive advantage on the global market, variety of components and fuels produced); lower environmental impact (reduced emissions of greenhouse gases when using biofuel, recycling of stockpiled waste); and energy independence (availability of fuel, a wider scope of raw materials). Hydrocarbons can also be produced from coal. It is possible to obtain ultra-clean transportation fuels from coal in four sequential steps: coal gasification, syngas cleanup, F-T (Fischer-Tropsch) synthesis, and F-T product workup.
A Fischer-Tropsch (F-T) subsystem is incorporated into an IGCC (integrated gasification combined cycle) complex to produce an ultra-clean CTL (coal-to-liquids) fuel together with electric power, chemicals, and steam [56]. The most widespread methods of conversion are pyrolysis, gasification, and hydrothermal liquefaction. Pyrolysis results in three types of products that can be used in the energy sector (biochar, bio-oil, and syngas) [57]. Bio-alcohols are obtained using a hybrid method of fermentation with subsequent gasification. This combination of biological and thermochemical processes produces pure ethanol, methyl, or butyl alcohol. Preliminary fermentation of solid biomass waste catalyzes the subsequent gasification of the feedstock, since this method simultaneously uses different types of biomass [58]. Table 2 sets out the operating conditions of the equipment used for the synthesis of biofuels from plant raw materials and waste. Table 2. Operation parameters of equipment (synthesis method, parameters, reference).
Atomization
One of the most crucial factors for the combustion efficiency of any fuel is its spraying. A relevant task in this science and technology area is to identify the impact of a group of different factors on the spraying of biofuels. These factors include the physicochemical properties of biofuels, the pressure and timing of injection, the nozzle shape, the conditions of the ambient gas medium, etc. At present, there are many experimental and theoretical findings on the spraying of biofuels with different compositions; however, many of the spraying characteristics and the factors affecting them have not yet been determined for these fuel types. It is necessary to investigate the spraying of liquid fuels of bio-origin because their properties are distinctly different from those of conventional fuels. For instance, the high viscosity of biodiesel obtained through the transesterification of vegetable oils with monohydric alcohols worsens the spraying characteristics of droplets, and the fuel does not mix well with the air in the engine during ignition [70,71]. It is possible to improve the spraying of biodiesel by reducing the kinematic viscosity and surface tension by adding alcohol [72]. Suraj et al. [73] found that biodiesel stored for one year had a greater kinematic viscosity and a lower mass flow rate compared to fresh biodiesel. Park et al. [74] used a spray visualization system to investigate the spraying characteristics of biodiesel mixed with ethanol. The spraying of the composite fuel [74] was characterized by a smaller droplet size and faster evaporation (because of the high volatility of ethanol) compared to the spraying of pure biodiesel. Gao et al. [75] determined the spraying parameters of biodiesel derived from inedible oil (in different proportions) using high-speed video recording: spray penetration, spray cone angle, and spray tip speed. An increase in the proportion of biodiesel in the fuel composition led to greater spray penetration and a faster spray tip speed, yet to a smaller spray cone angle [75]. Chaudhari et al. [76] utilized the high-speed shadowgraphy technique to examine the spraying properties of the following fuels: Diesel fuel, biodiesel Azadirachta indica (Neem), and biodiesel mixed with anhydrous ethanol (50 vol%). The effect of fuel properties and injection pressure on the spraying and mixing of the fuel with the air was identified [76].
The most efficient spraying, i.e., the biggest cone angle, was recorded [76] for diesel mixed with ethanol with a volume fraction of 50% at an injection pressure of 50 MPa. This type of fuel is the preferred option for modern internal combustion engines for improved mixing of the fuel with the air. When a nozzle with an equilateral triangular orifice was used [77], the spray width of biodiesel was bigger than that of Diesel fuel at different injection pressures . The visualization of spraying in [76] revealed the occurrence of cavitation that improved the efficiency of spraying and hence, combustion. The collapse of cavitating droplets destroyed the spray jet, thus increasing the spray cone angle of the fuel [78]. Droplet cavitation parameters depend on the injection conditions and the nozzle geometry. Cavitation shedding was normally observed beside the lower wall of the nozzle at high injection pressure [79]. The sprayed fuel is disintegrated not only through cavitation but also by a transverse flow of gas under pressure, which is of great interest for internal combustion engines. Jagadale et al. [80] conducted experiments on the fragmentation of levitating droplets of ethanol, rapeseed methyl ester, and their emulsions induced by a laser pulse. They used an acoustic levitator for droplet levitation and its non-contact manipulation [80]. Three droplet breakup regimes were observed [80]: droplet rupture and air entrapment, sheet breakup, and prompt/catastrophic fragmentation. Jagadale et al. [80] established that emulsion droplets (ethanol and rapeseed methyl ester) broke up not so easily as rapeseed methyl ester droplets or ethanol droplets did, even at lower laser energy. The problem of environmental pollution is becoming more acute each year. There is a growing body of literature on devising solutions for reducing emissions by varying the spraying characteristics. For instance, Palash et al. [81] found that NO x emissions during the combustion of biodiesel are significantly affected not only by the physicochemical characteristics, flame temperature, and engine load but also by the injection time. They established experimentally [81] that NO x emissions can be effectively reduced (5-25%) by retarding biodiesel injection [81]. Park et al. [74] reported that early injection of biodiesel with ethanol contributed to lower emissions of exhaust gases. Evans et al. [82] studied how the methods of supplying biofuel into a hydrogen flame (prevaporization or direct spraying) affected soot formation. They showed [82] that in the direct spraying of toluene with a hydrogen-nitrogen fuel mixture, there was slightly more soot than in the case of spraying toluene as a vapor. As in [82], the prevaporization of palm methyl ester droplets resulted in little soot, hence, lower NO x and CO emissions [83]. Studies on biodiesel produced from coconut oil, waste coffee grounds, tomato seeds and microalgae [84] revealed that compared to the traditional fuel, using mixtures of 20% and 50% biodiesel reduced nitrogen oxides (NO x ) by 4.6 and 1.2%, respectively. Using sustainable aviation fuel, namely biofuel derived from Euglena [85] and biofuel from algae [86], can also cut greenhouse gas emissions compared to the conventional fuel. In contrast, a mixture of biodiesel produced from Scenedesmus obliquus produced increased emissions of carbon dioxide and nitrogen oxides [87]. Lapuerta et al. 
[88] also reported higher levels of emissions and soot in exhaust gases as a result of increasing the content of glycerol in the biodiesel composition. An overview of the recent research into biofuels indicates that their spraying has been at the forefront over the past few decades in an attempt to find optimal spray parameters by varying their composition and, hence, physicochemical properties. A general theory of the rational spraying of biofuels has not yet been built; one of the reasons for this is the variety of compositions of the fuels under study and of their properties.
Ignition and Combustion Performance of Liquid Biofuels
Ignition and combustion behavior is crucial for the sustainable operation of different engines, turbine plants, and boilers. Being intended for use in internal combustion engines, composite liquid fuels have certain specific features. Despite a great deal of research on this subject, numerous aspects still remain understudied. Ignition and combustion are complex processes in terms of chemical transformations and heat and mass transfer. Another important issue is the relationship between spraying, ignition, combustion, and emissions. The main area of application of biofuels is the diesel engine. Oo et al. [89] compared the ignition and combustion of Diesel fuel and several biodiesels (jatropha methyl ester, palm methyl ester, soybean methyl ester, and coconut methyl ester) using an experimental setup at different pressures and temperatures. They discovered [89] that the ignition delay times of the biodiesels were lower than those of Diesel fuel over the temperature range under study (350-950 °C). Of the biodiesels, coconut methyl ester demonstrated the fastest evaporation and ignition. Hidegh et al. [90] explored the properties and combustion characteristics of biodiesels produced from palm, coconut, and waste cooking oils. The biofuels were mixed with diesel and subjected to temperature-controlled combustion in a swirl burner; the fuels were burned in a test rig where the fuel was supplied together with air pre-heated to 150-350 °C. Hidegh et al. [90] established that the properties of mixtures with 25% biofuel were very similar to those of Diesel fuel. For instance, the initial boiling point and the flash point did not differ significantly. With a further increase in the content of biodiesel, the properties started changing. Thus, with the proportion of palm methyl ester increased from 25% to 100%, the initial boiling point and the flash point grew to 345 °C and 190 °C, respectively. Of the three biodiesels under consideration, coconut methyl ester differed the least from commercial diesel in the above temperature characteristics over a wide range of proportions [90]. The visualization of combustion revealed three flame shapes: straight, V-shaped, and distributed. The burner performance was consistent even under highly lean conditions. High atomization pressure and low air temperature caused distributed combustion. The ignition and combustion behavior of single droplets of crude and pure glycerol, petroleum diesel, ethanol, and biodiesel was explored by Setyawan et al. [91]. The fuel droplets were heated in an electric furnace in the temperature range of 675-775 °C. Setyawan et al. [91] pointed out that glycerol is overproduced. According to [92], every kilogram of biodiesel produced comes with approximately 0.1 kg of glycerol. This signifies that glycerol can be used as an alternative fuel for boilers.
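As a rough check of the mass balance behind this figure, the transesterification stoichiometry (one mole of glycerol released per three moles of fatty acid methyl ester) can be evaluated directly. The sketch below assumes methyl oleate as a single representative FAME, which is a simplification of real biodiesel, so the number is only indicative.

```python
# Rough stoichiometric check of the ~0.1 kg glycerol per kg biodiesel figure from [92].
# Assumption (not from the cited studies): methyl oleate (C19H36O2) is taken as a
# representative FAME; real biodiesel is a mixture of esters, so the result is approximate.

M_GLYCEROL = 92.09       # g/mol, C3H8O3
M_METHYL_OLEATE = 296.5  # g/mol, representative FAME

def glycerol_per_kg_biodiesel(m_fame: float = M_METHYL_OLEATE) -> float:
    """Transesterification: 1 triglyceride + 3 methanol -> 3 FAME + 1 glycerol,
    so one mole of glycerol is released per three moles of FAME."""
    return M_GLYCEROL / (3.0 * m_fame)

if __name__ == "__main__":
    ratio = glycerol_per_kg_biodiesel()
    print(f"~{ratio:.3f} kg glycerol per kg FAME")  # ~0.10, consistent with the quoted figure
```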
Depending on the production technology, crude glycerol may contain different proportions of impurities such as water, soap, methanol, fatty acid methyl esters, etc. In [91], crude glycerol contained at least 31% impurities. These impurities inevitably affect the dynamics of ignition and combustion of glycerol droplets. The experiments showed [91] that the size of a pure glycerol droplet varied greatly during heating, unlike the size of the other fuel droplets. Impurities caused micro-boiling during heating, which resulted in bubbles and variations in the shape and size of the fuel droplets. Setyawan et al. [91] found that the combustion rate of pure glycerol was lower than that of Diesel fuel, biodiesel, and ethanol. This result is largely attributed to the higher density and the rather low Spalding number of pure glycerol. Crude glycerol, on the contrary, was characterized by the highest burnout rate, which is due to its impurities [91]. Methanol and water reduced the total combustion time of crude glycerol droplets by increasing the combustion rate, which was due to decreased latent heat and boiling temperature, increased vapor pressure, and micro-explosions. As for the ignition rate, within the whole range of heating temperatures, the fastest ignition was typical of pure glycerol, and the slowest was observed with Diesel fuel. The greatest difference in the recorded ignition delay times was 0.3-1.8 s. Setyawan et al. [91] determined the flame standoff ratio (the flame radius to the droplet radius) for different liquid fuels. For droplets of ethanol and pure glycerol, this ratio remained constant throughout combustion, whereas for crude glycerol, Diesel fuel, and biodiesel it tended to increase. Tariq and Saleh [93] analyzed the possibility of using heavy petroleum fuel in a diesel engine. Such fuels have a low cetane number, which complicates their ignition in an internal combustion engine; overall, this fuel is almost unsuitable for a standard engine. Therefore, a one-cylinder test rig in the experiments was adjusted to heat the fuel to 70 °C before its injection into the combustion chamber. The authors tested heavy fuel oil and its blends with light fuel oil. The heating of the fuel mixture (80% light fuel, 20% heavy fuel) improved the burnout yet increased the exhaust gas temperature by approximately 30 °C. Tariq and Saleh [93] demonstrated good prospects for recovering cheap feedstock (fuel oil) and showed that satisfactory engine performance is possible with some adaptation of the fuel composition and operating conditions. The prospects of combining hydrocarbon fuel with components of plant or animal origin have recently gained momentum [94]. Significantly, using vegetable oils for combustion in engines has serious drawbacks. Because of the specific properties of oils (high density and viscosity), their ignition is more complicated than that of petroleum fuels. Thus, the viscosity of oils is 10-15 times as high as that of Diesel fuel [95]. Operating problems might occur because of the gradual formation of deposits, filter clogging, low-efficiency spraying, injector coking, and increased production of CO. High viscosity is a limitation both for heavy fuels [93] and for vegetable oils [96]. Uddin et al. [96] proposed mixing kerosene with mustard oil to make the fuel suitable for internal combustion engines. A 4-stroke single-cylinder diesel engine installed on a hydraulic dynamometer bed was used in the experiments [96].
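To see why a moderate oil fraction can bring a kerosene blend into a diesel-like viscosity range, one can apply a standard logarithmic viscosity blending rule (the Refutas viscosity blending number). In the sketch below, the kinematic viscosities assumed for kerosene and mustard oil at 40 °C are rough illustrative values, not measurements from [96], so the result is a qualitative estimate only.

```python
import math

def vbn(nu_cst: float) -> float:
    """Refutas viscosity blending number for a kinematic viscosity in cSt."""
    return 14.534 * math.log(math.log(nu_cst + 0.8)) + 10.975

def blend_viscosity(components):
    """components: list of (mass_fraction, kinematic_viscosity_cSt) tuples."""
    vbn_blend = sum(w * vbn(nu) for w, nu in components)
    return math.exp(math.exp((vbn_blend - 10.975) / 14.534)) - 0.8

# Assumed (illustrative) viscosities at 40 °C: kerosene ~1.5 cSt, mustard oil ~45 cSt.
for oil_fraction in (0.2, 0.25, 0.3):
    nu = blend_viscosity([(1.0 - oil_fraction, 1.5), (oil_fraction, 45.0)])
    print(f"{oil_fraction:.0%} mustard oil in kerosene -> ~{nu:.1f} cSt")
# The estimates fall near the typical Diesel fuel range (roughly 2-4.5 cSt at 40 °C).
```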
Adding 20-30% mustard oil to kerosene was found to produce a fuel with a viscosity comparable to that of Diesel fuel. This proportion provided a brake-specific fuel consumption of 258-270 g/kWh, which is comparable to the equivalent parameter of diesel oil (233.51 g/kWh) burned in this engine. Chivu et al. [97] outlined good prospects for using blends of conventional Diesel fuel with turpentine oil. Unlike vegetable oils, its density is higher than that of regular Diesel fuel. Chivu et al. [97] found that the performance indicators changed only slightly with a switch to the composite fuel (5-30% turpentine); thermal efficiency differed by no more than 1.5%. However, the emissions of hydrocarbons and nitrogen oxides were higher than during the combustion of common diesel [97]. Fuel mixtures provide adequate heat of combustion [98] and other characteristics (e.g., density and brake-specific fuel consumption). Hossain et al. [98] prepared stable emulsions of "water-rapeseed oil-Diesel fuel" and "water-rapeseed oil" to be sprayed and burned in a 2-cylinder engine. The proportions of water and surfactant were 2.5-5% and 2%, respectively. Compared to Diesel fuel, emulsions based on rapeseed oil have a 10-15 °C higher flash temperature; the higher the percentage of water and rapeseed oil, the higher the flash temperature. The highest flash temperature (118 °C) was recorded for the mixture with 95.5% rapeseed oil, 2.5% Diesel fuel, and 2% surfactant [98]. Hossain et al. [98] estimated that at full load, the thermal efficiency during the combustion of the emulsion derived from vegetable oil was 12% higher than that of Diesel fuel. A review of the literature shows that compromise solutions are necessary when using liquid biofuels to achieve satisfactory ignition and combustion indicators, as well as acceptable operating parameters of the equipment. Thus, for instance, adjusting straight vegetable oils to power facilities is rather problematic. Therefore, despite the availability and diversity of vegetable oils, they can only be used in blends with a conventional fuel in relatively low proportions (5-15%) or can be processed into biodiesel. Vegetable oils are characterized not only by high viscosity and density but also by a negative impact on some metals (copper and its alloys, zinc, lead, and iron) because of the free fatty acids in their composition [95]. These factors affect the durability and smoothness of operation of units and engines over time. In summary, to ensure the viability of an engine running on a non-conventional fuel, strategies are being developed on the basis of microemulsification, preliminary heating of the fuel, and the creation of blends with fossil Diesel fuel.
Emission Performance
In line with the general trend towards a reduction in anthropogenic emissions from transport and power-generating facilities, much attention is given to the environmental and social aspects of the application of alternative fuels (Figure 3). The main pollutants from the combustion of different types of fuels are carbon oxides (CO 2 and CO), nitrogen oxides (primarily NO and NO 2 ), sulfur oxides (SO 2 ), and polycyclic aromatic hydrocarbons (PAH) [99]. The combustion of fuels also produces particulate matter (PM) of different sizes. The greatest health hazard for humans and animals is volatile microparticles with a size of less than 2.5 µm (PM2.5), as they can penetrate through the biological barriers of living beings [100].
From 1990 to 2010, about 3.1 million deaths were caused by PM2.5 particles. Moreover, PM2.5 particles were found to reduce life expectancy by 8.6 months on average. PM2.5 is responsible for a total of 3% of deaths from cardiovascular and respiratory diseases and 5% of deaths from lung cancer [101]. Carbon dioxide emissions are not hazardous for human respiration; however, excess carbon dioxide acts as thermal insulation for the planet. Carbon monoxide (CO), on the contrary, is poisonous to humans and animals [102,103]. CO enters the lungs and binds to hemoglobin to form carboxyhemoglobin, thus preventing the transportation of oxygen by the blood and leading to hypoxia [103]. At high concentrations, it has a toxic effect; namely, it inhibits cellular respiration in the cerebral cortex [103].
They are also toxic themselves and cause irritation to the mucous membrane. Nitrogen dioxide (NO 2 ) mainly affects the airways and lungs. It also alters the blood composition, or more specifically, reduces the content of hemoglobin in the blood [105,106]. Sulfur oxides entering the atmosphere can travel hundreds of kilometers before converting to H 2 SO 4 to precipitate with rain. People living near sources of sulfur oxide emissions often suffer from wheezing, coughing, and mucous membrane irritation [104] (Figure 3). Diesel-Biofuel Emissions Hamza et al. [107] investigated the emissions of particulate matter (PM) during the combustion of composite fuels derived from common Diesel fuel, biodiesel, and kerosene. To this end, an experimental rig was constructed on the basis of a 4-stroke, 4-cylinder Diesel engine. Biofuel was produced from sunflower oil through transesterification. Four fuel blends were produced to compare them with conventional Diesel fuel: Diesel fuel with a volume fraction of biodiesel of 10% and 20% (marked BD10 and BD20, respectively) and kerosene with 10% and 20% biodiesel (marked KB10 and KB20, respectively). Hamza et al. [107] established that using composite fuels (namely, BD10, BD20, KB10, and KB20) reduced the emissions of PM1.0 by up to 12.3%, 36.65%, 60.92%, and 81%, respectively, compared to conventional Diesel fuel. The emissions of PM2.5 fell by up to 21.29%, 25%, 41.43%, and 51.85% for the fuel mixtures BD10, BD20, KB10, and KB20. Two factors account for this [107]. First, biodiesel has a high content of oxygen, which causes complete combustion and contributes to the oxidation of soot. The second reason is the low content of sulfur in the composite fuel, especially when kerosene, rather than Diesel fuel, is used as a base. Another study on the formation of PM during biodiesel combustion is the review paper by Mohankumar et al. [108]. The authors note that the use of biodiesel can significantly reduce PM emissions because of low sulfur, aromatic components, and high oxygen content. Arias et al. [109] studied the PAH emissions of a diesel engine run on different fuel types. The experiments were conducted using a Diesel fuel, to which a road load simulation system was connected. It simulated the operation of the gearbox, tires, and other powertrain-dependent parts of a Nissan Qashqai [109]. The tested fuels were conventional Diesel fuel, commercial fuel derived from hydrotreated vegetable oils (HVO), and four experimental fuel types: hydrogenated turpentine and hydrogenated orange oil at 20 vol% were blended with Diesel fuel (HT20 and HO20, respectively); polyoxymethylene dimethyl ethers (OME) at 20 vol% were blended with Diesel fuel (OME20); biofuel derived from glycerol (consisting of a mixture of fatty acid methyl esters (FAME, 70 vol%), fatty acid glycerol esters (FAGE, 27 vol%) and acetals (3 vol%)), was blended at 20 vol% with 80 vol% HVO (the resulting fuel was given a commercial name of SLB100). Arias et al. [109] reported that the blend of Diesel fuel with hydrogenated turpentine had the highest PAH emissions in the engine tests. The lowest PAH emissions in all the engine tests were typical of HVO. This is explained by its specific thermophysical characteristics. HVO has the highest lower heating value, so it exhibited the lowest fuel consumption. Moreover, with no oxygen in its composition, neat HVO takes more time to burn than the other biofuels and blends. 
It has the best thermal efficiency at low and high loads, as combustion is more centered around the top dead center of the piston unit. Similar findings were obtained in [110][111][112] when using HVO. Kerosene-Biofuel Emission Gas-turbine power units are another promising area for using biofuels with a high priority on the environmental aspect. Some studies demonstrate the benefit of biofuels in terms of nitrogen oxide emissions [113,114]. López Juste and Salvá Monfort [114] examined the combustion behavior of a composite fuel produced from a commercial biofuel B-EUO4-B with a volume fraction of 80% and ethanol with a volume fraction of 20%. The obtained characteristics were compared with those of the traditional aviation fuel JP-4. The experiments were carried out in a gas turbine combustor equipped with a swirl atomizer. López Juste and Salvá Monfort [114] established that at a fuel/air ratio of 0.09, which corresponds to the energy contribution by a unit of mass of gases about 1.36 MJ/kg, the emissions of nitrogen oxides in the combustion of JP-4 were four times as high. At a fuel/air ratio of 0.06 and energy contribution by a unit of mass of gases 1 MJ/kg, the emissions of NO x in the combustion of JP-4 were comparable to those for the bio-oil/ethanol blend. This result is accounted for by a higher temperature of the flame during the combustion of JP-4, which causes additional formation of nitrogen oxides [114]. Boomadevi et al. [115] performed tests using a small experimental jet engine. The operation parameters of the engine and the level of anthropogenic emissions were determined for the fuel blends produced from Jet-A and Spirulina algae biofuel. The following fuels were investigated: B20% (20% biofuel with 80% Jet-A); B40% (40% biofuel with 60% Jet-A); B60% (60% biofuel with 40% Jet-A); B80% (80% biofuel with 20% Jet-A); biofuel B100%. The authors found [115] that using biofuel as an additive lowered the emissions of CO 2 at any turbine engine speed. Thus, the emissions of CO 2 when using 100% Jet-A were 3025 g/kg of fuel at a speed of 30,000 rpm and 3095 g/kg of fuel at 80,000 rpm. At the same time, the emissions of CO 2 when using B20%, B40%, and B60% were 3000-3050 g/kg of fuel, 2950-3010 g/kg of fuel, and 2920-2090 g/kg of fuel, respectively. It was also established that an increase in engine speed reduced the emissions of CO. As with CO 2 emissions, the highest CO emissions were also recorded for the 100% Jet-A. An increase in the rotation speed from 30,000 rpm to 80,000 rpm when using the 100% Jet-A reduced the production of carbon monoxide from 138 g/kg of fuel to 68 g/kg of fuel. When using B20%, the emissions of CO fell from 104 g/kg of fuel to 48 g/kg of fuel. Nitrogen oxide emissions increased with the higher speed of the turbine engine. For instance, with the 100% Jet-A, the emissions of NO x rose from 0.2 g/kg of fuel to 1.2 g/kg. In the case of B20%, this indicator varied in the range of 0.18-1.15 g/kg. However, a further increase in the proportion of biofuel in the blend composition increased the emissions of NO x . The reason for this is a higher concentration of oxygen in the blends, which led to additional oxidation of nitrogen in the combustion zone. Gasoline-Biofuel Emissions A promising area for using biofuels is gasoline powertrains. In [116], the combustion and emission characteristics of a gasoline engine were investigated. 
It was fueled by a blend based on gasoline and hydrogenated catalytic biodiesel (HCB) derived from waste cooking oils through one-step catalyzed hydrogenation processes. The concentration of biodiesel in the fuel blend varied in the range of 20-40 vol%. Zhang et al. [116] established that an increase in the proportion of biodiesel in the fuel blend composition reduced the emissions of NO x , CO, and HC. However, the emissions of PM increased. This occurred because the high content of HCB reduced the ignition delay time and worsened the mixing of the fuel with the air, thus leading to higher emissions of solid particles. The ignition and combustion behavior of composite fuels based on gasoline and lemon peel oil was examined in [117]. The experimental setup was a twin-cylinder gasoline engine with a combustion endoscopic window fitted in it. The fuel blends under study consisted of gasoline with an octane number of 87 and lemon peel oil with a volume fraction of 10-30% (named Lp10-Lp30). Velavan et al. [117] reported findings on the concentrations of anthropogenic emissions when varying the proportion of lemon peel oil in the composite fuel composition. The lowest emissions of CO were recorded for the fuel with 10% lemon peel oil, and the highest for the fuels with 20-30% lemon peel oil. This result is explained by a longer diffusion combustion phase compared to gasoline and Lp10. Diffusion combustion has a higher localized temperature zone, which leads to the destruction of fuel droplets with soot formation and their incomplete burnout in the gas phase. The higher viscosity and slower evaporation of lemon peel oil lead to incomplete evaporation of the fuel, which contributes to the diffusion combustion. The CO emissions in the combustion of Lp10 were 5-7% lower than in the combustion of gasoline. The opposite is true for the emissions of CO 2 : the highest content of carbon dioxide in the combustion products was observed for Lp10 and gasoline, which is accounted for by the more complete burnout of these fuels. Predictably enough, Lp20 and Lp30 were characterized by lower CO 2 emissions because of diffusion combustion; the content of CO 2 in the combustion products of Lp20 and Lp30 was 2.8% and 4.6% lower than that for gasoline. Velavan et al. [117] obtained interesting findings on the emissions of nitrogen oxides. The emissions of NO x were maximum for Lp20 and Lp30 and minimum for Lp10. Compared to gasoline, the combustion of Lp10 at full load proceeded with 5% lower emissions of nitrogen oxides. This can be explained by the fact that the slightly higher latent heat of vaporization of the lemon peel oil reduces the temperature in the combustion zone, which, in turn, reduces the emissions of NO x . However, the emissions for Lp20 and Lp30 were 4% and 9% higher than those for gasoline. This result is due to diffusion flames: the blending of gasoline with lemon peel oil reduces the evaporation rate of the fuel, thus enhancing the diffusion combustion flames. The diffusion flame mechanism, on the one hand, provides a more stoichiometrically correct air/fuel ratio in the combustion zone; however, the air/fuel ratio in the other part of the combustion chamber may be lower than the stoichiometric one. An increase in the adiabatic temperature of the flame promotes localized formation of NO x in the combustion zone. A similar experimental setup was employed in the study by Manoj Babu et al. [118].
They examined gasoline and its blends with pine oil at volume fractions of 10-30% (named Pn10-Pn30). They established [118] that the lowest emission of nitrogen oxides was recorded for the fuel blend Pn30 over the whole range of engine loads. The lower emissions stem from a decrease in the combustion temperature caused by the high density of pine oil, which impairs the formation of the mixture with the air. Moreover, the calorific value of pine oil is lower than that of gasoline; therefore, an increase in the proportion of pine oil in the blend lowers the calorific value of the resulting fuel, thus reducing the heat release. At the minimum engine load, the highest emissions of nitrogen oxides were recorded for gasoline. With an increase in the engine load, the maximum emissions of NO x were typical of the fuel blends Pn10 and Pn20. The increased emissions for Pn10 and Pn20 at high loads were caused by a significant amount of oxygen in the combustion zone and a high temperature in the cylinders. As reported by Manoj Babu et al. [118], the heat release for these two blends at higher loads was higher than for gasoline; therefore, the temperature in the cylinders was high enough for the formation of NO x . The maximum emissions of CO were recorded for the fuel blend with 30 vol% pine oil. CO emissions are mainly produced because of oxygen shortage in the combustion zone or a low temperature in the cylinder. Manoj Babu et al. [118] concluded that the cause of the high content of carbon monoxide in this case was the low heat release during the combustion of Pn30. An interesting trend is observed for the fuels Pn10 and Pn20. At low engine loads, when there is not enough oxygen in the combustion zone, these composite fuels have higher emissions of CO than pure gasoline. However, with an increase in the load, and thus a greater amount of oxygen supplied to the combustion zone and greater heat release in the cylinders, the emissions of CO for Pn10 and Pn20 are lower than those for gasoline. This leads to the conclusion that the anthropogenic emissions of all conventional liquid hydrocarbon fuels can be reduced by using biofuels or different oil types as additives. However, it should be taken into account that the level of emissions is determined not only by the fuel composition and characteristics but also by the operating parameters of the power unit used with these fuels. Table 4 summarizes studies on anthropogenic emissions when using biofuels in powertrains. Taken together, these findings suggest that the optimal choice of fuel composition and powertrain operation can bring about a significant environmental effect.
Table 4. Anthropogenic emissions reported for biofuels in power units (reference | fuel composition | power unit | main findings).
[115] | Jet-A blended with 20-100 vol% Spirulina algae biofuel (B20-B100) | Experimental jet engine | Emissions of CO 2 decreased by up to 11%, CO by up to 35%, and NO x by up to 5% compared to Jet-A.
[121] | Jet A + 20%/40% biodiesel (B20/B40) | Turbofan engine (CFM56-7B) | Emissions of NO x decreased for B20 and B40 by 29% and 23%, respectively, against Jet A.
[122] | Diesel fuel + 10-50% biodiesel based on palm oil (B10-B50) | MGT (30 kW) | Lower emissions of CO and NO x when using the composite fuels rather than Diesel fuel.
[116] | Gasoline + 20-40% hydrogenated catalytic biodiesel (HCB) | Gasoline engine | Emissions of NO x , CO, and HC decrease when using HCB; PM emissions increase when using HCB.
[117] | Gasoline + 10-30% lemon peel oil (Lp10-Lp30) | Gasoline engine | CO emissions for Lp10 were 5-7% lower, and NO emissions 5% lower, than for gasoline; CO 2 emissions for Lp30 were 4% lower than for gasoline.
[118] | Gasoline + 10-30% pine oil (Pn10-Pn30) | Gasoline engine | Maximum emissions of CO and minimum emissions of NO x were recorded for the blend with 30 vol% pine oil.
Conclusions
(i) Many aspects of the energy industry are crying out for revision and innovation, and biofuels are attracting more and more interest in this respect. The conducted review has revealed that this area provides tremendous opportunities for replacing fossil fuels of petroleum origin. However, the diversity of raw materials and their processing methods determines the corresponding variation ranges of end-product properties. Moreover, only some technologies provide high-quality biofuel (transesterification and some types of thermal conversion); more complex technologies (cracking and hydrocracking) serve to create biofuels whose performance is comparable to that of conventional hydrocarbon fuels.
(ii) Spraying, being a practical aspect of using liquid fuels, is an important stage. Data from numerous studies suggest that this stage largely determines the combustion efficiency of the aerosol and, thus, the thermal efficiency of a unit and its emissions. The increased viscosity of composite liquid fuels containing biofuel is the main constraint on efficient spraying in a combustion chamber. To solve this problem, researchers have proposed using additives (e.g., ethanol), preliminary heating of the fuel, or transverse injection of gas to provide droplet disruption.
(iii) The carbon neutrality of plant raw materials makes composite biofuels very environmentally attractive. Particular performance characteristics of the engine may vary, and in some cases, the combustion of composite fuels is inferior to that of conventional Diesel fuel, kerosene, and gasoline in terms of emissions. There is a clear trend towards a decrease in the emissions of particulate matter (PM) with an increase in the proportion of biodiesel or bio-kerosene in the liquid fuel composition. At the same time, the yield of CO depends greatly on the operating conditions of the equipment. Generally, the review indicates that CO emissions tend to rise when the proportion of oils or biofuel exceeds 15-20%, which is largely attributed to a lower combustion temperature.
(iv) The fuel properties, chemical composition, and presence of impurities affect the spraying, mixing with the air, rates of evaporation, and kinetics of ignition and burnout. The studies conducted with different composite fuels reveal that satisfactory combustion is possible by controlling the fuel density and viscosity, as well as by selecting compositions with a high evaporation rate. The impact of the new kinds of fuels on the condition of components and parts of engines, corrosion, and wear remains understudied.
The most promising development paths are reducing the cost of cracking raw materials, finding affordable low-viscosity components for fuel blends, and upgrading plants and fuel feeding systems for optimal atomization, evaporation, ignition, and burnout of fuel mixtures.
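Point (iv) of the Conclusions turns on how blending shifts bulk fuel properties such as density and calorific value. As a rough first-order illustration only (this calculation is not part of the reviewed studies), the sketch below applies a volume-weighted density and a mass-weighted lower-heating-value mixing rule to a gasoline/bio-oil blend; the component property values are illustrative placeholders, and real blends, especially their viscosity, can deviate substantially from linear mixing.

```python
# Rough sketch of first-order blend-property estimates (volume-weighted density,
# mass-weighted lower heating value). Component property values are illustrative
# placeholders only, not data from the reviewed studies.
def blend_properties(vol_frac_bio, rho_base, rho_bio, lhv_base, lhv_bio):
    """vol_frac_bio: volume fraction of the bio-component (0..1);
    rho_*: densities (kg/L); lhv_*: lower heating values (MJ/kg)."""
    rho_blend = (1 - vol_frac_bio) * rho_base + vol_frac_bio * rho_bio
    # Mass fractions follow from the volume fractions and densities.
    m_base = (1 - vol_frac_bio) * rho_base
    m_bio = vol_frac_bio * rho_bio
    lhv_blend = (m_base * lhv_base + m_bio * lhv_bio) / (m_base + m_bio)
    return rho_blend, lhv_blend

for frac in (0.10, 0.20, 0.30):   # blend levels in the style of Pn10-Pn30
    rho, lhv = blend_properties(frac, rho_base=0.74, rho_bio=0.87,
                                lhv_base=43.5, lhv_bio=41.0)
    print(f"{int(frac * 100)} vol% bio-oil: density = {rho:.3f} kg/L, LHV = {lhv:.1f} MJ/kg")
```

Even under this crude linear rule, the blend calorific value falls and the density rises as the bio-component fraction grows, consistent with the qualitative trend described for the pine-oil blends above.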
2023-08-13T15:16:55.007Z
2023-08-11T00:00:00.000
{ "year": 2023, "sha1": "18ac570d2eec826d6fea50658b01b7c68dbd8a13", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1073/16/16/5939/pdf?version=1691745520", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "8c22f27840fc354184a2a66a3dccf4f87c83a8ff", "s2fieldsofstudy": [ "Environmental Science", "Engineering", "Chemistry" ], "extfieldsofstudy": [] }
208704282
pes2o/s2orc
v3-fos-license
Enhanced capacitive properties of all-metal-oxide-nanoparticle-based asymmetric supercapacitors The major problem of transition metal oxide (TMO)-based supercapacitors is their low specific energy (Esp) due to the poor electrical conductivity of the TMO electrodes and narrow operating voltage window. To solve these limitations simultaneously, we propose asymmetric supercapacitors (ASCs) consisting of two composite TMO electrodes working in different potential ranges. Titanium dioxide (TiO2) nanoparticle (NP)-incorporated iron oxide (Fe2O3) and manganese oxide (MnO2) NPs were used as electrode materials covering the negative and positive potential window, respectively. The specific capacitance (Csp) of this asymmetric TiO2–Fe2O3‖TiO2–MnO2 supercapacitor is comparable to that of the symmetric TiO2–MnO2‖TiO2–MnO2 supercapacitor. However, the ASC can operate over a doubly extended voltage range, which resulted in a significant enhancement in the specific energy of the device. The Esp value of the ASC at a specific power of 1000 W kg−1 is 48.6 W h kg−1, which is 34.1 and 8.1 times, respectively, larger than that of the two symmetric devices. Introduction Pseudocapacitors have attracted growing attention due to their superior specific capacitance (Csp) compared to other types of supercapacitors, mainly electrical double layer capacitors (EDLCs). [1][2][3] The high Csp values of pseudocapacitors are attributed to the faradaic redox reactions of the electrode materials for storing charges, while EDLCs use only non-faradaic charge storage on the electrode surface. 3,4 In particular, transition metal oxides (TMOs) have been widely investigated as electrode materials for pseudocapacitors due to their high theoretical Csp values and varied redox characteristics. [5][6][7] However, the major limitation of TMO-based pseudocapacitors is their low energy density due to the intrinsically narrow redox potential of the TMOs, because the energy density (E) is proportional to the square of the operating voltage window (ΔV), E = Csp(ΔV)^2/2. 8,9 To solve this problem, asymmetric supercapacitors (ASCs) have been studied to widen the voltage window by integrating two electrodes with different working potentials. Normally, TMOs with a positive working potential have been used as the positive electrode, while carbon materials such as graphene, 9,10 activated carbon, 11,12 and carbon nanotubes 13 have been used as the negative electrode. However, the relatively low specific capacitance of carbon-based electrodes still hampers further enhancement of the energy density. Thus, ASCs composed of two TMO-based electrodes with different redox potentials are a promising candidate. Another challenge is that most TMOs have poor electrical conductivity, which leads to a low rate capability and a rapid fall-off in capacitance as the electrode thickness increases. [14][15][16] To overcome this problem, various electrode nanostructures have been proposed to increase the contact between the electrode and electrolyte. [16][17][18][19] Although nanostructured electrodes have improved the capacitive characteristics to some extent, most fabrication techniques are still expensive and complicated for practical use. In contrast, much simpler, low-cost preparation routes for a variety of TMO nanoparticles (NPs) have been well established, and hence TMO NPs are promising materials for nanostructured supercapacitor electrodes.
However, very limited studies have been reported on all TMO-NPs-based ASCs; therefore, the preparation and capacitive property of such devices are of signicant interest from both a fundamental and technological perspective. In this study, we developed the ASC consisting of manganese oxide (MnO 2 ) and iron(III) oxide (Fe 2 O 3 ) NPs-based electrodes. These two TMOs are low-cost, non-toxic and easily obtained from naturally abundant minerals. Furthermore, these TMObased electrodes work at different potentials, and hence their combination is expected to widen the voltage window of the ASC. [20][21][22] The working potentials of the MnO 2 and Fe 2 O 3 electrodes were reported as 0.0-1.0 V and À1.1-0.2 V, respectively. 15,[23][24][25][26] A small amount of titanium oxide (TiO 2 ) NPs was also incorporated into the MnO 2 and Fe 2 O 3 electrodes for a synergetic improvement of the electrode performance. The incorporation of the TiO 2 NPs was expected to improve the charge transport characteristics of the TMO electrodes because the electrical conductivity of TiO 2 (10 À5 to 10 À2 S cm À1 ) is higher than that of MnO 2 (10 À6 to 10 À5 S cm À1 ) and Fe 2 O 3 (10 À9 to 10 À7 S cm À1 ). 27,28 In addition, the TiO 2 NPs are stable, inexpensive and readily available. The all-TMO-NPs-based ASC with an architecture of TiO 2 -Fe 2 O 3 kTiO 2 -MnO 2 could operate over a signicantly extended voltage range of 2.0 V, and hence, its specic energy was approximately 34.1 and 8.1 times larger than those of the TiO 2 -Fe 2 O 3 kTiO 2 -Fe 2 O 3 and TiO 2 -MnO 2 kTiO 2 -MnO 2 symmetric supercapacitors, respectively. Other capacitive properties of the Fe 2 O 3 kMnO 2 -based ASC such as the voltammetric response and cycle life were also investigated. Experimental First, the synthetic process for the MnO 2 and Fe 2 O 3 NPs from their precursors, potassium permanganate (KMnO 4 ) and iron(III) nitrate (Fe(NO 3 ) 3 ), respectively, was described elsewhere. 29 During the synthesis of the NPs, commercially available TiO 2 NPs (anatase, Dysol) with an average diameter of 10 nm were added to the synthetic reactor for their incorporation. The amount of TiO 2 NPs used was one-tenth the weight of each precursor, which corresponds to 15 wt% for the TiO 2 -MnO 2 composite NPs and 13 wt% for the TiO 2 -Fe 2 O 3 composite NPs, assuming that the reaction proceeded completely. The TiO 2 -MnO 2 electrode was prepared by dipping a nickel foam into a slurry containing TiO 2 -MnO 2 NPs as an active material, super-P as a conductive additive, and polytetrauoroethylene (PTFE, Aldrich) as a binder with a weight ratio of 8 : 1 : 1. The TiO 2 -Fe 2 O 3 electrode was prepared in a similar manner except that a uorine-doped tin oxide (FTO) substrate was used instead of Ni foam as a current collector. The loading mass of each electrode was measured using a microbalance. The ASC was prepared by stacking the TiO 2 -MnO 2 electrode and TiO 2 -Fe 2 O 3 electrode with a membrane lter paper as a separator in between. The morphology and crystalline structure of the synthesized NPs were characterized by eld emission scanning electron microscopy (FE-SEM, JSM-7410F, JEOL Ltd.), transmission electron microscopy (TEM, JEM-2100F, JEOL Ltd.), and X-ray diffraction (XRD, Rigaku D/Max2500). The specic surface area of the NPs were also estimated using a Brunauer-Emmett-Teller (BET) surface area analyser (QuadraSorb Station 2, Quantachrome Instrument). 
The capacitive properties of the electrodes and ASCs were evaluated using a cyclic voltammeter (ZIVE SP2, WonATech) in a 1.0 M aqueous Na 2 SO 4 electrolyte solution. While a two electrode system was used for the symmetric and asymmetric full-cell capacitors, a three electrode system, i.e. the TMO electrode as a working electrode, a platinum plate as a counter electrode, and Ag/AgCl (in 3 M KCl) as a reference electrode, was used to evaluate the individual electrode. Results and discussion The morphologies of the TiO 2 -NPs-embedded Fe 2 O 3 and MnO 2 NPs were characterized by FE-SEM ( Fig. 1a and b). Uniform, spherical particles with tens of nanometers in size were clearly observed. The elemental SEM-mapping results for the Ti atoms shown in the inset of the gures indicated that the TiO 2 NPs were well and uniformly dispersed. The Ti/Fe and Ti/Mn atomic ratios determined from the SEM-mapping analyses were equivalent to the mole fraction of TiO 2 used, indicating a stoichiometric embedment of the TiO 2 NPs. The crystallinities of the TiO 2 -Fe 2 O 3 and TiO 2 -MnO 2 composite NPs were characterized by XRD measurements. While clear diffraction patterns corresponding to the hematite a-Fe 2 O 3 phase 30 for the TiO 2 -Fe 2 O 3 NPs were observed (Fig. 1c), the broader peaks of the TiO 2 -MnO 2 NPs (Fig. 1d) indicated typical amorphous characteristic of the MnO 2 NPs. [31][32][33] Additional diffraction peaks corresponding to the TiO 2 anatase phase were also observed for both composite NPs. More close morphology of the synthesized TMO NPs were observed by TEM (Fig. S2 †). In the case of Fe 2 O 3 NPs, the clear lattice fringe in the TEM image and spots and rings in the selected area electron diffraction (SAED) pattern indicated the polycrystalline characteristics. In contrast, for the MnO 2 NPs, the fringes and SAED spots were hardly observed, indicating relatively amorphous characteristics, consistent with the XRD results. Fig. 2a shows the cyclic voltammogram (CV) measured using a three-electrode system at a scan rate of 30 mV s À1 for the Fe 2 O 3 (negative voltage range) and MnO 2 (positive voltage range) halfcell electrodes with and without TiO 2 incorporation. The specic capacitance, C sp , of the electrodes was calculated by the following equation; where I (A) is the average current, m (g) is the deposit weight, and dV/dt (mV s À1 ) is the scan rate. 34 The area of the CV contour, indicative of the specic capacitance, of the TiO 2 -MnO 2 electrode is signicantly larger than that of the TiO 2 -free MnO 2 electrode, while the areas of the Fe 2 O 3 electrode are little varied. The calculated C sp value of the MnO 2 electrode increased from 114.0 F g À1 to 141.5 F g À1 aer the TiO 2 incorporation. The CVs of the half-cell electrodes before and aer TiO 2 incorporation measured at various scan rates are shown in Fig. S3. † More quantitative analyses on the effect of the TiO 2 incorporation were carried out by deconvoluting the capacitive elements of the electrodes. At a given applied voltage, a current is regarded as a sum of the two capacitive elements, namely a surface element (k 1 v) and faradaic insertion element (k 2 v 1/2 ), according to the following equation; 35,36 i where v is the scan rate and k 1 , k 2 are constants. The process of extracting elements was described elsewhere. 37 The deconvoluted CV graphs for the MnO 2 electrodes at scan rates of 30 mV s À1 and 100 mV s À1 are shown in Fig. 2b. 
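In practice, the split of the measured current into the k1·v surface term and the k2·v^1/2 insertion term described above is obtained by a linear least-squares fit across scan rates at each fixed voltage, since i/v^1/2 = k1·v^1/2 + k2 is linear in both unknowns. The sketch below illustrates this fit together with the Csp = I/(m·(dV/dt)) evaluation; it is a minimal illustration with made-up scan rates, currents and electrode mass, not the authors' code or data, and the exact procedure of ref. 37 may differ in detail.

```python
# Sketch (not the authors' code): splitting CV currents into surface- and
# diffusion-controlled parts via i(V, v) = k1*v + k2*sqrt(v), fitted across
# scan rates at one fixed voltage. The numbers below are synthetic.
import numpy as np

def split_capacitive_elements(scan_rates_mV_s, currents_A):
    """Least-squares fit of i/sqrt(v) = k1*sqrt(v) + k2 at one voltage.
    Returns (k1, k2): k1*v is the surface element, k2*sqrt(v) the insertion element."""
    v = np.asarray(scan_rates_mV_s, dtype=float)
    i = np.asarray(currents_A, dtype=float)
    sqrt_v = np.sqrt(v)
    # Design matrix for i/sqrt(v) = k1*sqrt(v) + k2
    A = np.column_stack([sqrt_v, np.ones_like(v)])
    k1, k2 = np.linalg.lstsq(A, i / sqrt_v, rcond=None)[0]
    return k1, k2

def specific_capacitance(avg_current_A, mass_g, scan_rate_mV_s):
    """C_sp = I / (m * dV/dt), with the scan rate converted from mV/s to V/s."""
    return avg_current_A / (mass_g * scan_rate_mV_s * 1e-3)

if __name__ == "__main__":
    v = np.array([10.0, 30.0, 50.0, 100.0])        # scan rates, mV/s
    i = 2.0e-4 * v + 1.5e-3 * np.sqrt(v)           # synthetic currents, A
    k1, k2 = split_capacitive_elements(v, i)
    print(f"k1 = {k1:.2e} (surface), k2 = {k2:.2e} (insertion)")
    print(f"C_sp at 30 mV/s = {specific_capacitance(i[1], 2.0e-3, 30.0):.1f} F/g")
```

Regressing i/sqrt(v) on sqrt(v) keeps the problem linear in the two unknowns, which is why this form is commonly used for the deconvolution.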
The k 1 v plot corresponding to the surface capacitive element is shown as a solid area, and the other shaded area corresponds to the insertion element. The surface element of the TiO 2 -free MnO 2 electrode was approximately 34.9 F g À1 , which was virtually independent of the scan rate. 36 However, the insertion element decreased from 79.0 F g À1 at 30 mV s À1 to 23.8 F g À1 at 100 mV s À1 because the diffusion-controlled insertion was less accessible at a higher scan rate. 38,39 The contribution of the insertion element in the total capacitance hence decreased from 69.3% at 30 mV s À1 to 40.5% at 100 mV s À1 . The incorporation of the TiO 2 NPs signicantly increased the surface element to 69.2 F g À1 . The insertion element also increased to 72.3 F g À1 at 30 mV s À1 and 33.0 F g À1 at 100 mV s À1 . As a result, at 30 mV s À1 , the total specic capacitance of the TiO 2 -incorporated MnO 2 electrode (141.5 F g À1 ) was 1.24 times larger than that of the TiO 2 -free MnO 2 electrode (114.0 F g À1 ). At a high scan rate, the effect of the TiO 2 incorporation was more apparent, reaching a 1.73 times increase in C sp from 58.8 F g À1 to 102.2 F g À1 at 100 mV s À1 . This enhancement in specic capacitance and voltammetric response is probably attributed to the reduced resistance of the electrode due to incorporation of electrically more conductive TiO 2 NPs. The resistive elements of the electrodes was evaluated by electrochemical impedance spectroscopy (EIS) measurements ( Fig. 3a and b). The diameter of the round curve in the high-frequency region of the Nyquist plots, indicative of the charge-transfer resistance, R ct , were signicantly smaller for the TiO 2 -incorporated electrodes. The diffusive resistance, R d , which is inversely proportional to the slope of the line in the low-frequency region, also decreased by the TiO 2 incorporation. While the incorporation of the TiO 2 NPs improved both the capacitive elements of the MnO 2 -based electrode, only the insertion element was increased in the case of the Fe 2 O 3 electrode (Fig. 2c). The scan-rate-independent surface element of the TiO 2 -free Fe 2 O 3 electrode was approximately 25.2 F g À1 , while the insertion element was 35.6 F g À1 (58.1%) at 30 mV s À1 and 16.6 F g À1 (42.4%) at 100 mV s À1 . For the TiO 2 -Fe 2 O 3 electrode, the insertion elements were signicantly larger over the whole range of the scan rate, e.g. 47.2 F g À1 (78.0%) at 30 mV s À1 and 19.3 F g À1 (59.3%) at 100 mV s À1 , indicating that the incorporation of the TiO 2 NPs improved the diffusion of the charge carriers in the Fe 2 O 3 electrode. In contrast, the TiO 2 incorporation decreased the surface elements from 25.2 F g À1 to 13.3 F g À1 . The opposite effect of the TiO 2 incorporation on the surface elements of the MnO 2 and Fe 2 O 3 electrode is probably related to the change of their surface area. The BET isotherms, with an adsorption-desorption hysteresis loop as shown in Fig. S4, † indicate that the synthesized MnO 2 and Fe 2 O 3 NPs can be categorized as a mesoporous adsorbents. 40 This porous nature of the electrode materials is favourable for the application to supercapacitor electrodes due to the increased surface sites for the energy storage. [41][42][43][44] While the specic surface area of the MnO 2 NPs increased from 137.5 m 2 g À1 to 164.3 m 2 g À1 by the TiO 2 incorporation, that of the Fe 2 O 3 NPs decreased from 142.5 m 2 g À1 to 107.5 m 2 g À1 (Fig. 3c). 
Although it is not clear yet, the incorporation of TiO 2 NPs on the MnO 2 NPs with larger pore volume and mean pore diameter (Fig. 3d) might lead to more morphological hierarchization. On the other hand, the TiO 2 incorporation on Fe 2 O 3 NPs seemed to result in a reduction of contact sites with the electrolyte. All the deconvolution results are summarized in Tables S1 and S2. † Based on the results of the two composite TMO NPs-based single electrode cells, an asymmetric full-cell consisting of Fig. 4a and b show the CV contours of the TiO 2 -Fe 2 O 3 and TiO 2 -MnO 2 symmetric cell, respectively, at various scan rates from 10 mV s À1 to 100 mV s À1 . The calculated C sp values are plotted as a function of the scan rate (Fig. 4d). The C sp value of the TiO 2 -Fe 2 O 3 symmetric device was 35.5 F g À1 measured at a scan rate of 30 mV s À1 . This C sp value is approximately half the value of the TiO 2 -Fe 2 O 3 single electrode, 60.5 F g À1 , according to the following equation for the C sp calculation of symmetric or asymmetric devices, where m is the mass, C is the specic capacitance, and + or À denotes the values for positive and negative electrodes, respectively. The C sp value gradually decreased as the scan rate increased, and reached 24.3 F g À1 at 100 mV s À1 . For the TiO 2 -MnO 2 symmetric device, the C sp decreased from 73.7 F g À1 at 10 mV s À1 to 45.8 F g À1 at 100 mV s À1 . The C sp values of the symmetric cells at various scan rates are plotted in Fig. 4d and summarized in Table S3. † Prior to the assembly of the TiO 2 -Fe 2 O 3 kTiO 2 -MnO 2 ASCs, the mass ratio of the TiO 2 -Fe 2 O 3 and TiO 2 -MnO 2 NPs was determined by the C sp values of the Fe 2 O 3 -and MnO 2 -based symmetric cells using the following equation in order to make each electrode contribute equivalent capacitance, The largest C sp value of 72.7 F g À1 was obtained at a scan rate of 10 mV s À1 , and the value decreased to 45.6 F g À1 at 100 mV s À1 . This scan rate retention of 62.7% was larger than 51.8% for the TiO 2 -Fe 2 O 3 symmetric cell and 62.1% for the TiO 2 -MnO 2 symmetric cell, as shown in Fig. 4d, indicating a higher voltammetric response of the ASC. The capacitive performance of the Fe 2 O 3 -and MnO 2 -based symmetric and asymmetric cells were also estimated by galvanostatic charge-discharge (GCD) measurements shown in Fig. 5. The C sp value of the TiO 2 -Fe 2 O 3 symmetric cell was 12.8, 10.3, and 3.5 F g À1 measured at a current density of 1, 2 and 3 A g À1 , respectively (Fig. 5a). The considerably extended discharge time of the TiO 2 -MnO 2 symmetric and asymmetric cells (Fig. 5b and c) indicate the better capacitance of the MnO 2based electrode. The calculated C sp values of the TiO 2 -Fe 2 O 3 -kTiO 2 -MnO 2 ASC were 87.5, 77.7, and 33.6 F g À1 at a current density of 1, 2 and 5 A g À1 , respectively. The obtained C sp values at various current densities are summarized in Table S4. † In addition to the larger C sp value of the TiO 2 -Fe 2 O 3 kTiO 2 -MnO 2 ASC, the doubly extended voltage window contributed to the enhancement in the energy density of the device shown in the Ragone plots (Fig. 6a). The specic energy (E sp ) and specic power (P sp ) of the cells were calculated using the following equations, 45,46 where DV (V) is the applied voltage window and Dt (s) is the discharge time. As shown in the gure, the specic energies of the Fig. 6b. 
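The quoted energy and power figures can be cross-checked from the reported capacitance and voltage window. The short sketch below assumes the standard definitions Esp = Csp(ΔV)^2/2 and Psp = Esp/Δt together with an ideal triangular galvanostatic discharge (the equations cited to refs. 45 and 46 are not reproduced above, so these forms are an assumption); with Csp = 87.5 F g−1, ΔV = 2.0 V and a specific current of 1 A g−1 it returns roughly 48.6 W h kg−1 at roughly 1000 W kg−1, matching the Ragone point reported for the ASC.

```python
# Sketch: consistency check of the reported Ragone-plot point for the
# TiO2-Fe2O3 || TiO2-MnO2 ASC, assuming the standard definitions
# E_sp = C_sp * dV^2 / 2 and P_sp = E_sp / dt (the cited equations are
# not reproduced in the text, so these forms are an assumption).
def specific_energy_Wh_per_kg(c_sp_F_per_g, dV_V):
    # 0.5 * C * V^2 gives J/g; x1000 for J/kg, /3600 for Wh/kg.
    return 0.5 * c_sp_F_per_g * dV_V**2 * 1000.0 / 3600.0

def specific_power_W_per_kg(e_sp_Wh_per_kg, discharge_time_s):
    return e_sp_Wh_per_kg * 3600.0 / discharge_time_s

c_sp = 87.5       # F/g at 1 A/g (reported value)
dV = 2.0          # V, operating window of the ASC
i_density = 1.0   # A/g
dt = c_sp * dV / i_density          # ideal galvanostatic discharge time, s
e_sp = specific_energy_Wh_per_kg(c_sp, dV)
p_sp = specific_power_W_per_kg(e_sp, dt)
print(f"E_sp = {e_sp:.1f} Wh/kg, P_sp = {p_sp:.0f} W/kg")   # ~48.6 Wh/kg at ~1000 W/kg
```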
The cycle stability of the TiO2–Fe2O3‖TiO2–MnO2 ASC was also evaluated by repetitive GCD measurement cycles at a specific current of 2 A g−1 (Fig. 6c). The Csp retention after 2000 GCD cycles was 83.5% of the initial value. No apparent change in the surface morphology of the electrodes was observed after 2000 GCD cycles (Fig. S5†), also indicating the good structural stability of the TiO2–Fe2O3‖TiO2–MnO2 ASC electrodes. Conclusions An all-metal-oxide-NPs-based ASC with electrodes made of TiO2-NPs-incorporated Fe2O3 and MnO2 NPs was successfully fabricated. In the MnO2-NPs-based single electrode, the incorporation of the TiO2 NPs improved the specific capacitance from 114.0 F g−1 to 141.5 F g−1 at a scan rate of 30 mV s−1. The deconvolution of the capacitive elements indicated that this enhancement was attributable to an increase in both the surface element and the insertion element. However, for the Fe2O3-NPs-based electrode, the insertion element increased while the surface capacitive element was reduced, so that the total capacitance did not improve appreciably. An ASC composed of these two composite TMO electrodes, the TiO2–Fe2O3 and TiO2–MnO2 electrodes, was fabricated. The Csp value of the TiO2–Fe2O3‖TiO2–MnO2 ASC reached 72.7 F g−1 measured at a scan rate of 10 mV s−1, which is slightly lower than the value for the TiO2–MnO2‖TiO2–MnO2 symmetric capacitor (73.7 F g−1). However, the ASC could operate over a voltage window of 2.0 V, twice as large as that of the symmetric devices, which resulted in a significant enhancement in the specific energy of the device. The specific energy of the TiO2–Fe2O3‖TiO2–MnO2 ASC was 48.6 W h kg−1 at a specific power of 1000 W kg−1, while that of the TiO2–Fe2O3‖TiO2–Fe2O3 and TiO2–MnO2‖TiO2–MnO2 symmetric capacitors was 1.4 and 6.0 W h kg−1, respectively. The Csp retention of the TiO2–Fe2O3‖TiO2–MnO2 ASC was approximately 83.5% of the initial value after 2000 GCD cycles. All these benefits, i.e. the wide operating voltage window and comparable specific capacitance, demonstrate that an all-metal-oxide-NPs-based ASC is a promising architecture for high-energy-density supercapacitors. They also imply that the device performance can be further improved by optimizing the type and composition of the TMO NPs used. Conflicts of interest There are no conflicts to declare.
2019-10-10T09:11:39.840Z
2019-10-07T00:00:00.000
{ "year": 2019, "sha1": "fda89a7d61e137716fd9080e49350c6cefb9b02c", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2019/ra/c9ra06066a", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7b2dbc680f97755c6b326a4d2cec6113abe7d4b1", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
254743740
pes2o/s2orc
v3-fos-license
Finger Growth and Selection in a Poisson Field Solutions are found for the growth of infinitesimally thin, two-dimensional fingers governed by Poisson’s equation in a long strip. The analytical results determine the asymptotic paths selected by the fingers which compare well with the recent numerical results of Cohen and Rothman (J Stat Phys 167:703–712, 2017) for the case of two and three fingers. The generalisation of the method to an arbitrary number of fingers is presented and further results for four finger evolution given. The relation to the analogous problem of finger growth in a Laplacian field is also discussed. Introduction Systems in which an interface separating two different phases evolving in response to diffusion arise in many different scenarios over widely varying scales. Even in two dimensions the deforming interface frequently leads to complicated, often striking, patterns. Examples include Saffman-Taylor fingering [16], diffusion limited aggregation [18], the formation of ramified river valley networks by groundwater flow [6,15], combustion fronts [19], magnetic flux dendrite formation in superconductors [10], and growth of bacterial colonies [8]. In all these examples the interface is characterised by long narrow protrusions-fingers-of one phase penetrating the other. Even in the simplest mathematical framework with only one active phase whose diffusion is modelled by Laplace's equation, and when the interface velocity is proportional to the gradient of the phase, theoretical study of two-dimensional Laplacian growth and its resultant pattern formation involves consideration of difficult nonlinear free boundary problems. One assumption which enables progress is to assume that the fingers are infinitesimally thin and advance at their tips only with velocity proportional to the local gradient of the phase. In terms of complex analysis the fingers can be thought of as evolving slits in the complex plane. Communicated by Irene Giardina. This realisation, coupled with conformal-mapping methods, has resulted in considerable understanding of the Laplacian growth of thin fingers, or their straight-line counterpart: needles e.g. [1,5,17]. Laplace's equation is frequently used as a natural first approximation to more complicated physics when the interface dynamics is governed by a general diffusive-type PDE. Studies concerned with the latter are rare primarily because application of mathematical tools based on complex analytic methods are not immediately obvious for general PDEs. Some exceptions include [11,13,14]. The present work is a contribution to non-Laplacian growth in that it considers Poisson's equation as the governing PDE. This work considers a specific example of finger growth in a Poisson field (with constant right-hand side) applying to the growth of a stream network incised by groundwater flow. Laplacian growth models have been successful in describing many features of these networks: a prominent example of this being an explanation of the remarkable observation that the angle at which streams bifurcate is close to 2π/5 e.g. [6,7,15]. However, the groundwater flow field φ is more appropriately modelled by Poisson's equation, in non-dimensional form, Δφ = −1 where the constant right-hand side represents a constant source (precipitation). 
This paper derives explicit results for fingers growing in a Poisson field which agree with the recent numerical experiments of Cohen and Rothman [4] who grow two or three fingers in a long strip-like channel and find, interestingly, after carrying out a large number of numerical experiments that the fingers eventually grow parallel to the channel boundaries with the same well-defined spacing irrespective of their initial starting locations. Their method and results are discussed in Sect. 2. Here, analytical results using conformal mapping combined with the principle of local symmetry suitably modified to account for the non-zero right hand side of Poisson's equation are derived in Sect. 3 and compared to [4]. The relationship between the Poisson fingers to the analogous Laplacian case is discussed in Sect. 4. The general case of 2N -fingers is discussed in Sect. 5. Background: The Numerical Experiments of Cohen and Rothman Cohen and Rothman [4] consider the growth of either 2 or 3 infinitesimally thin fingers in a two-dimensional, narrow strip of width 2l and length L = 50l. The fingers penetrate the interior of the strip from one of the short sides, and their dynamics determined by solving Poisson's equation Δφ = −1, subject to φ = 0 on the fingers and all boundaries of the strip, except the side opposite from which the fingers grow where a zero flux condition ∂φ/∂n = 0 is imposed. Figure 1 shows the equivalent set-up to be considered here with l = 1 and complex coordinates z = x + iy chosen so that the centreline of the strip is aligned with y = 1 and the fingers grow from the edge aligned with the imaginary axis x = 0, 0 ≤ y ≤ 2. The fingers computed in [4] are grown at their tips with constant velocity in directions according to the principle of local symmetry. This involves numerically solving the Poisson equation in the slit domain at a given timestep and using the solution to find the leading terms in the local expansion of φ(x, y) near the tip of each finger: where d 1,2 are coefficients determined by the global numerical solution of the Poisson problem, r and θ are local polar coordinates such that θ = ±π coincides with the finger near its tip at r = 0. The principle of local symmetry requires that the fingers grows in a direction such that d 2 = 0 [3,7]. As noted in [4] while the path selection and growth mechanism of fingers is similar at a local level near finger tips for both harmonic (Laplacian) and Poisson fields, the trajectories will differ since the coefficients in (1) depend on the global field. Using the above procedure Cohen and Rothman [4] carry out a large number of experiments in which they initiate either 2 or 3 fingers at random positions y i along the edge at x = 0 and grow each at constant velocity in directions determined by the principle of local symmetry. Their Fig. 4 shows that after an initial adjustment of order the width of the strip the fingers grow approximately parallel to the long axis of the strip. The notable feature being that irrespective of the starting locations y i , the fingers asymptotically approach the same particular-selected-paths parallel to the along-strip axis. For 2 fingers, [4] find in 200 numerical experiments with differing initial conditions that the straight paths are symmetrically placed either side of the strip centreline with each finger a distance l w = 0.74 ± 0.027 from the edge of the strip (see Fig. 1 for the definition of l w ). 
For 3 fingers, the ultimate selected paths are again symmetric about a middle finger aligned with the strip centreline, with the other two fingers growing parallel at a distance l w = 0.60±0.031 from the outer edges of strip. In Sect. 3 the asymptotic length scales l w are obtained explicitly using conformal mapping combined with the principle of local symmetry suitably modified to account for Poisson's equation. Derivation of Asymptotic Finger Paths Problem formulation: two finger case Motivated by [4] it is assumed that the selected paths are parallel to the long sides of the strip, are symmetric about the strip centreline and that the fingers themselves are long compared to the strip width, but short compared to the length L of the strip. Thus the problem is approximated by semi-infinite straight fingers in an infinite strip (L → ∞) as shown in the z-plane sketch of Fig. 2. Symmetry about y = 1 is assumed so that only half the strip width 0 ≤ y ≤ 1 need be considered. The mathematical task is to find y a = l w , by solving Poisson's equation in the cut strip, subject to φ = 0 along y = 0 and on the finger itself along y = y a , φ y = 0 along y = 1 and φ x = 0 for |x| → ∞, and that the principle of local symmetry holds at the finger's tip. Let The sequence of conformal maps from the z to the w-plane and then ζ -plane as indicated by the arrows. The dashed line indicates the portion of the boundary in which the zero flux boundary condition applies. The finger tip z a is mapped to w = a and then ζ = ζ a with these locations indicated by small dots so that in the cut strip Δψ = 0, and ψ = 0 on y = 0, ψ = c = y 2 a /2 − y a on the finger along y = y a , ψ y = 0 on y = 1 and ψ x = 0 as |x| → ∞. Since ψ is harmonic, the solution of the boundary problem is facilitated using conformal mapping. The z-plane cut strip geometry is a degenerate polygon and can be mapped to the upper half of the w = u + iv plane by a Schwarz-Christoffel map-see Fig. 2. The map is where a is a real parameter such that |a| < 1 and maps to the tip of the finger at z = z a where y a = (1 − a)/2 (e.g. [12], p. 155). The boundary conditions along the real axis of the w-plane are ψ = c for |u| ≤ 1, ψ = 0 for u > 1, ψ v = 0 for u < −1. Additionally, ∇ψ → 0 for w → ∞. A further map to the ζ -plane given by ζ = (1 + w) 1/2 maps the upper half of the w-plane to the first quadrant of the ζ -plane, with the zero flux boundary condition being mapped to the positive imaginary axis in the ζ -plane i.e. ψ ξ = 0 on ξ = 0, η > 0, where ζ = ξ + iη. This boundary condition implies that it is sufficient to seek a solution to Laplace's equation which is symmetric about the imaginary axis in the upper half of the ζ -plane, with boundary conditions on the real axis Fig. 2. The image of the finger tip z a in the ζ -plane is denoted ζ a . The solution in the ζ -plane satisfying the zero flux condition at infinity is ψ = ImF(ζ ) where and hence that h (ζ a ) = 0 as expected. Further differentiation yields be a small displacement that a general point z is from the finger tip z a , and = ζ − ζ a be the corresponding increment in the ζ -plane. Using results in [2] based on Taylor expansion, (δ) can be expanded as series Expanding the difference in complex potential as a power series in using solution (4) gives where In the Laplacian case the principle of local symmetry states that the coefficient of δ vanishes in the expansion (8) [3]. 
Effectively this guarantees that the path taken by the finger is such that it maintains local symmetry in the potential field about the tip and is equivalent to maximising the flux into the tip [4,7]. However in the Poisson case this must be modified owing to the contribution of the term (2) that has been added to ψ needed to satisfy the right hand side of Poisson's equation. In this problem the term is given by y − y 2 /2. Now, compare (8) with (1) and identify d 2 = β 1 α 2 + β 2 α 2 1 with the gradient of the potential field (the imaginary part of F(z)) near the finger tip in the direction orthogonal to it (here the imaginary i.e. y-direction). In the Poisson case in order that the field be symmetric about the tip this gradient must balance the gradient of the term y − y 2 /2 in the same direction: V a = 1 − y a . Thus the principle of local symmetry becomes Substituting (6), (7) and (9) into (10) and simplifying gives an algebraic equation for a: with solution a ≈ −0.48099 on the permitted interval |a| ≤ 1. This in turn gives y a = l w ≈ 0.74 in agreement with [4] who found l w = 0.74 ± 0.027. Three finger case The case of 3 fingers proceeds similarly to the above, the primary difference being the application of different boundary conditions on separate portions of the axis of symmetry y = 1 owing to the presence of a growing finger along a semi-infinite part of y = 1. It is assumed that this middle finger is of the same length of the other two fingers and Fig. 3 The sequence of conformal maps for the three finger case from the z to the w-plane and then ζ -plane as indicated by the arrows. The dashed line indicates the portion of the boundary in which the zero flux boundary condition applies. The finger tip z a is mapped to w = a and then ζ = ζ a , while the finger tip along the symmetry axis is mapped to w = −b and then ζ = 0; both are indicated by the small dots so its tip is at x a + i while the tip of the other finger in the domain is x a + iy a . Figure 3 shows the situation. The boundary condition along y = 1 is then: ψ = φ − y + y 2 /2 = −1/2 along the finger x ≤ x a , and ψ y = 0 for x > x a . The same conformal map used in the two finger analysis maps the z-plane to the w-plane with z b = x a + i being mapped to w = −b, where b > 1. There are two unknown parameters to be determined: a and b. In the w-plane the boundary conditions are mapped to the real axis and are ψ = c for |u| ≤ 1; ψ = 0 for u > 1; ψ = −1/2 for −b ≤ u < −1; ψ v = 0 for u < −b. Again, the upper half of the w-plane is subsequently mapped to the first quadrant of the ζ -plane via the map ζ = (w + b) 1/2 which has the effect of mapping the zero flux boundary condition to the positive imaginary axis in the ζ -plane. Extending the region of consideration to the upper half of the ζ -plane and demanding the solution is symmetric about the imaginary axis results in results in the following boundary conditions along the ξ axis: ψ = 0 for ξ > (b + 1) 1/2 ; ψ = c for (b − 1) 1/2 < |ξ | < (b + 1) 1/2 ; ψ = −1/2 for |u| < (b − 1) −1/2 ; and that the gradient of ψ vanishes at infinity-see Fig. 3. The solution for ψ in the ζ -plane is ψ = Im f (ζ ) where Note in this case z a is mapped successively to w = a and then ζ a = (a + b) 1/2 . Let z = h(ζ ) be the composite map from the ζ -plane to the z-plane. 
Computing the first three derivatives and evaluating at ζ a gives as expected h (ζ a ) = 0, and Expanding the difference in complex potential F(z) (12) evaluated at the points z and z a as a power series in of the same form as (8) with now In the limit b → 1 (14) reduces to (9); physically in this limit the length of the middle finger becomes negligible compared to the other two fingers and the two finger case is recovered. For the three finger case, the additional unknown parameter b is found by assuming that the length of the middle finger is the same as the two fingers either side, this being consistent with the numerical experiments [4] in which fingers are grown at constant speed. Hence the Schwarz-Christoffel map (3) upon equating real parts for the tips of the fingers gives the following relation between a and b: Substituting (13) into the local symmetry condition (10) gives where the coefficients β 1,2 are given by (14). Equations (15) and (16) are a pair of coupled nonlinear equations for a and b which can be solved numerically e.g. using matlab's vpasolve routine. The permissible solution is a ≈ −0.2213 and b ≈ 1.2904, leading to y a = l w ≈ 0.61 for the three finger case. This is again in good agreement with [4] who find l w ≈ 0.60±0.031. Relation to the Laplacian Growth Case A similar method to that used in Sect. 3 can be used to find the asymptotic paths selected by fingers growing in a semi-infinite strip which are governed by Laplace's equation Δφ = 0. In this case the flux needed to drive the finger growth is provided at infinity. Paths can be found using two alternative methods: the first methods proceeds by solving the chordal Loewner equation for slit evolution in the upper half of the w-plane and then mapping the paths to the strip domain. This is done below and then the result compared to that using the method used in Sect. 3. The Laplacian paths which obey the principle of local symmetry ('geodesic' Loewner paths) for two fingers growing from the real axis into the upper half ζ * -plane while maintaining symmetry about the imaginary axis are known exactly [9]. After initial adjustment they approach straight line paths diverging with an angle π/5 between them. That is, fingers, on which φ = 0, asymptote toward the rays r exp(2π/5) and r exp(3π/5) where r and θ are polar coordinates with θ = 0, π coinciding with the real w-axis along which φ = 0. The far-field condition used in deriving the solution is that used in standard Loewner growth: φ → Im(ζ * ) as ζ * → ∞. To find the equivalent paths in a semi-infinite strip (the z-plane) with φ = 0 on all boundaries, the above finger trajectories (assumed to emanate from the interval |Rew| ≤ 1) are simply mapped to the z-plane using the map ζ * = cosh(π z/2) which takes the upper half of the ζ * -plane to the semi-infinite strip 0 ≤ y ≤ 2 and x > 0. Interest here is on the far-field paths, one branch of which tends to and so y → 4/5, a path a distance 0.2 from the centreline y = 1. The other path is symmetrically placed the other side of the strip centreline. The method of Sect. 3 is now used to reproduce this result: two fingers are assumed to grow parallel and equidistant from the strip centreline as in Fig. 2. The condition ψ = c is replaced by ψ = 0 in the Laplacian case since there is no difference between φ and ψ. The primary difference is instead of the vanishing flux condition at infinity used in Sect 2, the far-field condition in the z-plane is now φ → Im[ 1 2 exp(π z/2)]. 
This condition comes from the map ζ * = cosh(π z/2) applied to the standard chordal Loewner far-field condition φ → Imζ * as ζ * → ∞. After the same successive transformations from the z to w and then ζ -planes described in Sect 3, the problem in the ζ -plane then becomes that of finding a harmonic field ψ which vanishes on the real axis. Note that in this sequence of transformations the far-field behaves as z → (1/π) log w → (2/π) log ζ . Hence in the ζ -plane, ψ tends to (1/2)Imζ at infinity, and thus ψ = ImF(z) where F(z) = ζ /2 is the solution. Expanding F(z) − F(z a ) in the same way as (8) gives β 1 = 1/2 and β 2 = 0, for which in turn the principle of local symmetry (10) with V = 0 immediately gives a = −3/5 or y a = l w = 4/5. Essentially this symmetry principle is equivalent to demanding that the third derivative (6) h (ζ a ) = 0-see also [2]). The fingers grow parallel and distance 0.2 either side of the centreline in agreement to the above argument based on mapping the solution [9] for 2 fingers growing in the half-plane. General Case: 2N Fingers This section generalises the method of Sect. 3 to determine the paths selected by 2N parallel fingers propagating along the strip i.e. N fingers placed symmetrically either side of the centreline. It is possible to also consider the 2N + 1 finger case with the middle finger propagating along the centreline but the details are not presented here. Let z i = x i + iy i , i = 1, . . . , N be the tips of the N fingers arranged with increasing imaginary part i.e. 0 < y 1 < · · · < y N < 1. It is assumed that the fingers are of equal length: x i = x j for all i and j. On each finger ψ = c i = y 2 i /2 − y i and, as before, ψ = 0 on y = 0 and ψ y = 0 on the centreline y = 1. The Schwarz-Christoffel map from the upper half of the w-plane to the z-plane z = f (w) can be constructed from the primitive where w = a i , i = 1, . . . , N , are real parameters such that z i = f (a i ), and b i are real parameters which map to x → −∞. The parameters are ordered according a i > b i+1 > a i+i , i = 1, N − 1. Figure 4 shows the finger arrangement in the z-plane and the sequence of a i and b i on the real w-axis. Further, define b 1 = 1 and b N +1 = −1. The parameters a i , i = 1, . . . , N and b i , i = 2, . . . , N represent 2N − 1 unknown real parameters which need to be found. Writing (18) as partial fractions and integrating gives z = f (w) The distances y i of the fingers from the strip boundary y = 0 are found by considering the imaginary part of the logarithms in (19) Demanding that all N fingers have the same length gives N − 1 equations of the form A further N equations for the unknown parameters are obtained by considering the principle of local symmetry at each of the N finger tips. As in Sect. 3, the upper half of the w-plane is mapped via a square root ζ = (w + 1) 1/2 to the first quadrant on the ζ -plane which is subsequently extended to the entire upper half ζ -plane by virtue of the symmetry condition along the imaginary ζ -axis. The complex potential in the ζ -plane is then where c 0 = 0 is understood. Now considering the principle of local symmetry at each tip requires calculation of the expansion F(ζ a i + ) − F(ζ a i ) = β 1i + β 2i 2 · · · = d 1i δ 1/2 + d 2i δ + · · · , where β 1i α 2i + β 2i α 2 1i = d 2i . The first two terms in the expansion of F(z a i + ) − F(z a i ) using (23) are and the coefficients α 1i and α 2i have the same form as (7) and are calculated from the map (19). 
Once the expressions d 2i are found then local symmetry demands where which in turn from (20) and (21) determines the elevation of each of the fingers: y 1 ≈ 0.53 and y 2 ≈ 0.85. Concluding Remarks Explicit solutions for the asymptotic paths selected by fingers in a Poisson field have been found which compare well with the equivalent paths recently found numerically [4]. The method relies on finding a particular solution for the Poisson field, and then using conformal mapping on the resulting harmonic problem to find the local field in the vicinity of the fingertips. The principle of local symmetry, suitably modified to account for the particular solution needed to account for the Poisson forcing, determines the unknown parameters of the mapping enabling the paths selected by the fingers to be calculated. To the author's knowledge these are the first exact solutions, albeit applying to the asymptotic idealisation to the finite length strip geometry, for the non-Laplacian growth of infinitesimally thin fingers. It is of interest to see if a similar approach can be usefully employed in other geometries and boundary conditions, and for other forms of Poisson forcing which may involve either non-constant or time varying functions. Beyond Poisson's equation, other elliptic PDEs are relevant to modelling other physical processes and studying their fingering patterns is also of interest e.g. stream networks formed by the non-Laplacian flow of groundwater with spatially varying diffusivity. While the fingers here evolve steadily in a fixed direction, the time-varying problem in which they take curved paths requires study. The classical approach to studying slit evolution in the half-plane involves solution of the chordal Loewner equation. In Laplacian growth this involves so-called geodesic dynamics [1,7,9,17] in which the Loewner forcing function has a precise form and results in paths equivalent to those which evolve according to the principle of local symmetry [7]. How is this Loewner evolution modified for Poisson growth? Even more fundamentally, it is interesting to speculate on whether it is possible to derive a Loewner-type equation governing slit evolution in a Poisson field. The approach taken here of finding a particular solution for the inhomogeneous Poisson term results in a harmonic problem for the potential ψ to be solved subject to ψ = 0 on the fingers. Approximating the fingers as straight needles evolving parallel to the strip meant here that ψ = constant on the fingers, but in general, ψ = ψ 0 (x, y) on curved fingers where ψ 0 is a given function depending on the choice of particular solution. It is an open problem to incorporate boundary conditions more general than ψ = 0 on fingers in Loewner theory. Another approximation employed here which does not necessarily apply in more realistic scenarios is the assumption that the different fingers grow with the same velocity. In contrast, it is well-known that interacting fingers and needles grow competitively with velocities proportional to the local gradient of the phase field at their tips [1,5], with the effect that longer fingers tend to grow more rapidly than shorter fingers. This effect, referred to as screening, plays a key role in determining patterns selected in finger growth [1,5,9]. Section 4 shows the paths selected by Laplacian fingers are different to those than in the Poisson case. 
There is no continuous parameter which connects the two limits since the forcing is different: in the Laplacian case there is a flux from infinity, whereas in the Poisson case the forcing is uniform over the whole domain. However, by combining the two different forcings in an appropriate proportion related to the constant on the right-hand side of the Poisson equation it may be possible to connect them continuously, though this set-up is perhaps physically difficult to justify. From the viewpoint of stream network development, use of the Laplace approximation has been highly successful in explaining phenomena which act on a local level (e.g. stream bifurcation), but the same approximation should be used with caution over larger scales when streams interact with each other and with far-field boundary conditions.
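As a small numerical illustration of the selected offsets discussed above, the sketch below evaluates the two-finger Poisson offset y_a = (1 − a)/2 at the root a ≈ −0.48099 quoted in Sect. 3 (the algebraic equation for a is not reproduced in the text above, so the root is taken as given rather than re-derived), and maps the Laplacian geodesic ray arg ζ* = 2π/5 into the strip through the inverse of ζ* = cosh(πz/2), confirming that Im(z) approaches 4/5 far from the tip.

```python
# Sketch: two quick numerical checks of the asymptotic offsets quoted above.
# (a) Two-finger Poisson case: y_a = (1 - a)/2 at the root a = -0.48099
#     reported in Sect. 3 (the root is taken from the text, not re-derived).
# (b) Laplacian case: the geodesic ray arg(zeta*) = 2*pi/5 mapped to the strip
#     by z = (2/pi) * acosh(zeta*) should approach Im(z) = 4/5 far from the tip.
import cmath
import math

a = -0.48099
print(f"Poisson two-finger offset y_a = (1 - a)/2 = {(1 - a) / 2:.3f}")   # ~0.74

for r in (10.0, 100.0, 1000.0):
    zeta_star = r * cmath.exp(1j * 2 * math.pi / 5)
    z = (2 / math.pi) * cmath.acosh(zeta_star)     # inverse of zeta* = cosh(pi*z/2)
    print(f"r = {r:7.1f}:  Im(z) = {z.imag:.4f}")  # tends to 0.8 = 4/5
```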
2022-12-17T14:12:29.926Z
2019-11-28T00:00:00.000
{ "year": 2019, "sha1": "1c1967b03680c45dee9a73d6058a8c6fcfaead5a", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10955-019-02454-6.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "1c1967b03680c45dee9a73d6058a8c6fcfaead5a", "s2fieldsofstudy": [ "Physics", "Mathematics" ], "extfieldsofstudy": [] }
248050524
pes2o/s2orc
v3-fos-license
Treatment of primary epiglottis collapse in OSA in adults with glossoepiglottopexy: a 5-year experience SUMMARY Objective To review our 5-year experience with a modified version of glossoepiglottopexy for treatment of obstructive sleep apnoea syndrome (OSA) in two hospitals. Methods A retrospective analysis was carried out on a cohort of adult patients affected by OSA suffering from primary collapse of the epiglottis who underwent a modified glossoepiglottopexy. All patients underwent drug-induced sleep endoscopy, polysomnographic and swallowing evaluation, and assessment with the Epworth Sleepiness Scale (ESS). Results Forty-nine patients were retrospectively evaluated. Both the apnoea-hypopnoea index (AHI) (median AHIpost-AHIpre = -22.4 events/h; p < 0.001) and oxygen desaturation index (ODI) showed a significant postoperative decrease (median ODIpost-ODIpre = -18 events/h; p < 0.001), as did hypoxaemia index (median T90% post-T90% pre = -5%; p < 0.001). The ESS questionnaire revealed a significant decrease in postoperative scores (median ESSpost-ESSpre =- 9; p < 0.001). None of the patients developed postoperative dysphagia. Conclusions Our 5-year experience demonstrates that modified glossoepiglottopexy is a safe and reliable surgical technique for treatment of primary epiglottic collapse in OSA patients. Introduction Obstructive sleep apnoea syndrome (OSA) is a prevalent disease affecting around 20% of the population (reaching up to 60% in those over 65 years) 1 with potentially life-threatening consequences as it is associated with other comorbidities such as cardiovascular events, neurocognitive impairment and stroke. The gold standard in the management of upper airway collapse in OSA is continuous positive airway pressure (CPAP). Despite its proven efficacy, a significant number of patients cannot tolerate this form of treatment and seek other alternatives. In this regard, surgery has a role in non-compliant patients who are not willing to receive CPAP therapy and in cases where CPAP therapy fails to restore normal breathing 2 . In this setting, the effectiveness of surgical treatment in reducing OSA-related cardiovascular morbidity and mortality has been demonstrated 3 . In recent years, the role of surgery in the management of OSA has been evolving: the development of new techniques, together with a better understanding of their indications, has allowed for more precise selection of patients in order to tailor surgery to the multitude of anatomical conditions that characterise OSA 4 . When choosing the right technique, structured pre-operative work-up is essential: drug-induced sleep endoscopy (DISE) is crucial, as it is the only exam that can recreate sleeping conditions and allow surgeons to identify the site and mechanism of obstruction 5 . Moreover, DISE facilitates the recognition of obstructive patterns that are not evident during awake flexible transnasal video endoscopy, such as epiglottic collapse 6 . Epiglottic collapse is well known in the literature as one of the possible conditions implicated in paediatric laryngomalacia. In adults, while laryngomalacia is more of an anecdotic entity that has not been clearly described, primary collapse of the epiglottis can be related to treatment failure of CPAP 3 . This happens as CPAP-generated airflow pushes down the epiglottis, closing the laryngeal aditus, and consequently worsening the airway obstruction 7 . 
Non-surgical treatment has been demonstrated to be inadequate to treat this condition, with mandibular advancement devices (MADs) leading to disappointing results 8 . In this respect, a surgical modification of the Monnier's glossoepiglottopexy was developed and published with preliminary data in 2017, demonstrating encouraging results 7 . Herein, we describe our experience with this technique together with the results obtained in a cohort of patients suffering from primary collapse of the epiglottis treated at two hospitals. Study design and population A retrospective multicentric analysis was carried out on a period from January 1, 2015 to December 31, 2019 on a cohort of patients affected by OSA who underwent glossoepiglottopexy at the Department of Otorhinolaryngology and Head and Neck Surgery of the University of Genoa and the Department of Otorhinolaryngology -Fabrizio Spaziani Hospital in Frosinone. For patients in whom multilevel surgery was performed, all procedures were considered in the analysis. Tonsillectomy was carried out by preserving the palatoglossus and palatopharyngeus muscles. Septoplasty and turbinoplasty were performed under rhinoscopy 9 . Non-resective pharyngoplasty was performed using the barbed suspension pharyngoplasty technique (BSP) 9 , barbed reposition pharyngoplasty (BRP) 10 , functional expansion pharyngoplasty (FEP) 11 , modified expansion sphincter pharyngoplasty (MESP) 12 , or barbed anterior pharyngoplasty (BAPh) 13 . Inclusion criteria were: OSA confirmed by polysomnography study with an apnoea-hypopnoea index (AHI) > 15 episodes/h; body mass index (BMI) < 35 kg/m 2 ; primary epiglottic collapse diagnosed by DISE; and ease of laryngeal and oropharyngeal exposure (Laryngoscore < 6). The latter was evaluated with the Laryngoscore instrument, which is commonly employed in our hospitals to select patients for transoral procedures 14 . As is standard policy at our clinics, we strongly suggest CPAP therapy, especially to patients with an AHI > 30, and reserve surgical treatment only for those who are not compliant or who do not respond to non-surgical therapy. Major comorbidities, severe tongue base hypertrophy, cranio-facial malformations, laryngeal dysfunction (swallowing, motility disorders, and laryngeal stenosis) and other sleep-related disorders were considered exclusion criteria. All patients underwent a standard preoperative evaluation protocol consisting of awake flexible transnasal video endoscopy, polysomnographic assessment and DISE evaluation. Postoperative evaluation was performed at 6 months after surgery by clinical, polysomnographic and endoscopic assessment. All data were extracted from a single database. Clinical evaluation Preoperatively, all patients underwent thorough otolaryngologic physical examination. Clinical history was collected focusing on sleep habits and sleep disturbances. BMI was also reported. The Epworth Sleepiness Scale (ESS) was used to rate daytime sleepiness 15 , while swallowing function was assessed by the Eating Assessment Tool (EAT 10) 16 and the penetration aspiration scale 17 . Respiratory polygraphic study All patients underwent a sleep study with cardiorespiratory monitoring (Vital night, Vital aire, Milan Italy). The cardiorespiratory analysis comprised nocturnal snoring sound, arterial oxygen saturation, body position, nasal and mouth airflow, thoracic and abdominal respiratory movements and heart rate. 
To determine the severity of sleep apnoea, we considered the AHI, oxygen desaturation index (ODI) and T < 90% (percent of total time with oxygen saturation less than 90%). Drug-induced sleep endoscopy At both centres, DISE was performed with the patient in a supine position. Transnasal flexible endoscopy was performed using a high definition fibreoptic videoendoscope connected to an Evis Exera II CLV-180B light source (Olympus Medical Systems Corporation, Tokyo, Japan). During the examination, head rotation and positioning of the patient in lateral decubitus were performed to assess the positional component of the collapse. Moreover, chin lift and mandibular pull up manoeuvres were routinely performed to evaluate the possible benefit in applying a MAD. Midazolam was administered with intravenous repeated bolus in a range of 1-3 mg, while propofol was administered intravenously via target-controlled infusion. At the end of the procedure, flumazenil was used to antagonise the effects of midazolam. The use of a low dose of midazolam and propofol allows sedation to be as physiologic as possible, with snoring, apnoea events, controlled desaturations and rapid recovery 18 . The NOHL classification was used to assess the obstruction severity at multiple levels 19 . Surgical glossoepyglottopexy The glossoepyglottopexy procedure has been previously described 7 ; the main steps are briefly summarised as follows. The procedure is carried out under microlaryngoscopy with the patient lying in Boyce-Jackson's position; firstly, the surgeon with a Sataloff laryngoscope (Microfrance Sataloff Laryngoscopes 124, Medtronic ENT, Jacksonville FL USA) exposes the base of the tongue, the entire valleculae and the epiglottis. Secondly, with a CO 2 laser (Ultrapulse Dualpro Laser CO 2 , Lumenis, Yokneam, Israel) coupled with a microscope, the operator vaporises the mucosa overlying the valleculae and the base of the tongue. Finally, from outside of the neck, two 16-gauge needles are inserted projecting out of the valleculae, serving as a guide to apply, through a loop, number 1 Premilene ® sutures (Premilene, Braun, Melsungen Germany) that embrace the hyoid bone and stitch the lingual surface of the epiglottis to the base of the tongue. Both wires are then fixed outside of the neck, anteriorly to the larynx, using a Silastic sheet to protect the skin from local trauma. Outcome evaluation Surgical success was evaluated at least 6 months after surgery, performing a respiratory polysomnographic study and repeating the ESS questionnaire 15 . Criteria for evaluation of the outcomes are in agreement with Montevecchi et al. 20 . Cured: AHI < 5 and ESS < 10 and reduction of both >50%. Success: AHI < 20 and ESS < 10 and reduction of both > 50%. Failure: AHI > 20 and any ESS value and reduction of both < 50%. Statistical analysis The results are expressed as mean ± standard deviation, median, or percentage. Sample size was calculated by assuming effect size = 0.45, α = 0.05, power (1-β error probability) = 0.80 for a two-tailed paired test. With these parameters, a minimum sample size of 43 patients was required. The Shapiro-Wilk test was used to assess normal distributions of continuous variables. Categorical variables were analysed with χ 2 test or Fisher's exact test as appropriate. Comparisons between continuous variables were performed with the Mann-Whitney-Wilcoxon rank sum test. 
The evaluation of the continuous variables before and after treatment was carried out using the Wilcoxon signed-rank test, plotting them with paired boxplot and adding lines to points for each patient to show the trend change. Considering the binary successful outcome, pre-treatment clinical and polysomnographic covariates were investigated with univariable and multivariable logistic regression models. Statistical significance was assumed in each test with a two-tailed p value < 0.05. Statistical analysis was carried out using the R software/environment (version 3.6.3; R Foundation for Statistical Computing. Vienna, Austria). Results From January 2015 to December 2019, 49 patients affected by OSA with primary epiglottic collapse underwent glossoepiglottopexy. By DISE, the most frequent pattern of epiglottic collapse was complete anteroposterior collapse, followed by partial anteroposterior collapse; in our series, no patients showed a lateral collapse of the epiglottis. In 37 patients (75.5%), concomitant to glossoepiglottopexy, a non-resective pharyngoplasty was also performed according to the palatal collapse. Furthermore, tonsillectomy was performed in 26 patients (53.1%), and septoplasty and turbinoplasty in 9 patients (18.4%). The main patient characteristics are reported in Table I Complications occurred in 2 of 49 procedures: one patient had suture breakage at 7 days post-operatively, but flexible transnasal video endoscopic control revealed the stability of the glossoepiglottopexy, while the other had the epiglottis lacerated by the sutures that were probably placed too high. Neither bleeding, dysphagia, nor aspiration occurred in any case, and postoperative AHI improved from 66 to 10 and from 37 to 9, respectively. Comparisons between paired parameters measured before and after surgery are reported in Table II, Clinical swallow evaluation was negative and EAT-10 scored 0 in all patients after surgery. All patients received a score of 1 at the penetration-aspiration scale evaluation postoperatively. Post-treatment AHI was < 5 in 40.8% of patients (n = 20) and postoperative ESS scored less than 7 points in 89.8% of cases (n = 44). Considering the criteria for outcome evaluation, a successful procedure was obtained in 34 cases (69%) and failure in 15 (31%); notably, 8 of these failures were due to ESS post > 10 (1 patient) or its improvement < 50% (7 patients); on the other hand, only 4 failures were due to an AHI post > 20, as shown in Figure 2. The analysis of pre-treatment clinical and polysomnographic covariates showed that higher ESS pre values were associated with a higher chance of successful surgery at both univariable (OR = 1.24, 95% CI 1.08-1.47, p = 0.005) and multivariable analysis (OR = 1.22, 95% CI 1.06-1.46, p = 0.013), whereas older age was related to a lesser chance of success (OR = 0.94, 95% CI 0.88-0.99, p = 0.035), which was not confirmed after adjustment for the ESS effect (p = 0.152; Tab. III, Fig. 3). None of the other clinical variables were associated with post-treatment outcomes. Discussion In adult patients suffering from OSA, collapse of the epiglottis may be primary or secondary: the latter occurs when a bulky tongue base pushes the epiglottis backward. 
On the other hand, primary collapse of the epiglottis may result from an altered conformation of the epiglottis (cartilage deformation due to pharyngeal wall compression during sleep or laxity of the glossoepiglottic ligament) in combination with the high negative intrathoracic pressure generated during obstructive events 7 . This deformity of the epiglottis can be congenital or created by the pressure of parapharyngeal fat pads and chronic collapse of the retroglottal airway. In OSA patients, both medical and surgical treatment must be tailored to the physiopathology of the individual case. New tools have been developed to improve patient selection and the therapeutic solutions that can be offered. One such tool is DISE, which allows patterns of collapse to be accurately identified that cannot otherwise be seen with awake investigations. In awake findings, the laryngeal obstruction may only be hypothesised as being due to the deformed epiglottic shape, while the direct visualisation of the laryngeal collapse in many cases is possible only during the sedated state 21 . The introduction of DISE in the diagnostic routine has revealed the high incidence of laryngeal obstruction due to primary and secondary collapse of the epiglottis, as widely documented in the literature, reaching up to 73% of cases 3,6 . OSA is a disease with multiple treatment modalities including positive airway pressure devices, surgery, behavioural treatment and oral appliances 3 , but when approaching epiglottic collapse the therapeutic choices are limited. CPAP has demonstrated suboptimal results in these patients, who often require higher pressures to be effective, with possible consequences on patient compliance 3,24 . (Figure 3 caption: Estimates of success probability with 95% CI bands according to univariable or multivariable logistic regression models presented in Table III.) In order to treat primary epiglottic collapse without ablating it, a surgical technique called epiglottis stiffening was recently developed: it consists of cauterising the lower half of the lingual side of the epiglottis in the area between the lateral glossoepiglottic folds, avoiding reaching the free margin of the epiglottis itself 25 . By maintaining the free edge unharmed, the risk of dysphagia and aspiration is overcome. However, in our view, adding two nylon sutures to embrace the hyoid bone ensures safer and longer-lasting results, while maintaining the important epiglottic function of glottic plane protection. In the present work, this technique was demonstrated to be a safe and reliable choice to treat primary collapse of the epiglottis in OSA patients. In our series, the complication rate was very low (4%) and none of these cases experienced dysphagia or inhalation. Regarding outcomes, our results showed significant (p < 0.001) post-operative improvement of AHI, ODI, T 90% and ESS after surgical treatment, with mean values approaching those of the normal population. In our opinion, these results are consistently achievable if reasonable selection criteria are respected. As a general rule, patients should not be affected by severe OSA with an AHI > 30 and should also not be obese: otherwise, CPAP should always be advised in the first instance and surgical treatment should be reserved only for those who do not respond to or who refuse treatment with positive airway pressure. Further research needs to be carried out to determine if glossoepiglottopexy can enhance compliance and increase the effectiveness of CPAP in this category of patients.
In our cohort, considering pre-treatment clinical and polysomnographic variables, ESS score was the only covariate that independently associated with surgical outcomes as previously observed in patients treated with palatal or lateral oropharyngeal collapse 4 . This finding may be explained by the role of ESS reduction in defining successful treatment in the applied classification 20 . In 7 (47%) of our patients, the definition of failure was due only to not achieving ESS reduction < 50%, despite having an ESS post < 10 and obtaining recovery of AHI parameters. In fact, low values of baseline ESS in non-symptomatic patients are less likely to change, despite recovery by polygraphy. These findings may help to improve this classification in defining surgical outcomes. Finally, we acknowledge that the present study has intrinsic limitations considering its retrospective nature. Moreover, the cohort of patients analysed is limited, even though to our knowledge it is one of the largest case series present in the literature on this specific category of patients 3 . In fact, OSA is a complex disease, and many factors can contribute to upper airway collapse. A clear understanding of such a complex condition, by identifying the key factors that play a role in the different upper airway obstruction patterns can aid in the definition of better therapeutic protocols and targeted compound surgical strategies. Conclusions Our 5-year experience demonstrates that glossoepiglottopexy is a safe and reliable surgical technique for the treatment of primary epiglottic collapse in OSA patients. By not producing major anatomical alterations, it maintains oropharyngeal and laryngeal functions. In addition, it is capable of improving the main polysomnographic parameters and has the potential to increase compliance and efficacy of positive airway pressure devices in this category of patients.
The learning curve and difficult points of the O-RADS ultrasound risk stratification system in 54 trainees Purpose This study aimed to evaluate the learning curve and explore the difficult points of the Ovarian-Adnexal Reporting and Data System (O-RADS) ultrasound risk stratification system. Methods One hundred adnexal masses (AMs) were randomly selected for five tests as training data. Two experienced trainers had an inter-rater agreement of 0.95 for the O-RADS scores. Fifty-four trainees (26 level I practitioners [group 1], 17 level II practitioners [group 2], and 11 experienced level II practitioners [group 3]) attended the training. Every trainee received assessment and feedback after 20 scored cases. The outcomes of the five tests were compared among the three groups using repeated-measurements analysis of variance. Results Of the 100 AMs, 52 were pathologically benign and 48 were malignant; the O-RADS scores were 2, 3, 4, and 5 in 22, 11, 48, and 19 AMs, respectively. The between-subjects effects test showed no significant differences between groups 1, 2, and 3 for the five tests (P=0.501). For each group, the differences among the five tests were significant (P<0.001, P=0.006, and P=0.044 for groups 1, 2, and 3, respectively). Test 2 was the worst. In 23 cases, more than 40% of trainees gave incorrect answers, which mainly related to classic benign lesions, the color flow score, and solid-appearing masses. Conclusion After training, junior doctors at different levels can reach a coincident O-RADS ultrasound risk stratification. The difficulties primarily related to subjective judgments of classic benign lesions, the color flow score, and solid-appearing masses. More experience is needed to improve the applicability of the system. Introduction Ultrasound (US) imaging is the first choice to describe ovarian adnexal masses (AMs) and estimate their malignancy risk [1]. US is low-cost and easily accessible, but highly operator-dependent. To improve the malignancy risk estimate and the management of AMs, many guidelines and structured reporting systems have been established, using subjective assessments, simple scoring, or statistically derived scoring [2-10]. The Ovarian-Adnexal Reporting and Data System (O-RADS) ultrasound risk stratification and management system is the only lexicon and classification system encompassing all risk categories of AMs, with a management recommendation for each risk category [10]. It may be the most complex AM diagnosis system, including six categories (O-RADS 0-5) and at least 21 detailed combined lexicon descriptors for scoring. Meanwhile, it is the most effective US system, as it improved the accuracy of assessments of the malignancy risk of AMs by providing a standardized reporting tool describing masses in terms of echogenicity, size, cystic wall, internal septum, boundary, shape, and blood flow [1,9,11]. The learning curve can reflect the difficulties and important steps of clinical diagnosis and treatment methods, and then strengthen clinicians' cumulative experience. Analyses of the learning curve are widely used in various fields of medical imaging diagnosis [12-15]. The aim of the present study was to conduct an O-RADS US system training for junior doctors, draw the corresponding learning curve, and explore the difficult points of this system to provide a reference for clinical training.
Compliance with Ethical Standards This study was approved by the Human Research Ethics Committee of Second Xiangya Hospital with a waiver of informed consent (No. 2021-038). Patients The diagnostic US images, clinical records, and pathological information of 642 women who underwent adnexal tumor resection at the Second Xiangya Hospital between June 2018 and June 2020 were collected. The inclusion criteria were (1) a clear pathological diagnosis; (2) an intact clinical, ultrasonographic, and surgical record; (3) US images showing enough diagnostic signs without artifacts; and (4) an interval of less than 1 month between ultrasonography and surgery. In total, 100 AMs were randomly selected for five average groups as the training data. The authors Wen and Zhao, two senior doctors with more than 10 years of gynecological US experience, read all the images blinded to pathological information. The intraclass correlation coefficients for inter-rater agreement were 0.95 (95% confidence interval, 0.93 to 0.96) for the O-RADS US score. The two authors determined all the O-RADS US scores together with lexicon descriptors. Fifty-four doctors from 18 hospitals participated in the training in May 2021. Of the 54 doctors, 26 who had finished their secondyear training for residents were included in group 1; 17 who had completed their 1-year attending doctor training in gynecological US were group 2, and 11 experienced attending doctors comprised group 3. The doctors in group 1 can be seen as level I practitioners, those in group 2 as level II practitioners, and those in group 3 as experienced level II practitioners, according to the European standard training requirements for gynecological US practice published by the European Federation of Societies for Ultrasound in Medicine and Biology, including standards for theoretical knowledge and practical skills [16,17]. Two trainers (the authors Wen and Zhao) were equivalent to level III practitioners (experts). All trainees consented to the use of their data for this research. The author Wen conducted the training. First, the definition of all terms was explained in detail, including (1) normal ovaries; (2) simple cysts, unilocular cysts, and multilocular cysts; (3) typical benign lesions; (4) smooth or irregular inner margins or walls; (5) papillary projections, solid components, and solid-appearing masses; (6) ascites and peritoneal nodules; and (7) color scores of 1-4 [9,11]. After receiving the feedback that all doctors understood the above terms, the specific rules of O-RADS US scoring and classification were further explained with corresponding legends. All the legends used in the explanation of theoretical knowledge did not appear in the subsequent assessment. In the subsequent image reading test and training, every trainee read the diagnostic images of each 20 cases independently. All O-RADS US scores with lexicon descriptors were listed on the answer sheet. The trainee only needed to tick the correct answer for each case. After the test, the trainee received feedback from the trainer, had sufficient communication with the trainer, and continued to the next 20 cases. All tests and training were finished within 1 week. All the answers were reviewed. One point was assigned for a correct answer and 0 for a wrong answer. The maximum possible score for each test was 20. Statistical Analysis Statistical analysis was performed using SPSS version 26.0 (IBM Corp., Armonk, NY, USA). 
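The design just described gives each of the 54 trainees five test scores (one per 20-case test) in one of three groups; the repeated-measures comparison of those scores, described next, was run in SPSS. The sketch below, using simulated scores and the statsmodels AnovaRM helper, is only an illustration of the per-group comparison across the five tests, not the authors' code.

```python
# Illustrative sketch: per-group repeated-measures ANOVA across the five 20-case tests.
# Scores are simulated; the study analysed its real scores in SPSS 26.0.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for group, n_trainees in (("level I", 26), ("level II", 17), ("experienced level II", 11)):
    for t in range(n_trainees):
        for test in range(1, 6):
            # each test has 20 cases scored 1 (correct) or 0 (wrong), so the maximum score is 20
            score = rng.binomial(n=20, p=0.65 + 0.03 * test)
            rows.append({"group": group, "trainee": f"{group}-{t}", "test": test, "score": score})
scores = pd.DataFrame(rows)

# "For each group, the differences among the five tests were significant"
for group, sub in scores.groupby("group"):
    fit = AnovaRM(sub, depvar="score", subject="trainee", within=["test"]).fit()
    print(group, float(fit.anova_table["Pr > F"].iloc[0]))
```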
Repeated-measurements analysis of variance was used to test the differences among the five tests and three comparison groups and to produce the learning curve. The Mauchly test was used to evaluate the distribution of variation in terms of sphericity, followed by the between-subjects effects test. The within-subjects effects test was used to analyze the interaction between the two factors, followed by the simple-effect test if the interaction was significant. A P-value <0.05 was considered to indicate statistical significance. Results Of the 100 AMs, the pathological findings showed that 52 were benign and 48 were malignant. Twenty-two had an O-RADS score of 2, 11 had an O-RADS score of 3, 48 had an O-RADS score of 4, and 19 had an O-RADS score of 5. The outcomes of the five tests for groups 1 to 3 are shown in Table 1. The Mauchly test for sphericity yielded a P-value of 0.219 (F=11.921). The between-subjects effects test showed there was no significant difference between groups 1, 2, and 3 at the five test times (F=0.708, P=0.501), as shown on the learning curve (Fig. 1). The subsequent within-subjects effects test demonstrated significant differences among the five tests (F=10.849, P<0.001). No significant interaction was found between the test times and the comparison groups (F=1.944, P=0.059). The simple-effect test showed no significant differences among the three groups for all five tests (P=0.060-0.910) (Table 1). For each test, no significant difference was found in the pairwise comparison between groups. For each group, the differences among the five tests were significant (P=0.001, P=0.006, and P=0.044 for groups 1, 2, and 3, respectively) (Table 2). The outcome of test 2 was the worst and was significantly poorer than test 5 for all groups. More than 40% of the trainees failed to give a correct answer in 23 cases, which were reviewed to find the difficulty of the O-RADS US system (Table 3, Fig. 2). The main difficult points were (1) eight cases of classic benign lesions were wrongly read as unilocular or multilocular cystic masses; (2) there was a failure to differentiate unilocular and multilocular cysts in five cases; (3) solid-appearing masses, which had solid components of more than 80%, were confused with multilocular/unilocular cysts with a solid component in four cases; and (4) the color scores of five multilocular cysts with solid components and one solid mass were incorrect. An incorrect color score was a common mistake in tests 1 and 2. The cases in test 2 had the most difficult points. In test 5, the main points of difficulty were distinguishing classic benign lesions from unilocular/multilocular cysts. Discussion The O-RADS US system is the only lexicon and classification system that encompasses six risk categories (O-RADS 0-5), incorporating the range of normal to high risk of malignancy [1,10]. The system provides the necessary lexicon descriptors for AM malignancy risk stratification. Understanding the lexicon descriptors is the key to reaching an accurate and consistent interpretation for doctors at different levels [9,10]. The application needs to be tested through extensive clinical practice [11]. In this study, for the first time, the authors explored the difficult points for junior doctors in practice by drawing a learning curve. The most common difficulty point in the training was the subjective color flow grading in the system. The color flow score seems to provide a quantitative assessment of the blood flow of AMs, but it is a subjective parameter, with grades of minimal, moderate, and strong flow [9,10]. In addition, "0" is generally used to represent "nothing." To use this color score system, more practice is needed to change the familiar idiom. In this study, this difficult point disappeared after three tests involving practice with 60 cases. In the O-RADS US system, there are many detailed lexicon descriptors of cystic masses and their walls [10]. These terms were not easy to apply in the initial practice. After four tests (practice with 80 cases), wrong answers for distinguishing between these categories of cystic mass became rare. The definition of a solid-appearing mass is that "the lesion should be at least 80% solid when assessed subjectively in perpendicular two-dimensional planes" [9]. This lexicon descriptor was poorly understood and applied in training. Incorrect answers were present in all five tests. In the final test, some trainees still could not correctly distinguish classic benign lesions from other unilocular and multilocular cystic tumors. The lexicon of "classic benign lesions" represented multiple kinds of lesions [10]. Each kind of lesion has varied shapes and echoes, which were difficult to cover by the images in the references. An endometriotic cyst or hydrosalpinx may resemble a unilocular or multilocular cyst with a smooth inner wall. For less experienced doctors, the content of "classic benign lesions" was not as clear as that of other lexicon items. Improving the recognition of classic benign lesions may need a lot of practice over a long time. A significant increase in the learning curve may be far in the future for this task. In summary, if the definition of a lexicon descriptor needs subjective judgment, it was a difficult point for junior doctors. More experience was needed to better the understanding of these lexicon descriptors, such as the color score, solid-appearing masses, and classic benign lesions. There are many limitations in this study that need to be acknowledged. First, the test with 20 cases was not enough to test all the lexicon descriptors in the O-RADS US system. The tests had various difficult points due to the random selection of cases. Second, no remarkable improvement was observed in the learning curve with five tests. More training data or a more effective training modality is needed for future studies. Third, the effect of experience could not be fully evaluated because no senior doctors attended the training. A large study of interobserver variability would be needed to validate the use of the system by experts as well as less experienced observers [11,18-20]. In conclusion, after training, junior doctors at different levels can reach a coincident O-RADS US risk stratification. The difficulties focused on the subjective judgment of classic benign lesions, the color flow score, and solid-appearing masses. More experience is needed to improve doctors' understanding of the system and ability to apply it in real-world circumstances.
Changes in Holstein Heifer Salivary Cortisol Concentrations and Behavior after Regrouping : The objective of this study was to analyze the effect of regrouping on Holstein heifer salivary cortisol concentrations and behavior. Eighteen heifers (192.8 ± 13.6 days of age) were used during this study. Each of these heifers was introduced into a pen of older existing heifers. The heifers were assigned to four groups that corresponded to each of the four regroupings. Saliva samples were collected the day before regrouping (baseline; pre-regrouping), the day of regrouping, and the day after regrouping (post-regrouping). Video cameras continuously recorded from the hour before each regrouping through one day after each regrouping. Salivary cortisol concentrations were higher than the baseline for novel and existing heifers on the day of regrouping and one day post-regrouping ( p = 0.01). More aggressive and agnostic behaviors occurred during Regroupings 1 and 4 than during Regroupings 2 and 3. Novel heifers spent more time standing ( p = 0.05) and drinking ( p = 0.05) than the existing heifers and less time lying ( p = 0.05), but no other differences were observed between the behavior of existing and novel heifers. The salivary cortisol results of this study demonstrate that regrouping is a stressful event for both novel and existing dairy heifers. Introduction As the global population continues to increase, the importance of developing practices that maximize reproductive efficiency and milk production while minimizing the amount of cattle and space needed is crucial [1].Many management systems have been developed, but these will continue to be enhanced to encourage growth and proficiency in the coming years.As a result, much is required from a dairy cow's body every day.For instance, a dairy cow must have the ability to maintain pregnancy and an appropriate body condition score while producing high amounts of milk.In addition, dairy cows experience multiple potential stressors frequently, such as movement in and out of the milking parlor, handling for pregnancy diagnosis and health exams, and movement between pens, possibly introducing social interactions with other cows.Some of these demands indirectly expose dairy cattle to various stressors, such as social stress, which can have negative effects on their fertility as well as their overall health [2].Additionally, the presence of stress, as indicated by endocrine responses, is correlated with a decrease in milk production [3].Stress among animals is often demonstrated through both physiological and behavioral changes and can be broadly defined as an animal's inability to cope with a situation while maintaining its full genetic potential [4].Long-term stress inhibits dairy cattle from reaching their full genetic potential.In turn, this can lead to financial, economic, and production deficits.More importantly, it causes decreased animal welfare and, therefore, should be addressed. 
Dairy cattle are naturally curious and social herd animals.They establish social hierarchies.For example, calves form social bonds that appear to be related to how long a particular group of cattle have been housed together [5].Every time novel cattle are introduced into a previously established group, this drastically changes the herd dynamic and new hierarchies are eventually formed.Dairy cattle may encounter forms of social stress due to social hierarchy patterns when being introduced to new social groupings [6].For example, during heifer regroupings, changes in behavior were observed to be statistically significant among the regrouped heifers [5].The regrouped animals spent more time standing without moving, less time lying, more time sniffing the pen and displayed more fight-oriented and aggressive behaviors than the control heifers [5].Similar results were obtained in another study utilizing lactating dairy cows [7].Furthermore, the study concluded that the amount of social regroupings that the heifers endured did not change their behavior or habituate them to the situation [5].Dominance hierarchies were not established any faster the more regroupings that the heifers endured [5].In an additional study examining heifers, when the heifers were separated from the older, more dominant cows, their feed intake increased, and the amount of time they spent lying also increased, indicating less competition for feed and water [8].Social regrouping among dairy heifers can cause social stress due to aggressive behaviors and dominance patterns that, due to increased competition for resources, can hinder the animal from proper and easy access to feed, water, and lying time. Social stress has also been seen to affect reproductive efficiency in dairy cattle.Data from a behavioral study on herd dynamics found that the time from calving to conception was longer among cows that had a lower or submissive social status [4].Additionally, it generally took more artificial inseminations for submissive cows to become pregnant than for cows who had dominant status [4].Attributing this subfertility to stress can be supported by endocrine responses.While there is still much to be learned about the full function of the endocrine pathway in identifying stress, research clearly demonstrates that cortisol concentrations can be a key indication of stress and affect the animal's behavior [9].Cortisol is a steroid hormone secreted by the adrenal gland in response to stressors.It provides negative feedback to the hypothalamus, inhibiting the proper secretion of Gonadotropin Releasing Hormone (GnRH) and, therefore, the secretion of Luteinizing Hormone (LH) [4].GnRH and LH are vital to a cow's fertility and may have far-reaching consequences, especially regarding development and proper reproductive benchmarks in heifers that may be experiencing chronic stress. 
There are several ways to measure cortisol concentrations in cattle, including plasma cortisol, hair cortisol, and salivary cortisol.Plasma is perhaps the most common method scientists use to determine cortisol concentrations.However, cortisol concentrations in the blood change quickly in response to stressors.Due to the potential stress caused by handling the animal, unless the blood is obtained almost immediately, the cortisol concentration may be misleading [10].Plasma collection is considered an invasive procedure while obtaining a saliva sample is generally believed to be a simple and low-stress collection method.Studies have shown that cortisol concentrations in plasma and saliva are directly correlated [11,12].While one study found there to be no lag time between peak concentrations of plasma and salivary cortisol, another study demonstrated that there is likely a 10 min lag from when plasma cortisol peaks to when salivary cortisol peaks [11,12].During an additional study with dairy cattle, researchers obtained a blood sample followed by a saliva sample that took three minutes to obtain and still observed a positive correlation between the two [10].Thus, while more research needs to be conducted to fine-tune this, it is logical to hypothesize that salivary cortisol concentrations tend to peak within 3-10 min following a stressful event. High cortisol concentrations have been linked to immunosuppression, lack of fertility, and other issues in dairy cattle, making it an accurate partial indication of the overall well-being of the animals [13].Therefore, it is important to continue research to find the most accurate way to measure cortisol to properly assess dairy cattle welfare.Transitional periods, including moving cattle from one pen to another, are often coupled with a change in social grouping.The change in environment, as well as the change in social hierarchy, causes distress in cattle.The objective of this study was to analyze the effect of regrouping on Holstein heifer salivary cortisol concentrations and behavior.The results from this study will be used to develop heifer regrouping management practices that minimize distress during this transition. 
Materials and Methods All methods involving animals were approved by the Washington State University Institutional Animal Care and Use Committee (ASAF# 6218).This study was conducted from October to December 2020.No signs of thermal stress were observed in the heifers.The use of a sample size calculator (https://clincalc.com/stats/samplesize.aspx(accessed on 3 September 2020); 80% power; 0.05 alpha) led to eighteen Holstein heifers (192.8 ± 13.6 days of age at regrouping for novel heifers) being enrolled into this study.All animals were healthy upon enrollment into the study, and no health ailments were detected among the enrolled heifers.All heifers were housed in a building with three concrete walls, a feed bunk area with head gates along the front of the building, and access to an outdoor area (125 m 2 per pen).Each pen was 49 m 2 , with 30.89 m 2 of resting area bedded with wood shavings.All heifers had ad libitum access to water and a total mixed ration.Four novel groups of heifers were introduced into a pen of approximately 15 existing heifers (214.6 ± 16.5 days of age) throughout the course of this study.Two of the novel groups consisted of three heifers, and the other two novel groups consisted of six heifers.A novel group was introduced once every 14 ± 2.16 days, and each novel group of heifers was considered a new regrouping.These regroupings were labeled as Regrouping 1, Regrouping 2, Regrouping 3, and Regrouping 4. Heifers that were novel for Regrouping 1 were considered existing heifers for Regrouping 2, Regrouping 3, and Regrouping 4. Heifers that were novel for Regrouping 2 were considered existing heifers for Regrouping 3 and Regrouping 4. Heifers that were novel for Regrouping 3 were considered existing heifers for Regrouping 4. The day before a novel group was moved into the existing heifer pen was considered to be Day −1, and Day 0 was considered the first day that the novel heifers were introduced into the heifer pen.Heifers were regrouped by 1000 h on Day 0. Saliva samples were obtained for the novel heifers for each regrouping on Day −1, Day 0, and Day 1, and samples were obtained for the existing heifers on Day 0 and Day 1 for each regrouping.All saliva samples were collected using the SalivaBio's Children's Swab (SCS) System for Animals (Salimetrics, State College, PA, USA).Saliva was collected from the novel heifers on Day −1 between 10:30 and 11:30 h.On Day 0, saliva collection began 30-40 min post-regrouping, and on Day 1m saliva was collected between 10:30 and 11:30 h.Careful care was taken to use a minimal amount of restraint (using head gates located at the feed bunk) and to complete the collection process within three minutes per heifer.Saliva was collected two hours after feeding.The swab was placed inside the corner of each heifer's mouth near pooling saliva by the gum for 90 s, which resulted in the collection of 1-2 mL of saliva.After saliva was collected for all of the heifers involved in each regrouping, the tubes were centrifuged within 30-60 min of collection at 1500 RPM for 15 min.After this, the swab was removed from the tube, and the saliva samples were frozen at −20 °C until the day of analysis.The cortisol concentrations were obtained using enzyme-linked immunosorbent assays (Expanded Range High Sensitivity Enzyme Immunoassay Kit, Salimetrics, State College, PA, USA).The ELISA kit's analytical sensitivity was 0.07 ng/mL.The intra-assay CV was 5.8%, and the inter-assay CV was 8.2%. 
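Intra- and inter-assay coefficients of variation like the 5.8% and 8.2% quoted above are derived from replicate ELISA readings. The short sketch below only illustrates the usual calculation; the duplicate readings are hypothetical and are not the study's assay data.

```python
# Illustrative sketch: intra- and inter-assay CV from duplicate ELISA cortisol readings.
# All values are hypothetical and stand in for control samples run on each plate.
import numpy as np

# rows = samples, columns = duplicate wells within one plate (ng/mL)
plate_1 = np.array([[1.10, 1.18], [2.05, 1.92], [0.85, 0.80]])
plate_2 = np.array([[1.22, 1.15], [1.88, 2.10], [0.78, 0.90]])

def intra_assay_cv(plate):
    """Mean CV of duplicate wells within a single plate, as a percentage."""
    cv = plate.std(axis=1, ddof=1) / plate.mean(axis=1)
    return 100.0 * cv.mean()

def inter_assay_cv(*plates):
    """CV of each sample's per-plate mean across plates, averaged, as a percentage."""
    per_plate_means = np.stack([p.mean(axis=1) for p in plates])
    cv = per_plate_means.std(axis=0, ddof=1) / per_plate_means.mean(axis=0)
    return 100.0 * cv.mean()

print(f"intra-assay CV: {intra_assay_cv(plate_1):.1f}%")
print(f"inter-assay CV: {inter_assay_cv(plate_1, plate_2):.1f}%")
```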
Four video cameras (Brinno TLC 200) were positioned within the heifer pen, with one facing the feed bunk, one facing the back wall, and two facing the front of the pen with views of the water trough and each side of the pen to provide a comprehensive view. The cameras were set to record continuously beginning one hour before each regrouping on Day 0 until the end of Day 1. The decision to only record the video footage for 48 h was based on a previous study that concluded that dairy cattle typically return to their baseline behaviors after two days [14]. Behavior observations were recorded from the 48 h of video footage, but only during daylight hours. Individual animals were identified by using weatherproof animal paint on both sides of each heifer. Two trained observers viewed the video footage using a VLC Media Player and documented behaviors. Their inter-observer reliability was 91.3%. To evaluate daily time budgets for the heifers, all-animal instantaneous scan sampling for specific behaviors was recorded at five-minute intervals for novel and existing heifers (Table 1). During the instantaneous scan sampling, the following behaviors were focused on: feeding, lying, standing, sniffing, locomotion, and drinking. All-animal scan sampling was determined to be the most efficient and accurate way to find the average percentage of animals observed performing these behaviors during any given observation period. It was decided that the above behaviors could be classified as a state, defined as a continuous or ongoing behavior where duration should be considered [15]. Scan sampling has been determined to be an effective way to analyze behaviors that are a state [15]. Focal group, all-occurrence sampling was used to record specific social behaviors, with the focal group being the novel heifers for each regrouping (Table 2). The behaviors for the focal animal all-occurrence sampling were recorded continuously for the span of each day. The analyzed behaviors included head-butting actor (heifer performing the behavior) and reactor (heifer responding to the actor's behavior), charging actor and reactor, fighting, displacement actor and reactor, grooming actor and reactor, mounting, standing to be mounted, mounted run, chin rest actor and reactor, and stereotypic behaviors. Focal group all-occurrence sampling was determined as the best approach for the above behaviors because the frequency of each event for each of the novel groups was desired and, thus, the data would be considered nominal, which is appropriate for this sampling method [15]. For scan sampling data, the proportion of existing heifers exhibiting the behavior each hour was calculated for further analysis. The proportion of novel heifers exhibiting the behavior each hour was also calculated. For focal group sampling, the number (frequency) of occurrences of the behavior exhibited each hour was calculated. [Table 2 entry, stereotypic behaviors, SB (P, O, OG): pacing (animal moves back and forth with no end destination and a continuation of motion), oral manipulation (an animal performs oral behaviors that do not serve an essential function), over-grooming (an animal continues to lick itself until it starts to lose hair).] The data were tested for normal distribution using PROC UNIVARIATE in SAS (SAS 9.4, Cary, North Carolina, USA). Salivary cortisol concentrations were statistically analyzed using a mixed model ANOVA in SAS. Repeated measures were used in the model as multiple cortisol samples were obtained from each heifer (n = 18) over the course of the
study.The experimental unit was each heifer.The independent variables were treatment (novel or existing) and day.Behavior data recorded using the all-animal scan sampling were analyzed using PROC GEN MOD in SAS for each behavior, with a Poisson distribution and log link.The dependent variable was the specific behavior being analyzed with the independent variables being treatment and day.The experimental unit was the regrouping number (n = 4).The significance was established to be p ≤ 0.05.The means are reported with the standard error.Behavior data recorded using the focal group and alloccurrence sampling was analyzed using PROC MIXED in SAS.The dependent variable was the specific behavior being analyzed, with the independent variable being day.The experimental unit was the regrouping number (n = 4). Salivary Cortisol Concentrations A significant interaction was detected between the treatment group and day (p = 0.01; Figure 1).On Day −1, which was considered the baseline cortisol concentration for novel heifers, the mean concentration was 1.18 ± 0.28 ng/mL (Figure 1).Salivary cortisol concentrations in novel heifers significantly increased between the baseline (Day −1) and Day 0 (p = 0.01) and Day 1 (p = 0.01).A significant increase in salivary cortisol concentrations occurred between the baseline cortisol concentration (Day −1) of the novel heifers and of the existing heifers on Day 0 (p = 0.001) and Day 1 (p = 0.005). No significant difference was detected between existing and novel heifer cortisol concentrations on the same day.No significant difference occurred between existing heifer and novel heifer cortisol concentrations on Day 0 (p = 0.69), and no significant difference occurred between existing heifer and novel heifer cortisol concentrations on Day 1 (p = 0.76).No significant change was detected within the novel heifer cortisol concentrations on Day 0 and Day 1 (p = 0.65). Heifer Time Budgets Too few observations of sniffing and locomotion behaviors were recorded, so statistical analysis of those behaviors was not possible.For feeding behavior, no significant differences were detected among days (p = 0.81) or between treatments (p = 0.12).On average, 42.4 ± 2.4% of novel heifers and 42.0 ± 2.5% of existing heifers were observed feeding during the same observation period. For lying behavior, a significant difference existed between treatment groups (p = 0.05), and a trend existed among days (p = 0.06; Figure 2).The mean percentage of existing heifers lying was 39.3 ± 2.6%, while the mean percentage of novel heifers lying was 36.5 ± 2.8% (Figure 2).A greater percentage of all (novel and existing) heifers was lying on Day 1 than Day 0, with the means per day as follows: Day 1 was 41.8 ± 2.5%, and Day 0 was 31.6 ± 2.7%.For standing behavior, significant differences were detected between treatment groups (p = 0.05) and among days (p = 0.05; Figure 2).The mean percentage of novel heifers standing was 37.5 ± 3.2%, and the mean percentage of existing heifers standing was 20.1 ± 1.2%.There was a greater percentage of all heifers standing on Day 0 than on Day 1, with the mean percentages per day as follows: Day 0 = 30.5 ± 2.9% and Day 1 = 27.1 ± 2.7%. No significant differences were detected for drinking behavior among days (p = 0.49).A significant difference existed between treatment groups (p = 0.05).The mean percentage of novel heifers drinking was 25.1 ± 3.0%, and the mean percentage of the existing heifers drinking was 10.0 ± 0.5%. 
Focal All Occurrence Sampling Behavior For the number of occurrences of novel animals exhibiting head-butting behavior as an actor, no significance was observed among days (p = 0.70).As for fighting behavior exhibited by novel heifers, no significance was detected among days (p = 0.46).No significant difference was observed among days for novel heifers reacting to another heifer charging (p = 0.59).Too few observations of novel heifers charging at other heifers were recorded; therefore, no statistical analysis could be performed. For chin resting behavior exhibited towards novel heifers, no significant difference was observed among days (p = 0.47).No significant difference was observed among days (p = 0.32) for novel heifers exhibiting chin resting behavior.In regard to novel heifers receiving grooming from another heifer, no significant differences were observed among days (p = 0.47).For the number of occurrences of novel heifers grooming another heifer, no significant differences were observed per day (p = 0.40). For the number of occurrences in which novel heifers displaced another heifer, no significant differences were observed between the days (p = 0.77). No significant difference was observed between days (p = 0.25) for the number of displacements during walking that the novel heifers received.As for the number of displacements that occurred while walking that novel heifers displayed towards other heifers, no significant differences were observed between days (p = 0.60).The number of dis-placements novel heifers received while lying down had no significant differences observed between days (p = 0.81).Too few observations of novel heifers displacing heifers that were lying down, mounting, standing to be mounted, mounted run and stereotypic behaviors were recorded; therefore, statistical analyses of these behaviors were not conducted. Discussion The results of this study show the short-term physiological and behavioral effects of regrouping on dairy heifers.To our knowledge, this is the first study to analyze the effects of heifer regrouping on salivary cortisol concentrations.All of the salivary cortisol concentrations in this study were within the range obtained for dairy cattle under normal conditions in a previous study, but the concentrations in the present study were generally lower than those observed under stressful conditions in both cows and calves [10].In the previous study, the mean baseline for salivary cortisol concentration for the calves was approximately between 0 and 1 ng/mL, and for the cows was approximately between 1 and 4 ng/mL [10].Following ACTH administration, the range for the salivary cortisol concentration of the calves was approximately 2 to 11 ng/mL, and during milking, the range for the salivary cortisol concentrations of the cows was approximately 5 to 15 ng/mL [10].However, the previous study subjected the calves to ACTH administration using a dosage intended to achieve what is known to be a maximum cortisol response similar to what would be expected after a severe stressor [10].When looking at the salivary cortisol concentrations of beef calves in painful and stressful situations, the salivary cortisol concentrations of the current study are of a very similar nature to what that study found when sampling within four hours of a stressful event (1.22-3.66 ± 0.53 ng/mL) [16]. 
The baseline salivary cortisol concentrations for the novel heifers in this study were significantly lower than the salivary cortisol concentrations for both the novel and existing heifers on the day of regrouping and the day after regrouping.This was to be expected as there is a known increase in aggressive behaviors and agnostic interactions on the day of regrouping, leading to social stress [5].We would have also expected there to be a difference between the salivary cortisol concentrations of the novel and existing heifers; however, no significant difference was observed.While no studies have analyzed salivary cortisol during regrouping or during regrouping by treatment groups (i.e., novel versus existing) in dairy cattle, a few studies have analyzed salivary cortisol concentrations during regrouping in swine.The results of one study saw higher cortisol concentrations in regrouped animals than in the control animals that were not regrouped [17].That study did not separate the subjects in the regrouped pens into novel and existing animals for analysis [17].An additional study in swine found a difference in salivary cortisol concentrations between the incoming (novel) sows and the resident (existing) sows two hours after regrouping, where the salivary cortisol concentrations greatly increased for the incoming sows [18].Perhaps, since heifers are young animals (the heifers in this study were between five and six months of age), they are not as accustomed to regrouping and reestablishing social hierarchies with social interactions as older animals.This may explain why the cortisol concentrations of both the existing and the novel heifers increased to similar concentrations on the day of regrouping and the day following regrouping as opposed to the baseline. Regrouping 1 had significantly higher cortisol concentrations than the other three regroupings.In Regroupings 1 and 4, three novel heifers were introduced into a pen of existing heifers and in Regroupings 2 and 3, six novel heifers were introduced into a pen of existing heifers.The largest difference in mean salivary cortisol concentrations across novel and existing heifers was between Regrouping 1 and Regrouping 4. No significant differences were detected for mean salivary cortisol concentrations between Regrouping 2, Regrouping 3, and Regrouping 4. This indicates that moving a larger group of novel heifers together into a pen may also reduce stress during regrouping, as six heifers were moved during Regroupings 2 and 3.It is possible that with the movement of six heifers during these regroupings, there was generally less physiological stress than during regroupings, where only three heifers were moved.It should be noted that this study was not originally designed or intended to test the effect of group size on stress parameters in heifers.The goal was to conduct a field study that did not interfere with the management requirements of the cooperating dairy.Due to the spatial and operational needs of the dairy, three heifers needed to be moved during Regroupings 1 and 4, and six heifers needed to be moved during Regroupings 2 and 3. 
However, the salivary cortisol concentration results are supported by the behavioral results of this study.Generally, more aggressive behaviors, chin rests, and displacements were exhibited by the existing heifers towards the novel heifers during the regroupings, with three novel heifers being introduced as opposed to six novel heifers being introduced.This presents an area where future research should be focused, as it appears that moving a larger group of novel animals together improves welfare and reduces physiological and psychological social stress by minimizing aggressive interactions.One study focused on analyzing the effects of moving three cows at a time into a new pen [14].They witnessed less aggressive and agnostic interactions than in a previous study, which only introduced one focal cow to a new pen at a time [7,14].The author of the current study would suggest that a future study be designed to evaluate the frequency of agnostic and aggressive interactions when moving one, three, and six novel focal dairy heifers into pens of existing dairy heifers. For the majority of the above-listed behaviors, no significant differences occurred between the two regroupings with three heifers and between the two regroupings with six heifers.The increase in displacements from the feed bunk on the day of regrouping is consistent with the findings from another study of regroupings in dairy cows [7].Fewer displacements at the feed bunk occurred during Regroupings 2 and 3 than during Regroupings 1 and 4. Although unexpected when taking into account the high prevalence of displacements at the feed bunk during Regroupings 1 and 4, no significant differences were observed between regroupings or between the percentage of novel and existing heifers feeding during any given observation period.These findings are relatively consistent with other studies of a similar nature.In the past, a decrease was found in feeding bouts and feeding time one hour after feeding on the day of regrouping, but no overall changes in feeding occurred when not taking specific times of day into account [7].An additional study found no difference in feeding time between novel and existing cows but found that feed intake or the feeding rate decreased among the novel animals, which they suggested may be due to an increase in vigilance behavior [14].The latter study was unable to replicate the decrease in feeding bouts and feeding time following the hour of regrouping in the previous study as their test subjects were fed at inconsistent times throughout the study [14].During the current study, the novel animals were introduced approximately an hour after morning feeding time and no statistical significance was observed for the percentage of animals feeding based on the time of day. 
No differences were detected among regroupings for feeding, lying, standing, and drinking behaviors.Throughout all the regroupings, typically, more novel heifers were standing than existing heifers, and more novel heifers were drinking than existing heifers.During any given observation period, more existing heifers were lying than novel heifers.While this was considered statistically significant, the mean and standard errors of the mean for these percentages do not appear to have a significant difference.More animals across both treatment groups and all regroupings spent less time lying on the day of regrouping and more time lying on the day after regrouping, although this was not considered significant.Additionally, the heifers spent more time standing on the day of regrouping and less time standing on the day after regrouping.This is consistent with the findings of other studies analyzing regroupings, which also found lying time to decrease and standing time to increase on the day of regrouping [5,7]. The results of the current study as they pertain to lying and standing on the day of regrouping and the day post-regrouping may be indicative of more activity, movement, and general unrest in the pen on the day of regrouping.Data from an unpublished study at the WSU Knott Dairy Center found that novel heifers spent more time drinking during regrouping than existing heifers, which is consistent with the findings of the current study [19].The same unpublished study found no significant differences between novel and existing heifers for lying and standing behaviors, which differs from the results of the current study.A significant difference between the percentage of novel and existing heifers drinking was observed, and more research must be carried out to analyze the effect regrouping had on this behavior.To our knowledge, no published studies have analyzed heifer drinking behavior during regrouping, and the majority of the studies analyzing regrouping in dairy cattle did not assign the animals to novel and existing treatment groups.Research on heifer behavior in general is also limited.There are several studies that have analyzed the effects of regrouping in cows, but cows are older animals with generally more mature and established social interactions [7,14].In retrospect, allowing for multiple regroupings to increase our sample size may have led to more significant and conclusive behavioral results. Conclusions The baseline salivary cortisol concentrations for the novel heifers were significantly lower than the concentrations for both novel and existing heifers on the day of regrouping and the day post-regrouping.We can conclude that regrouping does cause physiological signs of distress in dairy heifers.We can also conclude that there are differences in behavior between the novel animals and the existing animals and that there is a large presence of aggressive and displacement behaviors during regrouping. Figure 1 . Figure 1.Salivary cortisol concentrations by day and treatment (mean ± standard error ng/mL) for individual heifers (n = 18).Gray bars represent existing heifers, and black bars represent novel heifers. Figure 2 . Figure 2. The mean percentage (mean ± standard error) of heifers performing the specified behavior during observation periods by treatment group (novel represented in blue bars and existing represented in orange bars), the regrouping number is the experimental unit (n = 4). Table 1 . 
Ethogram of behaviors used during analysis of video footage for all-animal instantaneous scan sampling at 5-minute intervals. Table 2. Ethogram of behaviors used during analysis of video footage for focal group all-occurrence sampling.
Antidiarrheal Effect of Zornia brasiliensis Vogel (Leguminosae) on Mice Involves Adrenergic Pathway Activation Several secondary metabolites have been isolated from Zornia brasiliensis (Leguminosae), mainly flavonoids. These compounds are known for many pharmacological actions, such as antispasmodic and antidiarrheal. Therefore, we evaluated the antidiarrheal effect of the ethanolic extract obtained from Zornia brasiliensis aerial parts (ZB-EtOHAP), as well as its underlying mechanisms. Castor-oil-induced diarrhea, fluid accumulation, and intestinal transit (normal and castor oil induced) assays were performed to assess the antidiarrheal, antisecretory, and antipropulsive activities of the extract. The involvement of opioid and adrenergic pathways was also investigated. ZB-EtOHAP inhibited, in a dose-dependent manner, both total defecation frequency and the number of watery stools. The extract showed no effect on fluid accumulation or normal intestinal transit. On the other hand, when the animals were pretreated with castor oil, the extract decreased the distance traveled by the marker in the small intestine. Investigation of the involvement of opioid and adrenergic systems showed that the pharmacological potency of the extract did not change in the presence of naloxone, but it was reduced in the presence of yohimbine. The data indicate that Zornia brasiliensis has an antidiarrheal effect due to inhibition of the intestinal motility through adrenergic pathway activation. Introduction Diarrhea is an issue not only in the developing world but also in the western world. Diarrhea is considered the most common worldwide cause of death of children under 5 years of age [1]. In this context, the World Health Organization (WHO) has encouraged studies for the treatment and prevention of diarrheal diseases based on traditional medicinal practices and use of natural resources [2]. Zornia brasiliensis Vogel is a herbaceous plant popularly known as urinana, urinária, or carrapicho. It has been used in folk medicine as a diuretic and for venereal disease treatment [3]. According to phytochemical analysis of the ethanolic extract obtained from the aerial parts of Z. brasiliensis (henceforth ZB-EtOHAP), several secondary metabolites have been isolated from this species, including saponins (soyasaponins IV and A3) and terpenes (dihydromelilotoside and roseoside), among others. However, extracts obtained from Z. brasiliensis are mainly known for containing considerable amounts of flavonoids, such as 5-hydroxy-7-methoxyflavone, 7,4-dimethoxy-isoflavone, 7-methoxyflavone, and 5,7-dimethoxyflavone [4-6]. Plants with high amounts of flavonoids have been used to treat a wide variety of diseases, including diarrhea [7]. For instance, a mixture of flavonoids obtained from Malus pumila leaves exhibited an antidiarrheal effect [8]. Similarly, isolated flavonoids, such as quercetin and rutin, have been shown to attenuate diarrhea symptoms by inhibiting intestinal muscle contractility, enhancing intestinal motility, and reducing fluid intraluminal accumulation in the gut lumen, as evidenced in different experimental studies [9]. Therefore, considering that ZB-EtOHAP presents a phytochemical characterization that shows its high concentration of flavonoids and that these compounds may have antidiarrheal activity, ZB-EtOHAP was chosen for quantification of its main components, evaluation of its antidiarrheal effect, and investigation of its underlying mechanisms.
As the main hypothesis of the work, we propose that the extract will present an antidiarrheal effect by inhibiting the production of intestinal fluids and decreasing peristalsis. Aerial parts of Z. brasiliensis were dried and crushed in a knife mill. The powder (5 kg) was subjected to extraction with ethanol (EtOH) 95%, by maceration, in a proportion of 1:3 [w (kg)/v (L)] at room temperature. The extractive solution was collected every 72 hours; this collection and the replacement of the solvent were repeated four times on the same powder. Then, the extractive solution was concentrated under reduced pressure in a rotary evaporator [5]. The flavonoids 7-methoxyflavone and 5,7-dimethoxyflavone were used as chemical markers for analysis and quantification, since they had been isolated from ZB-EtOHAP [5,6]. Analysis by High-Performance Liquid Chromatography. A Shimadzu Prominence chromatograph equipped with an LC-20AT solvent pump, a SIL-20A autoinjector, a DGU-20A degassing system, an SPD-M20A diode array detector, a CTO-20A column oven, and a CBM-20A controller system was used for the chemical analysis. A Kromasil® C4 column (250 mm × 4.6 mm ID, 3.5 μm) and a Kromasil® C4 precolumn (4.6 mm ID × 3.0 mm, 3.5 μm) were used. Analyses of the data obtained by HPLC-DAD were performed using the Lab Solutions® software (Shimadzu). The mobile phase was composed of water (0.1% formic acid) and acetonitrile (1:1, v/v) in isocratic mode for 23 min, at a flow rate of 0.45 mL/min and a temperature of 40 °C. The injection volume was 10 μL, and detection was performed at 254 nm. Samples were filtered on 0.45 μm nylon membranes (Tedia). Five solutions with different masses were prepared with 7-methoxyflavone (2.7-3.9 μg) and 5,7-dimethoxyflavone (1.2-2.0 μg) and injected in triplicate to obtain the respective linear regression equations, as well as the determination coefficients. Both markers were quantified in 20.0 μg of the crude extract. The limits of detection (LOD) and quantification (LOQ) were determined as recommended by the Resolução da Diretoria Colegiada (RDC) no. 166, June 24, 2017 [10]. Animals. A total of 264 two-month-old male and female Swiss mice (Mus musculus) weighing 25-35 g, obtained from the bioterium Professor Thomas George of the Instituto de Pesquisa em Fármacos e Medicamentos (IPeFarM)/UFPB, were used. Prior to the experimental protocols, the animals were kept under a balanced dietary control (Labina®), with free access to water, in a temperature-controlled room (21 ± 1 °C), and were exposed daily to a 12-hour light/dark cycle (light period from 6 a.m. to 6 p.m.). On the day of the experiments, the animals were transferred to the experimental facility and acclimated for 30 min before the experimental protocols were initiated. After acclimatization, the animals were randomly assigned to the different groups. All experimental procedures were conducted following the principles of animal care in the Guidelines for the ethical use of animals in applied ethology studies [11] and approved by the Ethics Committee on Animal Use/CEUA (Certificate Nos. 0605/12 and 3206/13) of the Universidade Federal da Paraíba. ZB-EtOHAP was solubilized in Cremophor EL® (3%), dissolved in distilled water to a concentration of 50 mg/mL, and rediluted in distilled water as required for each experimental protocol. 7-Methoxyflavone (94%) and 5,7-dimethoxyflavone (94%) were isolated from the aerial parts of Z. brasiliensis. The purities of these substances were determined by mass spectrometry. ZB-EtOHAP Dose Scheme.
The stock solution (50 mg/mL) was diluted in distilled water to lower concentrations as needed. As an initial screening of the extract effect, the mice were treated with doses of 31.2, 62.5, and 125 mg/kg, with the exception of the normal intestinal transit protocol, in which the animals did not receive the diarrhea-inducing agent (see below). Depending on the effect observed in each experimental protocol, the doses could be increased by multiples of two, up to 500 mg/kg, or decreased by half until no further effects of the extract were observed. Effect of ZB-EtOHAP on Castor-Oil-Induced Diarrhea. Male and female mice were divided into three groups (n = 6) that received 0.9% saline plus Cremophor® (10 mL/kg, p.o., negative control), loperamide (10 mg/kg, p.o., positive control), or ZB-EtOHAP (different doses, p.o.). Thirty minutes after treatment, castor oil was administered (10 mL/kg, p.o.). The animals were separated and placed into individual cages lined with white paper. Then, the animals were inspected for 4 h for the number of stools and their consistency, which were classified as solid or liquid. The total number of stools and the number of liquid episodes were determined [12]. Their small intestine was carefully removed, preventing any content leakage, and immediately weighed [13]. The results corresponding to the measurement of intestinal fluid accumulation were expressed as (Pi/Pm) × 1000, where Pi is the weight of the intestine and Pm is the weight of the animal in grams [13]. Effect of ZB-EtOHAP on Normal Intestinal Transit. Male and female mice were divided into three groups (n = 6) and, after a 12-hour fasting period, were treated with 0.9% saline plus Cremophor® (10 mL/kg, p.o., negative control group), atropine (2 mg/kg, p.o., positive control group), or ZB-EtOHAP (different doses, p.o.). Thirty minutes later, 5% activated charcoal (10 mL/kg) solubilized in carboxymethylcellulose (0.5%) was administered, and 30 min after this administration, the animals were euthanized by cervical dislocation. The abdominal cavity was opened, and the small intestine was removed. In order to measure the intestinal transit, the total length of the small intestine (distance from the pylorus to the ileocecal valve) and the distance traveled by the marker were measured and expressed as a percentage [14] as follows: IT = DT/TL × 100, where IT is the intestinal transit, DT corresponds to the distance traveled by the charcoal, and TL stands for the total length [15]. Effect of ZB-EtOHAP on Castor-Oil-Induced Intestinal Transit. The same procedures described in "Effect of ZB-EtOHAP on normal intestinal transit" were applied in this phase, with the exception that castor oil (10 mL/kg, p.o.) was administered 30 min before the activated charcoal. Investigation of the ZB-EtOHAP Antidiarrheal Action Mechanism. To evaluate opioid system involvement, animals were divided into four groups: one group was treated orally with 0.9% saline solution plus Cremophor® (10 mL/kg, negative control), two other groups were treated subcutaneously (s.c.) with morphine (10 mg/kg, positive control), an opioid agonist, and the fourth group received ZB-EtOHAP (different doses, p.o.). One of the groups that received morphine and the group that received ZB-EtOHAP were pretreated with naloxone (2 mg/kg, s.c.), an opioid antagonist, 30 min before the administration of morphine or extract.
The same method was used to assess the adrenergic pathway, but the animals were treated with the α2-agonist clonidine (0.1 mg/kg, p.o., positive control) instead of morphine and with the antagonist yohimbine (1 mg/kg, i.p.) instead of naloxone [16]. Thirty minutes after the treatments described above, the animals were treated with castor oil (10 mL/kg, p.o.). The protocol was then followed as described in "Effect of ZB-EtOHAP on normal intestinal transit". Statistical Analysis. All results were expressed as the mean ± standard error of the mean (S.E.M.) and were statistically analyzed using Student's t-test or one-way analysis of variance (ANOVA) followed by Bonferroni's post-test, as needed. Differences between means were considered significant when p < 0.05. All data were analyzed using the GraphPad Prism® software, version 7 (GraphPad Software Inc., San Diego, CA, USA). The inhibitory effect of each extract dose was calculated as the percentage difference between the mean ± S.E.M. value of each group and the mean ± S.E.M. value of the negative control group. The maximum inhibitory effect of the extract (Emax) was used as an efficacy parameter. The dose of a drug or extract that produces 50% of its own maximal response (ED50) was calculated by nonlinear regression analyses and was used as a potency parameter. Chemical Analysis. After the extraction process, the crude ethanolic extract yielded 11% of the weight of the initial plant material. The compounds quantified in ZB-EtOHAP were 7-methoxyflavone (14.65%) and 5,7-dimethoxyflavone (7.44%). Both flavonoids were quantified by high-performance liquid chromatography coupled to a diode array detector (HPLC-DAD). These two phenolic compounds are the most abundant in the crude ethanolic extract, representing 22.09% of all secondary metabolites. Limits of detection (LOD) were 0.19 and 0.33 μg, and limits of quantification (LOQ) were 0.58 and 1.00 μg, for 7-methoxyflavone and 5,7-dimethoxyflavone, respectively (Figure 1). Effect of ZB-EtOHAP on Castor-Oil-Induced Intestinal Fluid Accumulation. ZB-EtOHAP (31.2, 62.5 and 125 mg/kg) had no significant effect on intestinal secretion, unlike loperamide, which inhibited fluid secretion (Emax = 39.0 ± 11.0%) (Figure 3). […] charcoal in the mice intestine when compared with the negative control group (Figure 4). Evaluation of Opioid System Involvement. An inhibitory effect of 65.9 ± 3.0% on castor-oil-induced transit was observed in the animals treated with morphine. However, when naloxone was administered, this inhibitory effect was thoroughly reversed (Figure 6). Although the maximum effect obtained with a dose of 250 mg/kg was slightly reduced to 84.5 ± 2.2%, the inhibitory potency of ZB-EtOHAP on castor-oil-induced transit (ED50 = 20.0 ± 4.6 mg/kg, Figure 5) did not change in the presence of naloxone (ED50 = 10.9 ± 2.6 mg/kg, R2 = 0.961 ± 0.007, Figure 6). Evaluation of Adrenergic System Involvement. Clonidine reduced the distance traveled by the marker by 66.6 ± 4.6%. As expected, this effect was diminished when the mice were treated with yohimbine (Figure 7). The effect of ZB-EtOHAP was also significantly attenuated in the presence of yohimbine (Emax = 57.8 ± 2.1%), and its potency was reduced (ED50 = 67.9 ± 14.2 mg/kg, R2 = 0.959 ± 0.018) when compared with the extract potency in the absence of yohimbine (ED50 = 20.0 ± 4.6 mg/kg, Figure 5).
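The ED50 and Emax values reported above were obtained by nonlinear regression in GraphPad Prism; the exact dose-response model is not stated in the text. As a rough illustration of how such a potency parameter can be estimated, the following Python sketch fits an Emax (Hill-type) model to hypothetical dose/inhibition data. The model choice and all numbers are assumptions for illustration, not the authors' data or software.

```python
# Hedged sketch: estimating Emax and ED50 by nonlinear regression.
# The Emax/Hill model and all numbers below are illustrative assumptions,
# not the data or the exact model used in the study.
import numpy as np
from scipy.optimize import curve_fit

def emax_model(dose, emax, ed50, n):
    """Hill-type dose-response: percent inhibition as a function of dose."""
    return emax * dose**n / (ed50**n + dose**n)

# Hypothetical doses (mg/kg) and percent inhibition of castor-oil-induced transit.
doses = np.array([15.6, 31.2, 62.5, 125.0, 250.0])
inhibition = np.array([40.0, 60.0, 75.0, 85.0, 88.0])

# Initial guesses: emax ~ max observed effect, ed50 ~ a mid-range dose, n ~ 1.
popt, pcov = curve_fit(emax_model, doses, inhibition, p0=[90.0, 30.0, 1.0])
emax, ed50, n = popt
perr = np.sqrt(np.diag(pcov))  # standard errors of the fitted parameters

print(f"Emax = {emax:.1f} +/- {perr[0]:.1f} %")
print(f"ED50 = {ed50:.1f} +/- {perr[1]:.1f} mg/kg")
```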
Discussion Based on the chemotaxonomic criterion, this study aimed at assessing the antidiarrheal effect of Zornia brasiliensis on mice and its possible underlying mechanism, by evaluating its effect on intestinal secretion and motility. ZB-EtOHAP showed an antidiarrheal effect due to activation of the adrenergic system, which leads to inhibition of gut motility. As expected, the quantification of secondary metabolites showed that ZB-EtOHAP is composed mainly of flavonoids, with 7-methoxyflavone and 5,7-dimethoxyflavone present in the highest concentrations. Together, these compounds represent more than one-fifth of the metabolites contained in the extract. It has been reported that extracts with high concentrations of these compounds present vasorelaxant and antispasmodic effects [17]. In addition, natural products known to contain these substances have been used to treat various gastrointestinal disorders [18]. Castor oil is widely used for the screening of drugs with possible antidiarrheal properties [19]. This pharmacological tool causes diarrhea through an increase in intestinal fluid content, via ricinoleic acid formation, which affects electrolyte and water transport [20]. Castor oil causes a reduction in the intestinal absorption of Na+ and K+ and decreases Na+/K+-ATPase activity in the small intestine and colon through its active metabolite, resulting in changes in electrolyte permeability and leading to diarrhea [21,22]. When tested against castor-oil-induced diarrhea in mice, ZB-EtOHAP produced a remarkable antidiarrheal effect, evidenced by both a reduction in stool frequency and a decrease in the number of liquid stools, just like loperamide, one of the standard antidiarrheal agents [23]. In addition, we deem the extract to have a particularly interesting antidiarrheal action profile, since it still presented an antidiarrheal effect at doses as low as 31.2 mg/kg and had a low ED50 value, especially considering that it is a crude extract rather than an isolated substance, which highlights the potential of this species for possible therapeutic use. Once the antidiarrheal effect of ZB-EtOHAP was confirmed, it became necessary to evaluate which mechanism was involved in this action, as this effect can occur by reduction of intestinal motility or by inhibition of intestinal secretion [24]. The intestinal epithelium absorbs and secretes large volumes of fluid. Intestinal fluid secretion involves ionic transport from the blood into the intestinal lumen [25]. For example, Cl− is transported through channels on the enterocyte apical membrane, which include the cAMP-gated channel CFTR (cystic fibrosis transmembrane conductance regulator) [26]. Drugs that are able to inhibit those channels may be effective against secretory diarrheas [27]. To this end, the effect of ZB-EtOHAP on castor-oil-induced intestinal fluid accumulation was investigated. However, the extract did not decrease intestinal secretion, partially rejecting our initial hypothesis. On the other hand, the standard drug loperamide significantly decreased the intestinal content, given that it acts on μ-opioid receptors and has antisecretory actions. Therefore, we conclude that the antidiarrheal activity of ZB-EtOHAP does not depend on the inhibition of intestinal secretion. It was still necessary to evaluate the effect of ZB-EtOHAP on intestinal tone; thus, its effect on intestinal motility was investigated.
Activated charcoal (marker) has been used since the 1950s as a tool to evaluate laxative effects [28]. This method is an indicator of the maximum distance traveled by the marker and consists of marker administration and assessment of its course in the small intestine over a period of time [16]. As a result, we observed that the extract decreased the distance traveled by the marker in the presence of castor oil, which was not observed in the evaluation of the normal transit model. These results suggest that the antidiarrheal effect of the extract occurs due to the decrease of intestinal motility during diarrhea, confirming our initial hypothesis. However, in a healthy intestine, the extract did not have a constipating effect, even when higher doses of the extract were administered. In contrast, atropine, the standard drug, inhibited both normal and castor-oil-induced transit. Atropine is a muscarinic antagonist that competes for the same binding site as the acetylcholine neurotransmitter. Its actions include reduction of the tone, amplitude, and frequency of intestinal peristaltic contractions [29]. Control of contractile activity is complex, with contributions from the intestinal layers themselves, as well as local nerves of the enteric nervous system, the autonomic nervous system, and circulating hormones. Gastrointestinal motor tone is regulated by multiple physiological mediators, such as acetylcholine, histamine, substance P, cholecystokinins, prostaglandins, and 5-hydroxytryptamine [30][31][32][33]. The opioid pathway is important in the modulation of intestinal activity. There are three different types of opioid receptors, δ, κ, and μ, and all of them belong to the Gi/o protein-coupled receptor family. Activation of these receptors by agonists, such as morphine, leads to the inhibition of adenylyl cyclase, which reduces neuronal excitability and the release of neurotransmitters, such as acetylcholine, resulting in a decrease of intestinal motility [34]. In the investigation of the involvement of the opioid pathway, although the maximum effect of ZB-EtOHAP was minimally reduced, the inhibitory potency of the extract on castor-oil-induced transit did not change in the presence of naloxone when compared with its effect in the absence of naloxone. This means that the pharmacological potency of the extract was not altered in the presence of the opioid antagonist. Therefore, we can conclude that the modulation of opioid receptors is not involved in the action mechanism of ZB-EtOHAP, unlike a considerable number of other drugs that are currently used for the management of diarrhea [35]. This might attribute an interesting characteristic to the extract, considering that its metabolites act through different pathways, possibly representing another alternative in the therapeutic arsenal for the treatment, for example, of patients who do not respond to loperamide. Another important pathway in the regulation of intestinal activity is the adrenergic system, which modulates the release of stimulatory neurotransmitters.
Alpha-2 (α2) adrenergic receptors are extensively distributed in the gastrointestinal tract, playing a crucial role in the release of neurotransmitters such as acetylcholine, which regulates intestinal smooth muscle tone [36,37]. Clonidine, an α2-adrenergic agonist, may be useful for the treatment of diarrhea in diabetic patients and in patients with diarrhea due to opioid withdrawal [38]. It stimulates absorption, inhibits secretion, and delays intestinal transit [39,40]. In the investigation of the adrenergic pathway involvement, we observed that both the efficacy and the potency of ZB-EtOHAP were reduced in the presence of yohimbine, an adrenergic antagonist. Thus, we concluded that ZB-EtOHAP contains metabolites that positively modulate the adrenergic pathway, resulting in a delay of intestinal transit and an antidiarrheal effect. Conclusions This study showed that the ethanolic extract of Zornia brasiliensis contains high concentrations of flavonoids. In addition, the extract presents an antidiarrheal effect due to the inhibition of intestinal motility, mediated by the activation of the adrenergic pathway. Further studies are necessary to fully clarify the antidiarrheal mechanisms of the extract, since other pharmacological targets, such as calcium channels, might be involved in its antipropulsive effect. Data Availability All the results used in this work to support the conclusions of this study are included in the article. Conflicts of Interest The authors declare no conflicts of interest.
2021-03-16T05:32:06.080Z
2021-02-28T00:00:00.000
{ "year": 2021, "sha1": "5f627ccd52789ad7523f0e41d37a5fdabe831769", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ecam/2021/1385606.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5f627ccd52789ad7523f0e41d37a5fdabe831769", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
235285981
pes2o/s2orc
v3-fos-license
The application of medoid-based cluster validation in desirable dietary pattern data A desirable dietary pattern (DDP) index is an index to measure the balance and variance of the nutrition intake of an individual. This index is composed of the calory values of protein, fat, and carbohydrates. Grouping individuals based on the DDP index is required to measure and improve an individual food security state. We took 14 individual purposively as samples to fill a set of DDP questioner. They were asked about their daily food consumption. They were grouped based on the DDP variables. A 3-dimensional plot showed that there were three to four clusters. Then, medoid-based partitioning algorithms, namely partitioning around medoids (PAM) and simple k-medoids (SKM), were applied in the data set. The inputted distances were generalized distance function to vary the distance options. The cluster results were then validated by medoid-based shadow value validation. This index was comparable to the 3-dimensional plot such that four clusters were opted as the most suitable number of clusters. The barplot of the cluster results showed that cluster 1 was characterized by an abundance of fat, while cluster 2 had very sufficient carbohydrates. Cluster 3 and 4 were two clusters with opposite characteristics where the former had a shortage of protein, fat, and carbohydrates, while the latter had an abundance of them. Introduction An approach to measure an individual food variety consumption applies desirable dietary pattern (DDP) index. The maximum value of DDP index is 100 meaning that the individual food consumption is greatly vary and balance w.r.t nutrition compositions. It is calculated via a total energy based on the calorie of protein, fat, and carbohydrates intake [1]. Instead of a single index of DDP, a 3-dimensional plot of the calories is also possible to describe the grouping of individual based on the dietary patterns. The grouping of individual is essential to measure and improve the individual food security state. It is similar to a cluster analysis or an unsupervised classification where homogeneous individuals are assigned into a group and heterogeneous ones are separated [2,3]. In a distancebased clustering, a similarity measure of individuals can be based on their distance, the more similar individuals have the closer distance while the less similar ones have the further distance. In a partitioning algorithm, the utmost importance is the distance choice because it affects the clustering result [4]. Besides the distance choice, a validation step is also important because the partitioning result excludes pre-determined class membership, i.e. unsupervised method. The relative, external, and internal validation methods are applicable for the clustering result to discover the best of the data structure [5,6]. A popular internal validation index is a silhouette index [7], which becomes the general index because it offers a visual plot. The index is vis-a-vis to a centroid-shadow value [8], which offers more variety of visualization to explore the data. A medoid-shadow value, moreover, is applicable for any type of data, i.e. numerical, categorical, and mixed variable data set [9]. Thus, this article applies the medoid-shadow value in the clustering validation of the DDP data. Method The DDP data set was collected from 14 individuals purposively. They administered a set of questions of their daily consumption. 
The questions consisted of two parts: the first part comprised multiple-choice questions and the second part open-ended questions. The former was about general dietary pattern awareness, while the latter asked for the respondent's daily dietary list. The second part of the questions, i.e. the open-ended questions, was analysed with the ddp R package [10]. However, adjustments were required to comply with the ddp package. Each individual's food consumption had to be categorized among 217 kinds of food. The dimension of the data then expands to n × 217, where n is the number of individuals, and this becomes the input data for the ddp package. The ddp package produced both the DDP index and the calorie consumption of each individual. Although the single DDP index can be obtained easily with this package, it depicts the individual's position among the others with respect to dietary pattern less well. Analysing the original variables composing the DDP index, i.e. protein, fat, and carbohydrates, on the other hand, can illustrate the data structure of the individuals better. These variables were also easily produced by the ddp package. The individuals were then grouped, based on protein, fat, and carbohydrates, via medoid-based clustering algorithms. We applied the partitioning around medoids (PAM) algorithm [11], the most common of the medoid-based algorithms. As a comparison, we also included the simple k-medoids (SKM) algorithm [12]. The detailed steps for analysing the ddp data are: (i) Plot the original data set and the principal components (pc). (ii) Set the distance function. To increase the distance choices among the numerical distance options, we applied the generalized distance function (GDF) [12], in which the distance between two individuals is defined as a weighted combination of numerical, binary, and categorical contributions (Equation (1)), where α, β, γ, and ω are the weights for the numerical, binary, and categorical variables and for the whole distance, respectively, while pn, pb, and pc are the numbers of numerical, binary, and categorical variables. δn, δb, and δc are the numerical, binary, and categorical distances, respectively, and δn(xir, xjr) is the numerical distance between individual i and individual j in variable r. Because the ddp data consist of numerical variables only, the binary and categorical distances cancel out, so that Equation (1) reduces to a purely numerical distance (Equation (2)). Thus, the distances can be Manhattan weighted by range (mrw) [13], squared Euclidean weighted by variance (sev) [14], squared Euclidean weighted by range (ser), squared Euclidean weighted by squared range (ser.2) [15], and squared Euclidean (se) [16]. We add two other common distances for numerical variables, namely the (original) Euclidean and Manhattan distances. (iii) Apply the medoid-based clustering algorithms (PAM and SKM) with the chosen distances. (iv) Validate the result via the medoid-shadow value. The medoid-shadow value is obtained from the distances between an individual x and its first and second closest medoids, denoted δ(x, m(x)) and δ(x, m′(x)), respectively. The analysis of the ddp data in this article was run in R software [17] installed on an Intel i3 machine with 4 GB RAM. The supporting packages are the ddp [10], kmed [18], gg3D [19], cluster [20], and geomnet [21] packages. Result and Discussion The ddp index of the 14 individuals has an average score of 68 out of 100 (Table 1). This indicates that the desirable dietary pattern of the individuals is unbalanced and limited. A further exploration is necessary to analyse each dimension that contributes to the ddp index and the structure of the data. Thus, clustering of the individuals is indispensable.
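Two of the weighted numerical distances listed in step (ii) of the Method above can be made concrete with a short sketch. The formulas below for Manhattan weighted by range (mrw) and squared Euclidean weighted by variance (sev) follow the usual definitions of these distances; the toy data are illustrative, and this is a plain re-implementation rather than a call to the kmed R package used by the authors.

```python
# Sketch of two weighted numerical distances, assuming their usual definitions:
# Manhattan weighted by range (mrw) and squared Euclidean weighted by variance (sev).
# Toy data only, not the study data.
import numpy as np

def manhattan_weighted_by_range(X):
    """Pairwise mrw distance: sum over variables of |x_ir - x_jr| / range_r."""
    rng = X.max(axis=0) - X.min(axis=0)
    diff = np.abs(X[:, None, :] - X[None, :, :]) / rng
    return diff.sum(axis=2)

def squared_euclidean_weighted_by_variance(X):
    """Pairwise sev distance: sum over variables of (x_ir - x_jr)^2 / var_r."""
    var = X.var(axis=0, ddof=1)
    diff = (X[:, None, :] - X[None, :, :]) ** 2 / var
    return diff.sum(axis=2)

# Toy protein/fat/carbohydrate calorie table (rows = individuals).
X = np.array([[55.0, 40.0, 300.0],
              [60.0, 80.0, 320.0],
              [30.0, 25.0, 150.0],
              [90.0, 95.0, 450.0]])

print(manhattan_weighted_by_range(X).round(2))
print(squared_euclidean_weighted_by_variance(X).round(2))
```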
Plot of the ddp data The ddp index is composed of three dimensions, namely protein, fat, and carbohydrates. The plot of these three variables shows that there are 3 to 4 clusters (Figure 1), with 2 clusters consisting of only one individual each. The principal components plot (Figure 2), which explains 95% of the variance of the data, similarly results in 3 to 4 clusters as well. Medoid-based clustering algorithm and validation The first problem in the partitioning algorithms is the distance choice, because a different distance option can result in a different result [4]. We vary the distance over seven types, namely Euclidean (eu), Manhattan (manh), Manhattan weighted by range (mrw), squared Euclidean weighted by variance (sev), squared Euclidean weighted by range (ser), squared Euclidean weighted by squared range (ser.2), and squared Euclidean (se). The data are also left unstandardised for interpretation purposes. The PAM algorithm is applied with the seven distances. Table 2 shows the medoid-based shadow value of the PAM results for the number of clusters (k) from 2 to 5, where a higher value indicates better separation among clusters. It suggests that the best number of clusters is 4 for both the sev and ser.2 distances. The SKM algorithm, on the other hand, results in 5 clusters as the best number of clusters in terms of the medoid-based shadow value, with the se distance (Table 3). In addition to the 3-dimensional plot of the original variables (Figure 1) and the principal components plot (Figure 2), the cluster validation tables (Tables 2 and 3) present a similar result. Figures 3 and 4, moreover, depict well separated clusters, indicated by a clear distinction in both compactness and separation. While the compactness is shown by the thick lines within a cluster, the separation is pointed out by the thin lines between clusters. Thus, we prefer 4 clusters as the most suitable number of clusters, because it is also practically easier to measure and monitor the individual food security state. Cluster interpretation Cluster 1 (14%): fat. It has higher fat consumption than the population mean, meaning that its fat calorie is assured. However, the calorie of the protein is similar to that of the population mean. Cluster 2 (7%): carbohydrates. The calorie from the carbohydrates in this cluster is well above the population mean, indicating a very sufficient carbohydrate intake. Cluster 3 (71%): lack of calorie. It has lower calories of protein, fat, and carbohydrates compared to the population mean, meaning that the members of this cluster have an unbalanced and limited nutrition intake. It is surprising that this cluster consists of 71% of the respondents. Cluster 4 (7%): abundance of calorie. This cluster is the opposite of cluster 3. While cluster 3 lacks calories, cluster 4 consumes many calories. Similar to cluster 2, it consists of one individual only. This individual comes from a high-income family, drinks milk and its variants less frequently, and very frequently eats home-made food.
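The medoid-based shadow values reported in Tables 2 and 3 can be sketched in code. The exact formula is not reproduced in the text above, so the sketch below assumes one natural formulation based on the distances to the closest and second-closest medoids, oriented so that larger values indicate better separation; it is an illustration under that assumption, not necessarily the exact index implemented in the kmed package, and the toy data are hypothetical.

```python
# Hedged sketch of a medoid-based shadow-value style validation index.
# Assumption: for each individual x, with delta1 = distance to its closest
# medoid m(x) and delta2 = distance to its second-closest medoid m'(x),
# the index is (delta2 - delta1) / delta2, averaged over individuals, so
# that values near 1 indicate well-separated clusters. This is one plausible
# formulation, not necessarily the one used in the kmed package.
import numpy as np

def medoid_shadow_value(dist, medoid_idx):
    """dist: (n, n) distance matrix; medoid_idx: indices of the k medoids."""
    d_to_medoids = dist[:, medoid_idx]            # (n, k) distances to medoids
    order = np.sort(d_to_medoids, axis=1)
    delta1, delta2 = order[:, 0], order[:, 1]     # closest and second closest
    msv = (delta2 - delta1) / np.where(delta2 > 0, delta2, 1.0)
    return msv.mean()

# Toy example with 6 individuals and 2 candidate medoids (indices 0 and 3).
X = np.array([[1.0], [1.2], [0.9], [5.0], [5.3], [4.8]])
dist = np.abs(X - X.T)
print(medoid_shadow_value(dist, [0, 3]))
```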
The characteristics of the 4 clusters, which were presented in a barplot, were as follows: cluster 1 had an abundance of fat, while cluster 2 was characterized by very sufficient carbohydrates. Cluster 3 had a shortage of protein, fat, and carbohydrates, while cluster 4 had an abundance of them.
2021-05-21T16:58:12.980Z
2021-03-01T00:00:00.000
{ "year": 2021, "sha1": "2c1fbecd869127ff5db5e8ef4ddc87108ba09a40", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1863/1/012069", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "635bfd36bbf1c17172b146d2c5b54f5cf95ec500", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
35482672
pes2o/s2orc
v3-fos-license
An Investigation on the Barriers and Facilitators of the Implementation of Electronic Health Records ( EHR ) The application of technology in health care, in the form of electronic health records (EHR), is the most important and necessary issue in order to improve the quality of health care, and studies have shown that, not only is it a way to integrate information and represent the condition of patients, and a dynamic source for health care, however it leads to gain access to clinical information and records, electronic communications, comprehensive training and management, and ultimately enhancing the public health; the aim of this study is to investigate the factors influencing the implementation of EHR, which are known as barriers and facilitators. The research is conducted in the form of a review research, and with the help of the Introduction Information technology has had a profound impact on various businesses; the health care institutions have not been an exception either.E-health is a new arena between information, public health, and commerce [1].The application of technology in health care, in the form of EHR, is the most important and necessary issue in order to improve the quality of health care, and studies have shown that, not only is it a way to integrate information and represent the condition of patients, and a dynamic source for health care, however it leads to gain access to clinical information and records, electronic communications, comprehensive training and management, and ultimately, enhancing the public health [2].The priority of countries for the establishment of e-health is different, and its constituents are referred to by different names, e-health is founded based on EHR [3]. EHR is a confidential and secure record related to an individual's whole lifetime that includes information about the individual's health care records in the health care system.This record is electronically accessible for authorized service providers at any time and any places, and it is designed in order to facilitate data sharing between health care providing organizations [4].This definition is presented by the World Health Organization [5]. In a study in Canada, it has been shown that EHR is a key factor for the integration of different systems, which makes a safe and effective health care possible for every Canadian [6] that can improve the quality of care for patients, through sharing relevant, timely, and updated information among a care providing team.[4] The resistance of physicians and other health care staff towards the adoption and use of health information systems and electronic medical records is one of the important barriers that delays the successful adoption and implementation of such systems [7].The Medical Institute comments that the broad adoption of EHR can play a pivotal role in improving the patient safety and health care quality, and also it may reduce the costs of providing outpatient services [8]. And also several studies identify facilitators and barriers to the adoption of EHR, such as: costs, difficult implementation, resistance of physicians, and organizational features, such as: the hospital size, ownership, and training status [9]. 
In addition to the above mentioned cases, resistance of some medical and professional staff towards changing from a manual system to an electronic system may be a problem in developing countries.Most health and information managers are aware that this change may take time or moderate the behavior and motivation of health staff to a small extent.The reason for changing from a manual system to an electronic system is important.The attitude toward EHR is not constant, and this is both a challenge and strength in different societies [10].For this reason, the physicians' active support, and use of EHR for its benefits are required, and identifying the possible barriers to the implementation, from the physicians' view, is essential and important [11]. In a study, conducted on the successful selection and implementation of EHR in small outpatient institutions in the United States of America, it is shown that the experience of the implementation of EHR depends on various factors that include: technology, training, leadership, the change management process, and the unique characteristics of the outpatient environment [5]. But despite the potential benefits of EHR, its implementation is facing executive limitations and barriers, the most important limitations include: cost limitations, technical limitations, standardization limitations, individuals' attitudinal-behavioral limitations, and organizational limitations, researches show that individuals' attitudinal-behavioral limitations or resistance to change plays a greater role than other limitations [12]. In general, extensive attention has been paid to the capacity of EHR for reducing the health-care costs and significant improvement of the health care quality [13].Despite these attentions and serious concerns about the implementation of EHR in developed and developing countries, studies have shown that there is a wide gap between planning to start e-health systems and success in the implementation of such systems, especially in developing countries, as well as achieving the primary objectives and expected benefits [7] [14], that indicates the existence of serious barriers on the way of the implementation of this project, and it requires attention to the facilitators of the program. Hence, this study tries to identify the barriers to and facilitators of the implementation of EHR and provide solutions to overcome the barriers and take advantage of the facilitators in order to optimize the operation of implementing this important issue. Method The research method is a review research.The keywords of EHR; barriers to and facilitators of EHR were searched in these databases: SID, ELSEVIER, and PUBMED. Several criteria were intended for searching the articles that include the following items: 1) The year of publication: The articles related to the years before 2008 were excluded; 2) Articles, whose full texts can be downloaded for free, were investigated; 3) Articles which contained the barriers and facilitators were extracted; 4) Articles about the EHR subsystems were excluded. Finally, 19 articles were selected according to the above criteria, and their full texts were studied Figure 1. 
Findings In a study conducted by Marie-Pier Gagnon in Canada, 10 factors influencing the implementation of EHR are mentioned.These 10 factors include: perceived usefulness, efficiency, motivation, participation of end-users in the implementation, the interaction between the patient and clinical staff, lack of time and workload, available resources, management, the expected output, and interdepartmental interaction capability.Another study entitled "Barriers to the adoption of Electronic Health Records" has expressed that one of the barriers is denial and resistance by physicians, which is for a variety of reasons, such as: Lack of time, high costs, lack of computer skills, disruption in the working procedure, concerns about security and privacy, disruption in communication among users, disruption in physician-patient relationship, lack of motivation, complexity, lack of adequate physical space, concerns about the inability to choose a suitable EHR system, lack of technical supports, lack of interdepartmental interaction capability, the lack of access to computers and computer literacy, lack of trust in sellers, concerns about entering data in the system, insufficient training by sellers after selling the system and lack of physicians' access to the sellers' technical supports, insufficient exchange of data, concerns about the acceptance of the system by patients, inadequate formal training and training classes, low speed in some units, lack of the system integrity for inpatients and outpatients, and lack of wireless communication with some nursing homes and clinics [8]. The results of a study in Saudi Arabia, divided the barriers to the implementation of EHR, into 6 categories: barriers related to human resources, financial barriers, legal barriers, organizational barriers, technical barriers, barriers related to hospital staff [7].Also, in another categorization, barriers and facilitators were classified into 6 categories: 1) Features related to users: Learning, typing skills, perceived usefulness, motivation, strategy (procedure), and other items such as: Remembering or forgetting passwords, and how to work with the system; 2) Features related to the system: Hardware or software features, speed, system performance, and usability; 3) Supports from other groups: Technical supports, formal training, informal supports from colleagues 4) Organizational supports: Extra time (working with the system or communicating with patients), and intraorganizational integration 5) Environmental factors: Physical space, wireless communication, and social environment (a physician does not feel comfortable, when typing and talking to the patient at the same time); 6) General controls: Compulsory use of EHR and the lack of any alternative such as a print or copy facilitate the use of the system, but in cases where no type of data were entered about a patient, this would be considered a barrier [9].A study by Mokhtari et al. 
in Isfahan, has investigated the challenges and barriers to the implementation of EHR, from physicians', administrators' and intellectuals' point of view, and classified them into two groups: infrastructural and structural, so that the challenges related to the infrastructure include 4 categories: issues related to information technology, lack of a common language between designers and users and lack of uniform definitions and contents, cultural issues, and lack of needs assessment.And the structural challenges include 3 categories: Instability in the implementation, legal violations, lack of integration and sharing the investment [10]. In Canada, 10 important factors were identified as the barriers to and facilitators of the implementation of EHR, from the viewpoint of the users (managers, physicians, patients and hospital staff), that include: Concerns regarding the design and technical aspects of the project, ease of use, interdepartmental performance, concerns about privacy and security of data, the issue of cost, efficiency, ability and familiarity with the system, motivation, interaction between patient and health care staff, lack of time and workload [4] [5].Another division has dealt with these aspects as barriers: A. Financial: The high start-up cost, the cost of maintaining, managing and controlling the system, uncertain payback period, lack of financial resources (budget).B. Technical: Lack of computer skills, lack of technical supports and trainings, system complexity, system limitations, lack of adapting to the users' needs, lack of trust, the need for standardization, lacking in hardware.C. Time: The time required for selecting, buying, and implementing the system, the time required for learning the system, the time required for entering the data, further time for each patient, the time required for transfer data from paper documents to computer.D. Psychological: Lack of trust in the system, the need for control.E. Social: Lack of trust in sellers, lack of support from other local and national organizations, intervention in the physician-patient relationship, lack of support from other colleagues, lack of support from managerial levels F. Legal: Concerns about security and privacy of data.G. Organizational: The size of organization, and the type of organization.H.The process of change: Lack of support from organizational culture, lack of motivation, lack of participation, lack of leadership [11]. The executive barriers to the implementation of EHR in Uremia were identified and classified in this way: Technical limitations, standardization, organizational changes, individuals' attitudinal and behavioral barriers, and costs [12].From another view, the barriers are divided into two categories: Personal: disruption in the working procedure, lack of understanding of the benefits, privacy and security of data, usability and flexibility, lack of time for training and redesigning the work processes, lack of computer skills, and organizational: financial costs, lack of adequate informational resources, problems of implementation, designing and testing the software, and the facilitators of the implementation include: the users' motivation, saving the physicians' time, choosing the appropriate type of system, the provability of usefulness, adequate IT systems [15]. 
From another aspect, the barriers and facilitators can be expressed as follows: Barriers: The density of the data, information transfer, abnormal signals, being a multitasking system, ergonomics, IT architecture, documentation load, workload, training, and required skills; Facilitators: Efficiency and availability, readiness, connection between experts, preference of providers, completeness of records, detecting errors, educational tools, and establishing a relation between the patient and provider, adaption to the users' needs, and management of working processes and procedures [16].Also financial, ethical, legal, social, and technical barriers, and facilitators such as: financial supports from the government and insurance companies, and participation of staff were identified in another study [17]. Discussion The results of the studies reviewed, show that the two important factors; the participation of end users in selecting and planning, and the status of the physicians' salaries, are considered as the main barrier to the adoption of EHR by physicians.Also, understanding the factors facilitating the implementation plays a key role in the success of the system.The two factors; motivation and the perceived usefulness of the project from the users' point of view, should be taken into consideration in the time of implementation.These two factors are closely related to each other, inasmuch as the perceived usefulness of the system increases the motivation for using it [4]. The findings show that there are a lot of barriers to the implementation of EHR, among which individuals' attitudinal and behavioral limitations and organizational changes, have obtained a high score [12].According to the factors mentioned in the study, improving the ability of experts for the easy and effective use of the system, enhances the quality and improves the safety, through the use of EHR [8].The results of the studies showed that the most effective factors include: efficiency, motivation, management, and the participation of end users.And factors such as technical aspects ease of use, available resources, and human resources, have limited effects.And security and privacy, the expected output, lack of time, and workload have relative effects, and also the relation between the patient and clinical staff, has no effects, in the process of implementing EHR [13] [18]. In this regard, the identification of barriers, such as: data, information, infrastructure, data exchange standards, and processing of codes and the vocabulary list of these systems, is of particular importance.Due to the challenges and complexities of EHR, the creation of this system requires a purposeful strategy, because it imposes changes to the way of performing the tasks that requires compatibility and adaptability to clinical processes.Therefore, the creation of these systems makes necessary the cooperation of different groups, such as: providers, users, designers, and experts in health information management (medical records) [19] [20]. 
Conclusions EHR is a complicated and multidimensional project whose implementation is affected by many factors. A factor that acts as a barrier in one situation can also be a facilitator in another; motivation is such a factor. If users do not have sufficient motivation to work with the system, they will resist it and refuse to use it, while in another situation, if the necessary motivation is provided, it will contribute to the project as a facilitator. What is certain is that cost and time are always regarded as two deterrent factors. For a successful implementation of the project, it is necessary to evaluate the readiness of the organization for adoption and implementation. Individuals' attitudinal and behavioral barriers, such as resistance to change and pessimism about the future of the project, require changes in the culture and atmosphere of the organization. Also, since EHR is an extensive and large-scale project, it needs intersectoral coordination and requires basic infrastructures, such as telecommunication infrastructure and message exchange standards, as well as government support and private-sector investment to cover the costs associated with the start-up, maintenance, and control of the project. Figure 1. Related to the searched articles.
2017-08-15T11:14:45.351Z
2015-12-09T00:00:00.000
{ "year": 2015, "sha1": "62a540eddd7ad1105eee5844f706ed6b3869d21f", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=61902", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "62a540eddd7ad1105eee5844f706ed6b3869d21f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247384296
pes2o/s2orc
v3-fos-license
A Pyrene‐Triazacyclononane Anchor Affords High Operational Stability for CO2RR by a CNT‐Supported Histidine‐Tagged CODH Abstract An original 1‐acetato‐4‐(1‐pyrenyl)‐1,4,7‐triazacyclononane (AcPyTACN) was synthesized for the immobilization of a His‐tagged recombinant CODH from Rhodospirillum rubrum (RrCODH) on carbon‐nanotube electrodes. The strong binding of the enzyme at the Ni‐AcPyTACN complex affords a high current density of 4.9 mA cm−2 towards electroenzymatic CO2 reduction and a high stability of more than 6×106 TON when integrated on a gas‐diffusion bioelectrode. Materials and methods Materials and Instruments. 1-pyrenebutyric acid adamantyl amide was prepared as previously described. [1] All reagents were purchased from Sigma Aldrich. Commercial grade thin Multi-Walled Carbon Nanotubes (MWCNT, 9.5 nm diameter, purity > 99%). Carbon nanomaterials were used as received without any purification. RecRrCODH was co-produced in presence of the three Ni-chaperones (RrCooC, RrCooT and RrCooJ) and isolated as previously described [2] . When not used, the enzymes were stored at 4 °C. All the reagents were used without further purification. All solvents were of analytical grade. Distilled water was passed through a Milli-Q water purification system. Acetonitrile (HPLC) grade used for electrochemistry was obtained from VWR chemicals and used after drying on 4 Ȧ molecular sieves. NMR spectra were recorded on a Bruker AM 300 ( 1 H at 300 MHz, 13 C at 75 MHz) or a Bruker Avance 400 ( 1 H at 400 MHz, 13 C at 100 MHz). Chemical shifts are given relative to solvent residual peak. Mass spectra were recorded on a Bruker Esquire 3000 (ESI/Ion Trap) equipment. Electrochemical analysis. The electrochemical experiments in aqueous media were performed in 50 mM TrisHCl buffer pH 8.5 in a three-electrode electrochemical cell, using a Biologic VMP3 Multi Potentiostat, inside an anaerobic glove box (O2 <2 ppm, Jacomex). The surface of GC electrodes was polished with a 2 μm diamond paste purchased from Presi (France) and rinsed successively with water, acetone, and ethanol. A Pt wire placed was used as counter electrode, and the SCE or Ag/AgCl served as reference electrodes. All current densities are given considering the geometrical surface of the MWCNT-modified electrode (0.07 cm -2 ). Oxygen concentrations were measured in the electrolyte by using a Neofox Oxygen Sensing System from OceanOptics. Synthetic procedures for compound 1 and 2: Addition of 1-Pyrenylmethyl bromide (555mg, 1.87mmol) to a solution of tacn orthoamide (260 mg, 1.87 mmol) in tetrahydrofuran (6 ml) produced a precipitate almost immediately. Stirring was continued for another 30 minutes after which the product was filtered and washed with absolute ethanol (2 x 2 ml) and ether (3 x 2 ml). The green solid was then dissolved in water (6 mL) and heated at reflux for 4 hours. The solution pH was adjusted to 12 with NaOH, the product was extracted into chloroform (4 x 10 ml), the extracts dried over magnesium sulfate and the solvent removed under reduced pressure to give 1 (411 mg, 87 % yield). 1 (411mg, 1.1 mmol) was then dissolved in acetonitrile (30 mL), sodium carbonate (3 g) and ethyl bromoacetate (252 µL, 2.3 mmol) were added. The mixture was stirred at reflux for 6 h. Solvents were removed under reduced pressure. The crude product was dissolved in a 5M aqueous solution of NaOH (pH=12) extracted into chloroform (3 x 10 ml). 
The gathered extracts were dried over magnesium sulfate and the solvent removed under reduced pressure to give a viscous, yellow/orange oil. The product was purified by column chromatography (CH2Cl2/acetone), affording a white powder (255 mg, 63% yield). Synthetic procedures for AcPyTACN: Compound 2 (225 mg, 0.49 mmol) was dissolved in 5 M HCl (5 mL) and the solution refluxed for 3 h. Removal of the solvent gave the deprotected AcPyTACN as a green solid. Preparation of the electrodes. The working electrodes were glassy carbon and gas diffusion electrodes. […] Dithionite (DTH) was diluted to 23 nM (monomer) in 50 mM TrisHCl pH 8.5, with 1 µM of DTT and DTH remaining. Then, the enzyme was exposed to air for various times (0-40 minutes) at 25 °C. The enzyme was further diluted to 5 nM (monomer) in anaerobic 50 mM TrisHCl pH 8.5, 5 mM dithiothreitol (DTT), 1 mM dithionite (DTH) and incubated for 5 minutes in this buffer in order to preactivate the enzyme before measuring the remaining specific activity. ICP-AES metal ion analysis. MWCNT films of 1.8 cm² were modified according to the procedure described in the previous section. Then, the modified MWCNT electrodes were mineralized in the presence of 0.6 mL HNO3 (65%) at 60 °C for 24 h. In order to remove traces of carbon nanotubes, the solution was first centrifuged for 10 min at 5000 rpm, filtered on a glass filter, and washed with 10% nitric acid solution before completing the volume to 6 mL with pure water. The metal concentration of the supernatant was analyzed by inductively coupled plasma atomic emission spectroscopy (ICP-AES) (Shimadzu ICP 9000 with mini plasma torch in axial reading mode). Standard solutions of Ni and Fe for atomic absorption spectroscopy (Sigma Aldrich) were used for quantification (calibration curve between 1.9 and 1000 μg L−1 in 10% HNO3 (Fluka)). The results are presented in Table S1. Table S1. ICP-AES metal ion analysis (concentration in μM in the supernatants). Electrochemical analysis. The Langmuir-Freundlich model was employed to fit the experimental data using OriginPro 2020, according to Equation (1) [3,4], where %Losseq is the percentage of CO activity loss at equilibrium, %lossmax is the maximum percentage of CO activity loss at the maximum imidazole concentration, KImid app is the apparent association constant in water between imidazole and the AcPyTACN sites at the modified electrode, and n is the Langmuir-Freundlich coefficient. Table S2 shows the Langmuir-Freundlich model parameters obtained from fitting the curves in Figure 4A.
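Equation (1) itself is not reproduced in the extracted text above. A common form of the Langmuir-Freundlich model that is consistent with the parameters described there is %Loss_eq = %loss_max · (K·C)^n / (1 + (K·C)^n), with C the imidazole concentration. The following Python sketch fits this assumed form with SciPy rather than OriginPro; the functional form and the concentration/activity-loss values are assumptions for illustration, not the study's data.

```python
# Hedged sketch: fitting a Langmuir-Freundlich binding model.
# The functional form and the imidazole concentration / activity-loss data
# below are assumptions for illustration; they are not the values of the study.
import numpy as np
from scipy.optimize import curve_fit

def langmuir_freundlich(c, loss_max, k_app, n):
    """%Loss_eq as a function of imidazole concentration c (assumed Equation (1) form)."""
    kc_n = (k_app * c) ** n
    return loss_max * kc_n / (1.0 + kc_n)

# Hypothetical imidazole concentrations (mM) and percentage of CO activity loss.
c_imid = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 50.0])
loss = np.array([5.0, 18.0, 30.0, 62.0, 75.0, 92.0])

popt, _ = curve_fit(langmuir_freundlich, c_imid, loss, p0=[100.0, 0.5, 1.0])
loss_max, k_app, n = popt
print(f"%loss_max = {loss_max:.1f}, K_app = {k_app:.2f} mM^-1, n = {n:.2f}")
```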
2022-03-12T06:23:48.831Z
2022-03-11T00:00:00.000
{ "year": 2022, "sha1": "4d1d337cd1fbfb19fc3a9841aa100dc4a19606f2", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1002/anie.202117212", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "ebf16623455d1792de9e3732d3ed5379a37504c0", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
122128914
pes2o/s2orc
v3-fos-license
EFFECT OF ROUGHNESS GEOMETRY ON HEAT TRANSFER AND FRICTION CHARACTERISTICS OF PCM STORAGE UNIT FOR NIGHT COOLNESS STORAGE IN SUMMER SEASON This paper presents a theoretical analysis of a thermal storage unit using phase change material (PCM) as the storage medium. The storage unit consists of parallel rectangular channels for the air flow, which are separated by the phase change storage material. The purpose of the storage unit is to absorb the night coolness and to provide cooled air at comfort temperature during day time in the summer season. The MATLAB simulation tool has been used to compute the air temperature variation with location as well as time, and the charging and discharging times of the storage unit. The phase change material used for the analysis is selected in such a way that its melting point lies between the comfort temperature and the minimum night ambient temperature. The air flow rate needed for charging of the PCM is approximately four times greater than the flow rate required during day time to achieve comfort temperature for approximately eight hours, due to the limited summer night time (only eight hours). The length of storage unit for which the NTU value is greater than or equal to five will give an exit air temperature equal to the PCM temperature for the case of latent heat utilization. It is found that artificial roughness on the duct surface effectively reduces the length of the storage unit at the cost of some extra pressure drop across the duct. Introduction: Energy storage not only plays an important role in conserving energy but also improves the performance and reliability of a wide range of energy systems, and becomes more important where the energy sources are intermittent. Due to increased environmental concerns and the limited nature of fossil fuels, passive ways of space conditioning are getting more attention, like solar cooling/heating, storage of night coolness, earth-coupled cooling/heating systems, PCM-based cooling/heating systems, etc. Since many of the resources on which passive techniques depend are intermittent in nature, storage of these resources always plays a vital role for their continuous utilization. So an efficient and reliable thermal energy storage system plays a vital role when talking about passive techniques. Thermal energy storage can be in the form of sensible heat of a liquid or solid, storage of high pressure steam, heat of hydration, or utilization of the heat of fusion or heat of evaporation. Among all the above mentioned storage techniques, the latent heat storage technique is getting more attraction due to its very high energy storage density and the smaller temperature difference when storing the energy and releasing it, as compared to the sensible storage technique. Using the coolness of night to achieve comfort temperatures in a space is one of the passive ways of cooling. If this night coolness is stored and used during day time to achieve the comfort temperatures, mechanical cooling can be either totally eliminated during day time or at least limited to a certain period of the day. The same can be practiced during winters: if solar energy during day time is stored and used for night time heating, a large amount of fossil fuels can be left unburnt, which may help in the reduction of pollutant gases like NOx and CO2. In this paper the study of a PCM storage unit for a night coolness storage system has been done with an objective to reduce the length of the storage system by using different artificial roughness geometries, which in turn reduces the cost of its use.
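The NTU ≥ 5 criterion mentioned in the abstract can be illustrated with a short sketch. Assuming the PCM surface stays at its melting temperature during latent-heat operation, the air outlet temperature of a channel follows the standard constant-wall-temperature relation T_out = T_pcm + (T_in − T_pcm)·exp(−NTU) with NTU = hA/(ṁ·cp). All numerical values below (heat transfer coefficient, geometry, flow rate) are illustrative assumptions, not the paper's design data.

```python
# Hedged sketch: exit air temperature of one storage channel vs. NTU,
# assuming the PCM wall is at its (constant) melting temperature and using
# the standard constant-wall-temperature relation. All numbers are illustrative.
import math

def exit_temperature(t_in, t_pcm, h, area, m_dot, cp=1006.0):
    """T_out from NTU = h*A/(m_dot*cp) and T_out = T_pcm + (T_in - T_pcm)*exp(-NTU)."""
    ntu = h * area / (m_dot * cp)
    return t_pcm + (t_in - t_pcm) * math.exp(-ntu), ntu

# Illustrative day-time discharge: warm ambient air flowing over PCM at 24 degC.
t_in, t_pcm = 38.0, 24.0          # degC
h = 25.0                          # W/m^2K, assumed convective coefficient
width, length = 0.45, 2.0         # m, channel width and length
m_dot = 0.01                      # kg/s of air per channel

t_out, ntu = exit_temperature(t_in, t_pcm, h, 2 * width * length, m_dot)
print(f"NTU = {ntu:.2f}, T_out = {t_out:.2f} degC")
# Once NTU >= 5, exp(-NTU) < 0.007, so T_out is essentially the PCM temperature.
```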
Literature survey: Hed and Bellander [1] presented a method to simulate a PCM air heat exchanger. The PCM used has its phase change over a given temperature range. The aim was to find a model that will fit into a finite-difference based indoor climate and energy simulation software. To do that, a fictive heat transfer coefficient is established. The fictive heat transfer coefficient includes aspects of the geometry and the airflow in the heat exchanger as well as the material properties of the PCM. Alkilani, Sopian, Sohif and Alghoul [2] presented a theoretical investigation of output air temperatures due to a discharge process in a solar air heater integrated with a phase change material. The phase change material unit consists of an inline single row of cylinders containing PCM. The PCM consists of paraffin wax with a mass fraction of 0.5% aluminum powder to enhance the heat transfer. Stritih and Butala [3] presented an experimental and numerical analysis of cooling buildings using night-time cold accumulation in phase change material (PCM) with constant inlet temperatures. The comparison of experimental and numerical results shows good agreement. Arkar and Medved [4] presented a numerical study of the free cooling concept with varying inlet temperatures using RT20 paraffin as the phase change material, integrated into the ventilation system of the building. The cylindrical latent heat thermal energy storage (LHTES) device was filled with spheres of encapsulated RT20 paraffin. In this research a parametric study of the storage unit was carried out and ambient air was used as the inlet air. The correlation between the climatic conditions and the free cooling potential was investigated by Medved and Arkar [5] for different cities of Europe and for the case of a cylindrical LHTES with a packed bed of spheres encapsulated with PCM that is integrated into a building's mechanical ventilation system. For an experimental verification of the LHTES's numerical model a commercially available PCM (RT20 paraffin, Rubitherm GmbH) with a latent heat of 142 kJ/kg was used. This PCM has a relatively large phase change temperature range. Morrison and Abdel-Khalik [6] developed a theoretical model for studying the transient behavior of a phase-change energy storage (PCES) unit and studied the performance of solar heating systems using both air and liquid as working fluid. This model is based on three assumptions: axial conduction in the flow direction is negligible; the Biot number is low enough that temperature variations normal to the flow can be neglected; and heat loss from the unit can be ignored. Solomon [7] studied the behaviour of an array of PCM cylinders as a thermal storage unit under assumptions that allow the study to be reduced to a single row of N cylinders. The heat transfer process in every cylinder is radially symmetric, and this method is recommended for designing such systems and for their simulation; it is used by the authors in this study to predict the air temperature and freezing time. Joseph Virgone et al. [8] performed an assessment of PCM wallboard usage for the renovation of a tertiary (i.e. lightweight) building. For this purpose, two identical rooms of a renovated tertiary building were tested, one equipped with PCM wallboard, the other being "classically" renovated. The results show that the PCM wallboards enhance the thermal comfort of occupants due to air temperature and radiative effects of the walls. Gideon Susman et al.
[9] constructed PCM modules from a paraffin composite and tested them in an occupied London office in the summer season. Design variations tested the effect on heat transfer of a black paint or aluminium surface, the effect of different phase transition zones and the effect of discharging heat inside or outside. The module temperatures were monitored along with airflow rate, air temperature and globe temperature. Black modules transfer heat and exhaust latent storage capacity significantly quicker than aluminium modules, due to radiant exchange.

The present analysis examines the effect of the limited summer night time and night temperature on the storage of night coolness using phase change materials in harsh summer climatic conditions, for utilization during day time to achieve comfort temperatures at the storage outlet. For this purpose the storage unit used is composed of multiple rectangular channels for the flow of the heat transfer fluid, which is air in this case, separated by the phase change storage material. The MATLAB simulation tool has been used to compute the air temperature variation with location as well as time, and the charging and discharging time of the storage unit. The effects of PCM mass, air flow rate and different inlet temperatures are considered both for the day time and night time operation of the storage unit. The melting point of the phase change material used for the analysis is selected in such a way that it lies between the comfort temperature and the minimum night ambient temperature. Five artificial roughness geometries on the duct surface, ordered by their ability to create turbulence, together with a smooth surface, have been selected. The correlations for heat transfer coefficient and coefficient of friction developed by the respective investigators have been used to calculate the required length of the storage unit and the extra pressure drop across the duct.

Methodology: For the analysis of the storage of night coolness using phase change materials, the storage unit configuration is displayed in Fig. 1. This system consists of two essential parts, PCM channels and air flow ducts, in alternating order. The number of air channels is taken as five. The dimensions of the air flow duct are the length of the storage unit, the width of the storage unit (taken as 0.45 m), and the air gap, which depends on the air flow rate. The simplicity of this type of storage unit is the main advantage of its selection. Such a model has been studied by Morrison & Abdel-Khalik [6] for thermal energy storage coupled with a solar water heating system. In this work the same type of system is studied for night coolness storage and day time utilization for cooling during the summer season. The figure below shows the portion of the PCM container and air gap used for the analysis. Half the thickness of the plate and, correspondingly, half of its air gap are considered here; the other portion of the plate, with the same thickness and the same air gap, will behave in a similar way.
Fig. 1: Thermal energy storage unit to be analyzed for night coolness storage.

The energy balance between the flowing air and the PCM plate, equation (1), is written in finite difference form for the numerical solution. The convection heat transfer coefficient is determined from the Nusselt number, using the standard relation for fully developed laminar flow in a smooth duct [10] and the standard correlation for fully developed turbulent flow in a smooth duct [11], the latter with separate forms for cooling and for heating. The heat carried away by the air in each time interval is also computed. The NTU (number of transfer units) for the storage unit is defined in the usual way from the convective conductance of the air channel and the heat capacity rate of the air stream.

The indoor comfort temperature for any region is calculated from a relation, developed by Nicol et al. and Raja et al. [12,13], that couples it to the mean daily outdoor temperature. The comfort temperature range may vary by about ±2 °C for an air conditioned space. To overcome the low thermal conductivity of the PCM, a powder of a material with good conductivity, such as copper or aluminium powder, can be added; to keep the system cost low, aluminium powder (1.5%) is preferred here, and the physical properties of the resulting compound are calculated from mixture relations based on the volume and mass fractions of the PCM and the aluminium powder [2].

Effect of roughness geometry on heat transfer and friction characteristics: The use of artificial roughness in the form of ribs on the heat transfer plate has been found to be an efficient method of enhancing the performance and reducing the size of the system. There are several parameters that characterize the roughness elements, but for heat exchangers the most preferred roughness geometry is the repeated-rib type, which is described by the dimensionless parameters relative roughness height e/D_h and relative roughness pitch P/e. The friction factor and Nusselt number are functions of these dimensionless parameters, assuming that the rib thickness is small relative to the rib spacing or pitch. Although the repeated-rib surface is considered a roughness geometry, it may also be viewed as a problem of boundary layer separation and reattachment. The rib creates turbulence by generating flow separation regions (vortices), one on each side of the rib, which results in enhancement of heat transfer as well as friction. The pressure drop across the duct is obtained from the friction factor. The roughness geometries considered are circular ribs [14], V-shaped ribs [16], wedge-shaped ribs [17] and chamfered rib-grooves [18]. If e >> δ' (the laminar sublayer thickness), roughness has more effect on fluid pressure drop than on heat transfer, owing to probable interference with the turbulence induced in the already turbulent core. If e ≥ δ', the intended purpose of a noticeable increase in heat transfer with a moderate fluid pressure drop can be served. The optimum chamfering angle on the basis of thermohydraulic performance has been reported to be 15-18° [19]. The optimum relative groove position g/P is about 0.4. The induced form drag is reduced by changing the angle of attack of the ribs from 90° (transverse), and a better thermal to hydraulic performance is obtained with the optimum angle of attack. As the angle of attack decreases, the friction factor reduces rapidly; however, there is only a marginal decrease in Nusselt number with the change in angle of attack from 90° to 45°.
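The friction penalty just described can be illustrated numerically. The sketch below (Python rather than the study's MATLAB program) evaluates the Darcy-Weisbach pressure drop, ΔP = f (L/D_h) ρV²/2, for a smooth duct using the Blasius friction factor, and for a ribbed duct using assumed enhancement factors for friction and Nusselt number; these multipliers are illustrative placeholders, not the investigators' correlations referenced in the text.

```python
# Hedged sketch of the smooth-vs-ribbed duct trade-off discussed above.
# The rib enhancement factors below are assumed placeholders, not the
# published correlations for the rib geometries cited in the text.
def blasius_friction(re):
    """Darcy friction factor for turbulent flow in a smooth duct: f = 0.316 Re^-0.25."""
    return 0.316 * re ** -0.25

def pressure_drop(f, length, d_h, rho, velocity):
    """Darcy-Weisbach pressure drop: dP = f * (L / D_h) * rho * V^2 / 2  [Pa]."""
    return f * (length / d_h) * rho * velocity ** 2 / 2.0

# Illustrative duct conditions (assumed): air at ~2.5 m/s in a narrow channel.
rho, velocity, d_h, re = 1.15, 2.5, 0.05, 8.0e3
f_smooth = blasius_friction(re)

# Assumed rib enhancements: Nusselt number doubles (so the required duct
# length roughly halves), while the friction factor triples.
nu_ratio, f_ratio = 2.0, 3.0
length_smooth = 3.0                       # m, smooth-duct length (assumed)
length_rough = length_smooth / nu_ratio

for label, f, length in (("smooth", f_smooth, length_smooth),
                         ("ribbed", f_ratio * f_smooth, length_rough)):
    dp = pressure_drop(f, length, d_h, rho, velocity)
    print(f"{label}: L = {length:.1f} m, f = {f:.4f}, dP = {dp:.1f} Pa")
```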
Results and discussion: Results obtained from the MATLAB computer programs are presented here to show the effect of different parameters during charging and discharging of the PCM storage unit. The thermal conductivity of the PCM is increased by mixing in aluminium powder to reduce the Biot number below 0.1 for the PCM container. The main aim of this work is to obtain comfort temperature from the storage unit when it is hot in the day time during the summer season, by utilizing night coolness with air as the heat transfer fluid. The variation of air flow rate and length of storage unit, and its effect on the air outlet temperature, is shown in Fig. 6 for an inlet air temperature of 40 °C. Eight air flow rates have been considered, varying from 10 m³/h to 80 m³/h in increments of 10 m³/h. All the flow rates mentioned here are the flow rates for a single air channel, i.e. the total flow rate is five times these values. For the analysis it is assumed that the PCM is fully charged at its melting point, and no sensible heat is considered for this part of the analysis only. The reason for not considering sensible heat is that the latent heat action is of more interest than sensible heat for this type of storage system (the sensible heat is about 10% of the latent heat). Here the velocity of air has been assumed constant at 2.5 m/s and the air gap varies according to the flow rate required. It has been ensured that, for all air flow rates, the air gap required will also generate fully developed flow within 10% of the minimum length of the storage unit required to obtain comfort temperature. If we keep the air gap constant and make the air velocity variable, then either the flow will not be fully developed (at lower air flow rates) or the power required for the fan will be high (at higher air flow rates). The outlet air temperature decreases exponentially with increasing length of the storage unit. The outlet air temperature for a higher air flow rate is higher than for a lower air flow rate at any length of the storage unit. At the lowest flow rate, which is 10 m³/h, the length of storage unit required is less than 1 meter. The minimum length of storage unit required for the highest flow rate is 3.7 meters. The length of storage unit corresponding to an NTU value of 5, as shown in Fig. 7, is the minimum length at which the temperature of the air becomes almost equal to the material (PCM) temperature, while only the latent heat of the material is being considered (Cengel, 2006). Therefore, increasing the length of the storage unit beyond this value has no effect on the outlet temperature of the air. NTU depends upon the heat transfer coefficient and the surface area of the path along which the air is moving. For the surface area, the width of the channel is kept constant while the length of the air channel can be increased or decreased. So, according to Fig. 4 and Fig. 7, higher air flow rates will require a much larger storage length to exchange heat with the PCM and remain within comfort temperature limits at the exit of the storage unit. On the other hand, limitations of space availability and cost are always associated with the length of the storage unit. The best approach is to first choose the minimum length that provides comfort temperature at the start of operation at the required flow rate, and then use the PCM mass to keep the exit air temperature in the comfort range for the desired time period.
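The NTU-based sizing argument above can be made concrete with a short sketch. Assuming that, while latent heat is available, the PCM surface stays at its melting temperature, each air channel behaves like a constant-wall-temperature heat exchanger, so T_out = T_pcm + (T_in - T_pcm)·exp(-NTU) with NTU = hA/(ṁc_p); the exit temperature is within about 1% of the PCM temperature once NTU ≥ 5. The heat transfer coefficient, flow rate and geometry used below are illustrative assumptions (Python rather than the authors' MATLAB program), not values taken from the study.

```python
# Minimal sketch of the NTU sizing used above: exit air temperature of one
# channel whose PCM walls are held at the melting temperature. All numbers
# (h, flow rate, geometry) are illustrative assumptions, not the study's values.
import math

def ntu(h, area, m_dot, cp):
    """Number of transfer units, NTU = h*A / (m_dot*cp)."""
    return h * area / (m_dot * cp)

def exit_temperature(t_in, t_pcm, ntu_value):
    """Constant-wall-temperature channel: T_out = T_pcm + (T_in - T_pcm)*exp(-NTU)."""
    return t_pcm + (t_in - t_pcm) * math.exp(-ntu_value)

h = 25.0                          # W/m^2K, assumed convective coefficient
width = 0.45                      # m, channel width quoted in the text
m_dot = 40.0 / 3600.0 * 1.15      # kg/s for ~40 m^3/h of air at ~1.15 kg/m^3
cp = 1006.0                       # J/kgK, specific heat of air
t_in, t_pcm = 40.0, 27.0          # degC, day-time inlet and PCM melting point

for length in (1.0, 2.0, 3.0, 4.0):
    area = 2.0 * width * length   # two PCM faces bound each air channel
    n = ntu(h, area, m_dot, cp)
    t_out = exit_temperature(t_in, t_pcm, n)
    print(f"L = {length:.1f} m  NTU = {n:4.1f}  T_out = {t_out:5.2f} degC")
```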
Fig. 8 shows the effect of the mass of PCM on discharge time for different inlet air temperatures. It is assumed that initially the PCM is in solid form at its melting point of 27 °C. During the melting of the PCM its temperature remains constant, and therefore a constant outlet air temperature within the comfort range is obtained until the total mass of PCM has melted. Beyond this point the outlet air temperature increases rapidly, which indicates that the PCM temperature also increases rapidly, because the sensible heat is very small compared to the latent heat owing to the very low specific heat of the PCM. An increased inlet air temperature always provides less time at comfort temperature than a decreased inlet air temperature. A change in inlet air temperature does not have a noticeable effect on the discharge time of the storage unit, which is approximately 12 hours for all inlet air temperatures.

Fig. 2: Half portion of the PCM plate and air gap used for the analysis. The properties of the PCM-aluminium compound are expressed in terms of the volume fraction of PCM in the compound, the volume fraction of aluminium powder in the compound, the mass fraction of PCM in the compound, and the mass fraction of aluminium in the compound. The compound formed after mixing is used as the phase change material for night coolness storage.

Fig. 4 shows the various possible flow patterns downstream of a rib, as a function of the relative roughness pitch and relative roughness height. Flow separates at the rib, forms a widening free shear layer, and reattaches at a distance of 6-8 rib-roughness heights downstream of the rib. Reattachment does not occur for P/e less than about eight, except for chamfered rib or rib-groove roughness. The local heat transfer coefficients in the separated flow region are larger than those of an undisturbed boundary layer, and the wall shear stress is zero at the reattachment point; the maximum heat transfer occurs in the vicinity of the reattachment point. A reverse flow boundary layer originates at the reattachment point and tends toward redevelopment downstream of it. A value of P/e less than 10 indicates that the roughness elements are too close to allow the free shear layer to reattach.

Fig. 5 shows the maximum, minimum and comfort temperatures for the location Varanasi. The comfort temperature calculated from the above mentioned relation and the known climatic data for the month of June is 30 °C ± 2 °C.

Fig. 6: Outlet air temperature vs. length of storage unit during day time operation, T_in = 38 °C.
Fig. 11: Outlet air temperature vs. length of storage unit for various types of artificial roughness (during day time operation), flow rate 80 m³/h, T_in = 40 °C.
Table 1: Physical properties.
Table 2: Correlations for heat transfer and coefficient of friction for various types of artificial roughness.
2019-04-19T13:05:48.855Z
2013-01-01T00:00:00.000
{ "year": 2013, "sha1": "86dc731e70264b9d23425add122dcd2df1128a82", "oa_license": null, "oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=0354-98361200023S", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "80e1fa53fcb6b93486861bb5ec499f7cdd872cae", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Physics" ] }
36031593
pes2o/s2orc
v3-fos-license
Biological Computing Fundamentals and Futures The fields of computing and biology have begun to cross paths in new ways. In this paper a review of the current research in biological computing is presented. Fundamental concepts are introduced, and these foundational elements are explored to discuss the possibilities of a new computing paradigm. We assume the reader to possess a basic knowledge of biology and computer science.

Introduction It is easy to miss nature's influence and subsequent impact on living forms. This applies to our day to day activities as well. Humans use a variety of gadgets and gizmos without realizing that the gadget could be working on a pattern already patented and perfected by Mother Nature. Computers and software are no exception. The last few decades have ushered in the age of computers. Electronics have invaded all walks of life and we depend on electronics to accomplish most of our day to day activities. As predicted by Dr. Gordon E. Moore, modern day electronics has progressed with the miniaturization of electronic components. According to Dr. Moore, the miniaturization of integrated electronics will continue to be bettered once every 12-18 months with a reduction in cost (Moore, 1965). True to his prediction, modern day chips have up to 1 million transistors per mm². However, as with other things, miniaturization cannot continue forever; the laws of nature, and in particular physics, will soon catch up to impose a limit on the silicon chip. Such a limitation will not prevent us from progressing. The route is clear but the ways to reach it may be unusual. Imagine having billions of deoxyribonucleic acid (DNA) molecules instead of silicon chips powering the computer. The fact that silicon chips could even be replaced will be anathema to some, but we are well on our way to some surprises. Hence it is imperative that software engineers have an understanding, even if it just includes the basics, of microorganisms and how they will impact computing. Our fascination and its logical conclusion, which is reflected in this paper, is due to the behavioral similarity between microorganisms (DNA) and computers. As soon as you understand what microorganisms can do, relating that to a computer or a program that runs on a computer becomes easy. Much like microorganisms, computers have evolved over a period of time. However, time will tell if DNA will indeed play a prominent role in their march to future glory. It is our endeavor to shed light on biological computing through a lay person's eyes.

Concept This paper talks about how two diverse systems, biology and computers, are brought together to take mankind into the future. A basic understanding of the lowest unit of life (deoxyribonucleic acid, DNA) will help. People should not imagine that DNA will replace the CPU in biological computing. In our opinion such a scenario is at least two decades or more away from reality. As with other inventions, one can safely anticipate baby steps in this direction before conceiving bigger pictures. Although not exceeding a few microns in size, the DNA molecule has a number of tricks that will be useful for biological computing. One of them is the ability to generate proteins. Once programmed, by altering the cell chemically or changing the environment, the reprogrammed cell does its job to near perfection as per the changed environment. Another trick that may be useful is the ability of DNA to make exact copies of itself.
Imagine the advantage of having such molecules programmed for different purposes and the impact on applied sciences like medicine, agriculture, and various industries; in fact such molecules act like micro computers. There is no clear road map for this programmable feature to be taken advantage of to eventually replace the CPU. In essence, biological computing is about harnessing the enormous potential of DNA to the benefit of mankind by manipulating the DNA. Having laid down the concept, and to provide clarity to better understand and appreciate biological computing, we provide a brief introduction to DNA. We will also present the similarities between DNA and the computer, briefly provide information on current research, and finally touch upon trends, impact and future prospects.

Deoxyribonucleic Acid (DNA) This section provides a summary of DNA. This information can be found in any biology book but is condensed here to set up the discussion of bio-computing. The essence of life is enclosed in a 20 micron long substance called DNA. The structure of DNA (Figure 1) was first identified by Watson and Crick (1953). The earliest discovery of DNA was by the Swiss physician Fritz Miescher, in cell nuclei, as early as 1868. According to the Watson-Crick model, the DNA molecule consists of two polymer chains. Each chain comprises four types of residues (bases), namely A (adenyl), G (guanyl), T (thymidyl), and C (cytidyl). The sequence of bases in one chain may be entirely arbitrary, but the sequences in the two chains are strongly interconnected because of the complementarity principle, so that A is always opposite T; T is always opposite A; G is always opposite C; C is always opposite G. DNA was recognized as the most important molecule of living nature. In living organisms, DNA does not usually exist as a single molecule, but instead as a tightly-associated pair of molecules. These two long strands entwine like vines, in the shape of a double helix. The nucleotide repeats (structural units of DNA, Figure 1) contain both the segment of the backbone of the molecule, which holds the chain together, and a base, which interacts with the other DNA strand in the helix. In general, a base linked to a sugar is called a nucleoside, and a base linked to a sugar and one or more phosphate groups is called a nucleotide. If multiple nucleotides are linked together, as in DNA, this polymer is called a polynucleotide (Frank-Kamenetskii, 1997). At the time of the discovery of the structure by Watson and Crick it was a great step for mankind in the field of biology, but it was very difficult to have dreamt that half a century later it would also help mankind in another field: computing.

Biological Computing Simplified The above brief description of DNA may be Greek and Latin to pure computer engineers. We hope to change that via Figure 2. In the figure, under laboratory conditions it is possible to take part of a DNA molecule and engineer it to produce a particular protein (the end product of successful DNA transcription and translation is a protein). Computers use registers to flip bits between 1 and 0. In microorganisms the same "registers" and flip-flops occur, but at the DNA level. In the above example you can imagine that adenine, cytosine, guanine, and thymine are the registers involved in protein synthesis. Any change to this structure, or inhibition of the normal protein synthesis by changes in the environment, results in a completely new product; worse, in some cases no product is created.
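The base-pairing rule just described is simple enough to state in a few lines of code. A minimal sketch of the complementarity principle only; it says nothing about how a DNA computer would actually be programmed:

```python
# Minimal sketch of Watson-Crick complementarity: each base in one strand
# fixes the base opposite it in the partner strand.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement_strand(strand: str) -> str:
    """Base-by-base complement of a strand (the pairing rule only)."""
    return "".join(COMPLEMENT[base] for base in strand.upper())

def reverse_complement(strand: str) -> str:
    """The paired strand read in its own 5'->3' direction."""
    return complement_strand(strand)[::-1]

print(complement_strand("AATGC"))    # TTACG
print(reverse_complement("AATGC"))   # GCATT
```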
This is the whole idea of using DNA (refer to the Concept section) in biological computing. As you can see, the comparison between DNA and the computer is as close as one can imagine. DNA is ubiquitous in life forms and is self contained. Its intelligence and ability to adapt to changing conditions far surpass anything one could imagine. A double stranded DNA within a single cell is fully self contained. It works with clockwork precision, has the ability to repair itself, provide backup, create new patterns, and select the best for its survival. Most complex computers exhibit the above in one way or the other. Since DNA has to survive in nature, only the fittest survive, and hence the ability to adapt to a changing environment. However, the same environment (extreme heat, chemicals, ultraviolet rays, etc.) can sometimes cause changes to DNA that may make it lose some of its magic, and in some cases this can be catastrophic. In real life, DNA is intelligent enough to recover from catastrophic failures. There are many tools that it carries to successfully replicate and thereby pass on the important traits to its progeny. A few of them include redundancy, self recovery by protein synthesis/translation, and the ability to shut down malfunctioning parts of the DNA. Compare this with a computer and the software that runs the computer. Even a pure software engineer will now be able to link the computer to the DNA. In fact I would go as far as to say that what we know in computer jargon as "primer", "reusability", etc., has been in existence since time immemorial in the DNA molecule. Microsoft had in fact coined the term "DNA" in the late 1990s to market their distributed networking solutions (since then Microsoft has dropped it for whatever reason), and one can safely assume that they had borrowed it from biology. Table 1 below compares DNA with a modern day computer. We would like to touch upon a few of the points mentioned in the comparison table to highlight the benefits of taking biological computing to its next step, which is to make it a reality. The ability to store billions of data items is an important feature of DNA and hence of biological computing. While DNA can be measured in nanograms, the silicon chip is far behind when it comes to storage capacity. A single gram of DNA can store as much information as 1 trillion audio CDs (Fulk, 2003). This offers storage possibilities previously unheard of, and at the same time businesses can reduce the cost of storage and plough investments into other areas. While we are all familiar with Von Neumann's sequential architecture, which has stood the test of time, the fact that we could have millions of DNA molecules in a small vial allows us to think of massively parallel computing when using microorganisms. Parallel processing using DNA can achieve speeds that man could not have imagined. For comparison, the fastest supercomputers can perform around 10¹² operations per second, but even current results with DNA computing have produced levels of 10¹⁴ operations per second, or one hundred times faster. Experts believe that it should be possible to produce massively parallel processing in biological computers at a level of 10¹⁷ operations per second or more, a level that silicon-based computers will never be able to match (Fulk, 2003). Second, in the case of DNA computing, the biological reactions involved produce very little heat, wasting far less energy in the process.
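The storage-density claim above comes down to each base position carrying two bits of information. The toy mapping below illustrates only that counting argument; it is not how any actual DNA data-storage scheme encodes data:

```python
# Toy illustration of the "two bits per base" density argument, not a real
# DNA data-storage encoding.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def bytes_to_bases(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def bases_to_bytes(bases: str) -> bytes:
    bits = "".join(BASE_TO_BITS[b] for b in bases)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"DNA"
encoded = bytes_to_bases(message)
assert bases_to_bytes(encoded) == message
print(message, "->", encoded)   # 3 bytes become 12 bases (4 bases per byte)
```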
This allows these computing processes to be up to one billion times as energy efficient as their electronic counterparts. Third, the components of a computer with DNA as the primary unit are non-toxic, compared to current systems, which are highly toxic due to the use of chemicals and other materials that are not easily degradable. Not only is the material toxic, but in some cases the production of such materials also results in toxic byproducts. The damage of such toxic materials to the environment is unimaginable and the cost of cleaning up is also high. Lastly, DNA has the inbuilt ability to repair itself in case of any impact on its functioning. This type of self-healing is not possible in a hardware based computer. It may sound a bit like an H. G. Wells story, but imagine a computer that does not break down after a few years in operation and one that does not require a hardware upgrade. The benefits of moving towards biological computing appear immense.

Current Research Before embarking on this paper we did some research to find out where the world is in terms of biological computing. As one would expect, we see a lot of baby steps being taken in this field. Part of the reason is that software engineers need to first understand the biological sciences. It is a radically different field where there is no easy way to debug, or to fix and re-run a program. Take for example the "genetic circuit" worked on by Michael Elowitz and his team. The circuit consists of four genes engineered into a bacterium. Three of them work together to turn the fourth, which encodes a fluorescent protein, on and off. Although this circuit is a remarkable achievement, it doesn't keep great time: the span between tick and tock ranges anywhere from 120 minutes to 200 minutes. And with each clock running separately in each of many bacteria, coordination is a problem: watch one bacterium under a microscope and you'll see regular intervals of glowing and dimness as the gene for the fluorescent protein is turned on and off, but put a mass of the bacteria together and they will all be out of sync. This is a big first attempt and we have many miles to go. Another interesting work, with a name that almost rhymes like a software object, is being carried out by James J. Collins at Boston University. The main focus is on "genetic applets". Similar to what a Java applet is and does, the genetic applet is modeled on the same lines, i.e. programmatically altered to perform one or more functions repeatedly with perfection. One might wonder how such DNA molecules, programmed for one or a few functions, can one day replace the CPU. To answer this one must look into the work carried out by Dr. Thomas F. Knight. His team has forayed into what is known as amorphous computing. Knight's lab is working on techniques to exchange data between cells, and between cells and large scale computers, as communication between components is a fundamental functionality of computers. The concept of bioluminescence is used for this purpose. Needless to say, all of these techniques involve splicing and dicing of genetic material, which is nothing but DNA.

Trends, Challenges & Future Prospects It is clear that scientists and various teams have been working to realize the huge potential of the DNA molecule. James J. Collins and his team have gone to the extent of enabling communication between the molecules and a computer. Knowingly or unknowingly, biology has been the inspiration for computers to a great extent.
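The blinking "genetic circuit" described above is commonly captured by a small system of differential equations, three genes each repressing the next in a ring (the repressilator model of Elowitz and Leibler). The sketch below integrates a standard dimensionless form of that model with forward Euler; the parameter values and initial conditions are illustrative, not those of the cited experiment:

```python
# Hedged sketch of a three-gene ring oscillator (repressilator-style model).
# Dimensionless form: dm_i/dt = -m_i + alpha/(1 + p_j^n) + alpha0,
#                     dp_i/dt = -beta*(p_i - m_i),  where protein j represses gene i.
alpha, alpha0, beta, n = 200.0, 0.2, 5.0, 2.0   # illustrative parameters
dt, steps = 0.01, 60_000

m = [5.0, 0.0, 0.0]      # mRNA levels of the three genes
p = [0.0, 1.0, 2.0]      # protein levels, slightly asymmetric start

for step in range(steps):
    new_m, new_p = m[:], p[:]
    for i in range(3):
        j = (i - 1) % 3  # the previous gene's protein represses gene i
        new_m[i] = m[i] + dt * (-m[i] + alpha / (1.0 + p[j] ** n) + alpha0)
        new_p[i] = p[i] + dt * (-beta * (p[i] - m[i]))
    m, p = new_m, new_p
    if step % 10_000 == 0:  # a few snapshots; the protein levels rise and fall in turn
        print(f"t = {step * dt:6.1f}   p = " + "  ".join(f"{x:7.2f}" for x in p))
```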
The similarities are too many to think otherwise. So it is time to harness the power of DNA, using computers as the inspiration. While we live in the age of computers, biological computing is slowly gaining prominence, but without much fanfare. True, biological computing has played a big role in modern medicine and will continue to do so, but seeing a computer solely powered by microorganisms/DNA is far away. We feel that we are not even close enough to say that the next few years will see the dawn of biological computing where the CPU is replaced by DNA. Some of the challenges that stare us in the face in eventually replacing silicon chips with DNA include: a) the ability to control the DNA; b) how to make the various altered DNAs communicate with each other; c) can the programmed DNA or microorganism go wrong? d) can it impact health? Maybe the above may not be issues at all, but they still need to be answered. For all those hard core computer professionals who are wedded to silicon chips, it is time to look at the future and prepare for the next big thing in computers. The future for biological computing is bright. Already some medical/industrial products like vaccines and insulin (for diabetes treatment) are benefiting from this research. Most of the designs/patterns coming out of various software companies have already been in existence in nature (DNA), and all we need to do to effectively use the DNA is to reverse engineer it, understand the inner workings, and make it fit our requirements. The advent and gaining popularity of nanotechnology offer more avenues to use DNA. Under laboratory conditions, DNA self-assembly has been demonstrated successfully; simple patterns (e.g., alternating bands, or the encoding of a binary string) that are visible through microscopy have been used successfully for simple computations such as counting, XOR, and addition (Wooley and Lin, 2005).

Conclusion Biological computing is a young field which attempts to extract computing power from the collective action of large numbers of biological molecules. In our opinion the CPU being replaced by biological molecules remains in the far future. However, if one can imagine such a scenario, then it is safe to think of a biological computer as a massively parallel machine where each processor consists of a single biological macromolecule. By employing extremely large numbers of such macromolecules in parallel, one can hope to solve computational problems more quickly than the fastest conventional supercomputers. To many pure software professionals this may be far-fetched. A good compromise could be a hybrid system: part of the system can be made of biological components and the other part of current or new hardware that may become available. This would give us the combined benefit of both systems. Companies and scientists that are involved in biological computing work need to take care of legal and moral regulations. Maybe it is time to overcome Moore's law, as the rate of doubling has slowed down (Mathews, 2006). We need to think of computing in a radically different way, and who knows, in the near future we may be tackling real viruses instead of electronic viruses.
2009-11-09T05:16:01.000Z
2009-11-09T00:00:00.000
{ "year": 2009, "sha1": "714029d50604c6f17e29798ea43b3e15036611d0", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "714029d50604c6f17e29798ea43b3e15036611d0", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Biology" ] }
21560470
pes2o/s2orc
v3-fos-license
Incidental detection of pancreatic hemangioma mimicking a metastatic tumor of renal cell carcinoma Adult pancreatic hemangioma is a rare disease. We present a case of a woman with a pancreatic tail mass mimicking a distant metastasis from the kidney. A 68-year-old woman was found with a left kidney mass on medical checkup. Computed tomography scan showed a 4.3 cm-sized mass in the left kidney, suggesting renal cell carcinoma (RCC), and a strongly enhancing tiny nodule in the pancreatic tail. We could not rule out the possibility of RCC metastasis; hence, surgical resection of the pancreatic mass simultaneously with radical nephrectomy for RCC was conducted. Gross pathologic examination revealed hemangioma. Immunohistochemistry revealed that the tumor was positive for CD34, CD31 and factor VIII-related antigen. There were no significant postoperative events, and the patient was discharged on postoperative day 7 without any complications. Treatment strategies for pancreatic hemangioma have not been established. To our knowledge, this was the first case report of asymptomatic pancreatic hemangioma. In the previous literature, treatment differed on a case-by-case basis, ranging from observation to surgical resection. The most important factor in deciding whether to perform surgery is possibly risk-benefit effectiveness; however, tumor location, patient symptoms, and other factors are also important.

INTRODUCTION Hemangiomas are tumors characterized by increased numbers of normal or abnormal vessels filled with blood. Occasionally, hemangiomas can occur internally, and nearly one-third of these internal lesions are found in the liver. Pancreatic hemangiomas are especially rare; pancreatic vascular neoplasms collectively account for only 0.1% of all pancreatic tumors [1]. These tumors are usually diagnosed fortuitously by laparotomies performed to diagnose a large, palpable abdominal mass [2-5]. We present a very rare case in which a woman without specific symptoms was found with a cavernous hemangioma in the pancreatic tail that mimicked a metastatic tumor.

CASE A 68-year-old woman was found with a mass in her left kidney on medical checkup. She had no significant past medical history except hypertension and no symptoms (e.g., hematuria, abdominal pain, or abdominal discomfort). An axial contrast-enhanced computed tomography (CT) scan showed a heterogeneous solid mass in the left kidney, suggesting the presence of renal cell carcinoma (RCC). There was a strongly enhancing tiny nodule in the tail of the pancreas that was most likely either a neuroendocrine tumor or an RCC metastasis (Fig. 1A). Because she had no specific symptoms or abnormal laboratory findings, the possibility of RCC metastasizing into the pancreas could not be ruled out. Therefore, surgical resection, including left radical nephrectomy and distal pancreatectomy, was conducted. There were no significant postoperative events, and the patient was discharged on postoperative day 7 without any complications.

DISCUSSION The hemangioma was found incidentally at a preoperative evaluation for RCC. Unlike previous studies, we found no symptoms suggesting pancreatic hemangioma, likely because it was a tiny pancreatic tail mass. Typically, hemangiomas are strongly contrast enhancing in the arterial phase of conventional contrast-enhanced CT imaging [8]; however, our case did not present these findings, likely because of the small lesion size. The pancreatic hemangioma thus mimicked metastatic cancer originating from the RCC.
To our knowledge, this case is the first report of pancreatic hemangioma without a symptomatic event, and the tumor is the smallest of the reported cases. After surgery, the pancreatic tail tumor was pathologically confirmed as a hemangioma. Microscopic findings revealed a typical feature of hemangioma, i.e., blood-filled spaces separated by fibrous connective tissue. For a definite diagnosis, immunohistochemistry is required to assess the presence of the factor VIII-related antigen, a marker for vascular endothelium that was reported by Chang and colleagues [9]. Subsequently, Mundinger and colleagues reported that neoplastic cells in hemangioma also express the endothelial markers CD31 and CD34 [5]. In our patient, immunohistochemical findings were positive for all 3 markers, whereas D2-40, a marker for lymphatic endothelium, and CD56, a marker for neural cells, were both negative, further indicating that the tumor mass was a hemangioma. Because of its rarity, there is no standard treatment for pancreatic hemangioma. Reviewing the previous literature on pancreatic hemangioma, we found that multiple different treatments were administered, from observation to surgical resection [2,10]. Therefore, if a patient has a pancreatic head hemangioma with minimal symptoms that can be controlled, close observation and regular follow-up can be one of the treatment options according to risk-benefit analysis. Because our case was confined to a tiny mass at the distal pancreas, and we could not rule out distant metastasis from the RCC tumor, we decided to perform a distal pancreatectomy. The literature review indicated that treatment decisions require assessment of the severity of symptoms and the location of the tumor. When all the cases were collectively considered, determining the timing of surgery based on a surgical risk-benefit analysis emerged as an important factor. Future reports will provide more data on the optimal treatment strategies for pancreatic hemangioma.
2017-08-15T11:35:31.749Z
2016-05-01T00:00:00.000
{ "year": 2016, "sha1": "4e647d4467ac4d27cef21703150920524ccce7e0", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.14701/kjhbps.2016.20.2.93", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4e647d4467ac4d27cef21703150920524ccce7e0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
266698688
pes2o/s2orc
v3-fos-license
Chronic Pelvic Puzzle: Navigating Deep Endometriosis with Renal Complications This case report delves into the intricacies of a challenging clinical scenario involving deep pelvic endometriosis, which manifested with renal complications. Endometriosis, a complex gynecological condition, is explored in this case, highlighting its multifaceted nature. The patient presented with a complex interplay of symptoms, including chronic pelvic pain, urinary tract issues, and severe deep adenomyosis. The diagnostic journey was protracted, emphasizing the need for early recognition and intervention in such cases. A thorough evaluation, including laparoscopic examination and histopathological analysis, revealed the extensive presence of endometriotic lesions in various pelvic and renal structures, ultimately leading to left hydronephrosis. The report underscores the significance of timely diagnosis and surgical intervention to prevent irreversible renal damage. This case provides valuable insights into the management of deep endometriosis with renal involvement and the importance of interdisciplinary collaboration. Understanding the complexities of this condition can aid in improving patient outcomes and enhancing the quality of care provided. Introduction Endometriosis is a clinically important and often underdiagnosed condition, characterized by the localization and growth of endometrial tissue outside the uterine cavity, accompanied by chronic inflammation [1][2][3].The most common sites for ectopic endometrial implants are the ovaries, ovarian fossa, uterosacral ligaments, and the posterior cul-de-sac [3].It can manifest as infiltrative, deep, or superficial lesions on the peritoneum and serosa [1].Additionally, endometriosis can potentially transform into a malignant condition, with a 0.7-1.6%chance of progressing to clear cell carcinoma or endometrioid carcinoma [4].In such cases, treatment becomes more complex and may involve surgical procedures, radiotherapy, immunotherapy, and algorithms for preventing complications like tumor lysis syndrome and acute kidney injury [5]. The prevalence of this condition increases up to 30% in patients with infertility and up to 45% in patients with chronic pelvic pain [1,6].It is a common pelvic pathology in females, affecting an estimated 15% of reproductive-age women [7][8][9].Endometriosis is a debilitating disease with various severe effects on social, occupational, and psychological functioning [10].It shares some similarities with malignant conditions, such as progressive and invasive growth, estrogen dependence, recurrence, and a tendency to metastasize [11,12].Genetically, genomic regions and anomalies in pro-cancer genes (PIK3CA, KRAS, and ARID1A) have been associated with endometriosis [13][14][15].The presence of pro-cancer mutations in non-malignant cells can partially explain the aggressive nature of deeply invasive lesions compared to superficial peritoneal lesions. Endometriosis is classified based on the severity of symptoms, affected areas, location, depth (infiltration more than 5 mm below the peritoneum), and growth rate into four stages: Stage I (minimal disease), Stage II (mild disease), Stage III (moderate disease), and Stage IV (severe disease) [16,17].However, this classification does not always predict clinical outcomes, including symptoms and pain [10]. 
Other sources of ectopic endometrial cells include the mesothelium, stem cells, Müllerian remnants, bone marrow stem cells, embryonic remnants, and lymphatic or vascular dissemination, as well as celomic metaplasia [18,19].The hypothesis of retrograde menstruation is questioned by the existence of endometriosis in girls before their first menstruation, implying the involvement of embryonic Müllerian remnants [20,21].These premenarchal lesions are considered preexisting forms of classical endometriosis and result from neonatal uterine bleeding, including retrograde bleeding, due to exposure to maternal hormones [22,23]. Diagnosing this disease remains challenging because many patients are asymptomatic, and the definitive diagnosis often involves surgical procedures.In recent years, progress has been made in identifying biomarkers that may correlate with a positive diagnosis of endometriosis, including the soluble fraction of urokinase-type plasminogen activator receptor, which is involved in various other pathologies [24,25]. Symptoms of endometriosis depend on the location of the endometriotic lesions.Chronic pelvic pain, often related to menstruation, dyspareunia, and dysuria are common manifestations, but they can also be seen in other conditions.In endometriosis, inflammatory changes occur due to the increased production of inflammatory mediators that trigger pain [26][27][28][29][30]. Extragenital lesions are more commonly diagnosed between the ages of 35 and 40 and are usually discovered later than genital lesions, approximately 5 years after their appearance [32].A small percentage of endometriosis cases (0.1-1%) occur in the urinary tract, often underestimated due to the late diagnosis of subclinical progression [7]. Bladder endometriosis is typically located on the posterior wall or dome, less frequently involving the base of the bladder, and is proximal to the ureteral orifice.Ureteral endometriosis can be intrinsic or extrinsic, with a 1:4 ratio, and the ureter is usually involved below the pelvic brim [33,34].Ureterohydronephrosis is a consequence of urinary tract obstruction and can affect a single ureter, more commonly the left one [35], or both ureters, especially in patients with extensive pelvic endometriosis.Ureteral endometriosis is typically discovered in women aged 30 to 35 years [36][37][38], with a lower frequency in postmenopausal women [39]. In the extrinsic form (75% of cases), ureteral endometriosis is localized in the adventitia or around the connective tissue of the ureter.In intrinsic endometriosis, endometriotic tissue infiltrates the muscular wall of the ureter.In about one-third of cases, urinary stasis caused by hydronephrosis promotes the development of urinary tract infections, especially upper urinary tract infections [40].Symptoms related to ureteral endometriosis are often nonspecific, and severe stenosis can lead to symptomatic hydronephrosis and ultimately a decline in renal function [41]. Rare cases have been reported of patients with chronic kidney disease based on obstructive nephropathy caused by bilateral ureteral obstruction, but the incidence of CKD due to endometriosis is unknown.The risk of silent renal loss in these patients is reported to be as high as 25-50% [42][43][44]. 
The aim of this paper is to explore a challenging clinical scenario characterized by extensive pelvic endometriosis, leading to renal complications.Emphasizing the critical role of prompt diagnosis and surgical intervention in avoiding irreversible renal impairment, it offers crucial insights into our course of action regarding deep endometriosis accompanied by renal implications.Moreover, the essentiality of interdisciplinary cooperation in handling such intricate clinical cases is highlighted. Case Presentation We report the case of a 48-year-old female patient with no remarkable personal pathological background, with menarche at 12 years old, with a surgical history that included two cesarean deliveries, who initially presented with symptoms suggestive of cystitis, including dysuria, polyuria, and urgency.A recent outpatient ultrasound detected Grade II hydronephrosis in the left kidney.Subsequent reevaluation by ultrasound did not reveal any obstructive causes for the hydronephrosis.Considering the presenting symptoms and left-sided hydronephrosis, renal calculi with migrating stones were suspected. A series of imaging investigations were conducted to establish a correct diagnosis and, consequently, determine the appropriate treatment.The next imaging investigation involved a contrast-enhanced abdominal CT scan, confirming Grade II left-sided hydronephrosis, with a possible cause being a low-level stenosis.It also described the uterus with a dense image adhered to the posterior wall, possibly indicating hematoma or pregnancy. Based on the CT results, an interdisciplinary nephro-urological report recommended gynecological consultation with subsequent urological assessment and left ureteral drainage with a JJ stent.Two gynecological consultations, which indicated no pathological findings during the clinical examination, were succeeded by transvaginal ultrasounds, highlighting significant changes in the uterine body and ovaries, with a recommendation for pelvic MRI. This imaging assessment did not reveal any apparent obstructive etiologies for urinary system disorders.However, it confirmed the existence of anatomical alterations within the reproductive system, including moderate changes indicative of homogeneous diffuse adenomyosis on the posterior uterine wall.Additionally, signal modifications were evident in the outer one-third of the myometrium along the posterior corporeal uterine slope, suggesting the presence of a focal adenomyosis area, concomitant with a small uterine leiomyoma.Following this, the patient sought a urological outpatient consultation, which addressed the suspicion of deep endometriosis and uretero-vesical junctional syndrome.It is noteworthy to highlight that abnormalities in the reproductive tract were the only risk factors associated with endometriosis identified in this patient's case. 
Approximately two months later, the patient underwent a subsequent pelvic MRI.The results displayed a retroflexed uterus, slightly laterally deviated to the left, with a maximum transverse diameter of 5.3 cm and a longitudinal diameter of 10 cm.The endometrium had a normal thickness of 8 mm with a homogeneous signal, while the myo-endometrial junction appeared diffusely thickened (up to 14 mm), with an irregular contour towards the rest of the myometrium and included T2 hyperintense spots.There were intramural leiomyomas, measuring 10/12 mm and 8/14 mm on the left lateral aspect of the uterine body and 9/10 mm on the posterior aspect of the uterine body.A small intracavitary leiomyoma was observed on the right lateral aspect of the uterine body, measuring 5/7 mm.A post-cesarean uterine scar with diverticular dilation was present at this level (8/14 mm).On the right side, an ovarian endometrioma, measuring 8/9 mm medially, and a newly developed right ovarian cyst, measuring 27/35 mm, with hemorrhagic content, were observed.The cyst did not exhibit characteristics of an endometrioma (hemorrhagic cyst-requiring ultrasound monitoring).There were no ovarian endometriomas on the left side, and no pathological tubal accumulations were found bilaterally.A large nodule of deep endometriosis was detected on the posterior uterine wall (Figure 1), in the body region, encompassing the attachment of the utero-sacral ligaments and retrocervical area.It had a maximum transverse diameter of approximately 50 mm, antero-posterior diameter of 30 mm, and a longitudinal diameter of 35 mm.The nodule showed infiltration into the myometrium and adhesion to the posterior vaginal fornix and upper rectal serosa (over a distance of approximately 10 mm, around 10 cm cranial to the external anal orifice). 
Nodular infiltration was observed along the course of the utero-sacral ligaments and bilateral parametrial regions, with more significant involvement on the left side and bilateral ovarian adhesions (Figure 2). A left parametrial nodule, measuring approximately 20/40 mm, encompassed and stenosed the left pelvic ureter over a length of approximately 15 mm, at around 25 mm cranial to the left ureteral orifice. A left pelvic ureteral dilation upstream of the stenotic area was observed, with a maximum diameter of 12 mm. No deep nodules were detected in the sigmoid colon, bladder, or the anterior abdominal wall. The right pelvic ureter remained nondilated, and there were minimal pelvic ascites with a maximum thickness of 24 mm in the Douglas space.

Following extensive multidisciplinary assessment and a series of imaging evaluations, the patient was admitted to our specialized endometriosis center for comprehensive disease management.

The interdisciplinary team opted for surgical intervention through a laparoscopic approach to conduct the biopsy of multiple endometriosis nodules, perform a left ureter dissection followed by reimplantation, and undertake a radical hysterectomy with bilateral adnexectomy.
The exploratory laparoscopy revealed a complex pelvic adhesive syndrome involving the small bowel loops, a significantly enlarged uterus (severe deep adenomyosis), both adnexa (right ovarian endometrioma of 20 mm), sigmoid colon, rectosigmoid junction, upper rectum, and both ureters (left ureter dilated), with extensive left parametrial infiltration. Viscerolysis was performed with difficulty, including dissection of the sigmoid colon, upper rectum, rectovaginal space, prevesical space, and both ureters, as well as bilateral ovarian suspension. Several excisions of endometriosis nodules were performed, including the ureteral nodule: extensive endometriotic lesions in the retro-uterine and retrocervical regions; an endometriotic nodule in the right parametrium and right uterosacral ligament, excised with ureteral dissection; a voluminous ureteral endometriotic nodule (extrinsic endometriosis); resection of the left ureter with vesicoureteral anastomosis; catheterization of the left ureter; and a voluminous endometriotic nodule in the left parametrium, in the vicinity of the sacral roots.

Histopathological Examination Results Examination of the "left ureteral nodule" specimen: the histopathological appearance indicates the presence of endometriotic lesions (Figure 3). Examination of the "endometriotic lesions" specimen, which includes two tissue fragments: the histopathological appearance indicates the presence of endometriotic lesions (Figure 4).

The patient's post-operative course was favorable. Three weeks later, she presented for the removal of the left JJ stent and subsequently had a good general condition, was afebrile, and had clear urine. The entire process from presentation to the nephrology department to the final diagnosis and surgical treatment took three months.
Laboratory tests at three months showed no changes, and abdominal ultrasound of the left kidney revealed a structurally normal kidney without hydronephrosis. The patient received recommendations for periodic medical follow-up [45].

Discussion

In the context of endometriosis with ureteral invasion, understanding the diverse anatomical locations of endometriotic lesions becomes crucial for comprehensive management. Notably, studies have highlighted the multifaceted nature of endometriosis, implicating varied sites of involvement. Habib et al. provided a comprehensive perspective on bowel endometriosis, underscoring diagnostic and therapeutic aspects [46]. Gustofson et al. conducted a case series and comprehensive review, emphasizing the association between endometriosis and the appendix [47]. Furthermore, insights from Jenkins et al. and Audebert et al. underscored the significance of the anatomic distribution of endometriosis lesions, shedding light on its pathogenetic implications [48,49]. Additionally, rare occurrences such as Villar's nodule and umbilical endometriosis, as documented by Victory et al., Jaime et al., and Lee et al., further exemplify the diverse and atypical locations in which endometriotic lesions can manifest [50][51][52]. Understanding these varied sites of endometriosis manifestation is pivotal in diagnosing and managing cases involving unusual presentations, such as ureteral invasion, providing a comprehensive approach to treatment strategies.
Deep infiltrating endometriosis is defined as lesions penetrating surrounding tissue by 5 mm or more and is likely of multifactorial origin [53].Endometriosis is a common disorder affecting women of all ages, that can lead to depression and anxiety disorders, with a decrease in workability, restrictions in social activities, and a diminished quality of life.It is estimated that 0.3 to 12% of women diagnosed with endometriosis also have urinary tract involvement.This number increases to 14 to 20% in patients with deep infiltrating endometriosis.Endometriosis can perturb the urogenital tract, in particular the ureter, which can potentially result in ureteral compression or stenosis.Even though this is rare, the consequences are dramatic, such as hydronephrosis or organ failure [45,[54][55][56].Jadoul et al. reported that the risk of loss of renal function in cases of ureteral endometriosis is 11.5% [57]. The pathogenesis of deeply infiltrative endometriosis is still a subject of study.One hypothesis is that pelvic endometriosis may be a direct extension of endometrial cells outside the uterine wall, possibly facilitated by previous pelvic surgeries [58].This hypothesis is supported by the fact that the diagnosis of ureteral endometriosis is often preceded by hysterectomy and bilateral salpingo-oophorectomy, possibly due to prior symptoms related to adenomyosis or pelvic endometriosis.Ureteral endometriosis typically involves the lower third of the left ureter because deep infiltration of the pelvis by endometriosis is often asymmetrical and primarily affects the left pelvis, including neighboring structures such as the bladder [59]. Endometriosis of the urinary tract is an uncommon clinical finding, and it may have a slow and insidious progression, sometimes diagnosed after irreversible renal structural changes have occurred, such as unilateral or bilateral renal atrophy in an undefined number of patients.Many patients initially present to nephrologists with advanced kidney disease due to chronic urinary tract obstructions.Nephrologists need to closely monitor renal function and renal ultrasound in patients with deteriorating renal function or renal structural changes to achieve an early diagnosis of endometriosis when necessary.In patients with signs of urinary tract dilation or echogenicity abnormalities on ultrasound that sometimes associate signs of renal dysfunction, further investigations are recommended, including MRI and ureteroscopy, as well as interdisciplinary consultations [59]. The indications of urinary tract endometriosis can be elusive.In our case, the patient exhibited no apparent symptoms except for dysuria, polyuria, and urgency, resembling cystitis.Interestingly, she did not report the typical symptoms associated with endometriosis, such as dysmenorrhea or dyspareunia.Highlighting the significance for healthcare professionals managing individuals with deep infiltrating endometriosis, it is crucial to consider the potential of asymptomatic ureteral involvement.Employing imaging tests for early diagnosis and addressing any obstruction through surgical intervention is imperative to prevent the loss of renal function.Ureteral endometriosis should be considered among the potential differential diagnoses when encountering unexplained hydronephrosis in women of childbearing age.Individuals experiencing dysmenorrhea should be particularly aware of this condition [35]. 
The presence of chronic inflammation in endometriosis can trigger signaling pathways that lead to necrosis and a compromised immune response, heightening the susceptibility to infections, particularly in the genitourinary system.Chronic endometritis and surgical site infections become more prevalent risks.Infections involving Gardnerella, Streptococcus, Enterococci, Escherichia coli, mollicutes, and shigella have been identified as associated factors [60,61]. There is emerging evidence suggesting a shared pathway between autoimmunity and endometriosis, presenting a promising avenue for research aimed at developing immunostimulatory drugs that could demonstrate effectiveness [62].Ongoing research endeavors focus on identifying non-invasive imaging markers for early diagnosis, aiding in the classification of cases that necessitate surgical intervention and those that could benefit from non-surgical management.This holistic approach involves a combination of pathophysiological diagnosis, imaging techniques, and laparoscopic surgeries [63]. The role of cystoscopy in diagnosis remains a subject of controversy.Ros et al. conducted a study where flexible cystoscopy was routinely performed in their institution for women exhibiting suspected bladder endometriosis affecting the muscular layer on ultrasound.They concluded that cystoscopy might not be necessary for nodules partly within the muscular layer detected via transvaginal ultrasound [64]. Despite being invasive, cystoscopy is cost-effective and aids in estimating the distances between ureteral orifices and nodule boundaries, enabling a potential biopsy.However, the intraperitoneal origin of nodules results in typical outpatient cystoscopy showing normal findings in about half of the cases, with the classic appearance of adenomatous and nodular red or bluish masses being visible only occasionally and ulcerations being rarer [64,65]. To enhance accuracy, a proposed approach involves performing cystoscopy under sedation concurrent with a pelvic examination, including bimanual palpation of the bladder.In a study of 157 participants using dynamic cystoscopy, researchers noted a high specificity (97.78%) but relatively low sensitivity (58.21%), with a significant positive predictive value (95.12%) and negative predictive value (75.86%). Recent reviews, such as the one by Lima Diniz et al., suggest that while ultrasonography and cystoscopy serve as initial diagnostic tools, magnetic resonance imaging emerges as the most reliable method for diagnosis [66]. Generally, MRI is very useful for guiding laparoscopy, and fat-saturation MRI allows for the adequate evaluation of the location, size, and subperitoneal lesion extension of deep pelvic endometriosis, providing key information for both the diagnosis and treatment planning [67][68][69].Our patient had endometriotic infiltration in the utero-sacral ligaments and bilateral parametrial regions, with more significant involvement on the left side, bilateral ovarian adhesions, and a left parametrial endometriosis nodule, which encompassed and stenosed the left pelvic ureter.From a prognostic standpoint, positive outcomes, especially in terms of renal function, can be achieved when both diagnosis and surgical intervention occur early, accompanied by long-term follow-up [54]. 
Given the infrequency of ureteral endometriosis, determining the optimal management strategies poses a challenge.For mild symptoms, conservative approaches involving nonsteroidal anti-inflammatory medications might offer sufficient relief.In more pronounced cases, medical management utilizing GnRH analogues or oral contraceptives could be considered, although hormone therapy is typically recommended for early-stage disease [33,70,71]. The prognosis of ureteral endometriosis depends on the timing of diagnosis [34].To determine the appropriate therapy for endometriosis, the patient's age, desire for reproduction, symptom severity, overall distribution of endometriosis, and the size of urinary tract lesions must be taken into consideration [68].Therapy for endometriosis includes bilateral ovariectomy, as well as surgical lesion resection preceded or combined with medical castration using drugs such as danazol or gonadotropin-releasing hormone agonists [72]. The successful laparoscopic treatment of ureteral endometriosis has been described thoroughly in the literature [6,12].Camanni and colleagues recently reviewed the surgical approach for different stages of ureteral involvement, suggesting that mild-to-moderate infiltration should be managed with ureterolysis, while cases of infiltrative endometriosis and intrinsic ureteral disease will benefit from the section of the involved segment and ureteral reimplantation-ureteroneocystostomy [73].The main issue is that unilateral or bilateral ureteral endometriosis can be asymptomatic, and many cases are discovered incidentally during abdominal ultrasound or laparoscopy for extensive endometriosis [73]. In contemporary practice, the established approach to treating endometriosis involves the comprehensive surgical extraction of endometrial tissue, followed by conservative hormone therapy.In terms of surgical procedures, there is a preference for employing minimally invasive techniques, and laparoscopic surgery is efficacious for the treatment of deep infiltrating endometriosis [74][75][76][77][78]. In 25-43% of instances, ureteral endometriosis may lead to ureteral obstructions, potentially resulting in a loss of kidney function.Up to 47% of patients may necessitate a nephrectomy, either due to the loss of kidney function or the presence of ureteral endometriosis lesions [7].Also, in accordance with the updated guidelines outlined by Barocas et al. (2020) [79] in the Journal of Urology, the American Urological Association (AUA) recommends office cystoscopy for individuals with a history of gross hematuria.However, for cases without gross hematuria, further testing is advised based on risk stratification.These guidelines emphasize the importance of tailored assessments for microscopic hematuria, highlighting the significance of precise diagnostic protocols in urological evaluations. The aim of the surgical treatment is an optimal surgical and therapeutic outcome, which results in an increase of quality of life with an efficient pain treatment policy of the disease [80]. 
Conclusions

Deep pelvic endometriosis poses a rare and challenging diagnostic scenario due to the nonspecific nature of reported symptoms. Symptoms can range from the classic manifestations of endometriosis (most common) to less common urinary tract symptoms. In the index case, the presentation of urinary tract symptoms added complexity to the diagnosis, making it more elusive. The nonspecific symptoms associated with ureteral endometriosis can lead to misdiagnosis, potentially causing renal damage through prolonged hydronephrosis.

Maintaining a high index of suspicion and utilizing advanced imaging modalities are crucial for achieving an earlier and more accurate diagnosis, ultimately facilitating the preservation of renal function. Radiological techniques, particularly MRI, stand as the gold standard diagnostic tools for achieving a preoperative diagnosis.

In severe and recurrent cases of endometriosis where the ureter is affected and renal function is impaired, surgical intervention becomes necessary. However, if conventional treatments do not yield lasting success, there is a risk of renal function loss, and the levels of pain and discomfort can be substantial. In such challenging cases, laparoscopic ureteral reimplantation emerges as a promising therapeutic approach.

Emphasizing a heightened awareness for early diagnosis and management is crucial, as both elements play a pivotal role in achieving a better prognosis.

Figure 1. Coronal T2 section. The endometriosis nodule (red arrow), isointense on T2, with an unclear demarcation, located in the left pelvic area, closely related to the ureter. Simple left ovarian cyst (blue arrow).

Figure 2. Coronal T2 fat-saturated section. Ovarian endometrioma located on the medial side of the right adnexa, displaying a characteristic "shading sign" (a radiological hallmark of endometriosis due to high protein and iron content secondary to hemorrhage).
2024-01-02T16:04:50.307Z
2023-12-30T00:00:00.000
{ "year": 2023, "sha1": "4f866c9ccdb4c72699d52362bfbd9623758138e3", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-0383/13/1/220/pdf?version=1703916669", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "da353b35591743d2c844a31ca62a54f62f8035ed", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
216513594
pes2o/s2orc
v3-fos-license
Cloud Collaborative Reflective Strategy and Its Effect Toward English Pronunciation of Pre-Service Teachers in Their Teaching Practice Program

—The study aims at finding out the effect of the Cloud Collaborative Reflective Strategy (CCRS) on English pronunciation. It was an experimental study with a pretest-posttest control group design, and its forty samples were randomly taken from the population of seventy-nine pre-service teachers in their teaching practice. The data from the pretests and posttests were taken by recording class opening sessions of the pre-service teachers in the experimental and control groups, and they were then rated using the percentage of correctly pronounced words over the total words pronounced. The t-test was used to analyze the effect of CCRS on pronunciation improvement. To strengthen the data, a questionnaire measuring their perception toward the strategy was distributed. The results from the paired sample t-test in the experimental group show that there was a significant difference between the pretest and posttest scores. From an independent t-test, it was found that there was a significant difference between the posttest scores of the experimental and control groups. Students' perceptions toward this strategy, as seen from their responses to the questionnaire, support the findings. They perceived the strategy as helpful toward their pronunciation improvement. This result suggests that CCRS affects the improvement of pre-service teachers' pronunciation.

I. INTRODUCTION

Pronunciation is perhaps the most neglected aspect of language as compared to others, such as structure and vocabulary. An indicator of proficient language speakers is pronunciation. Zimmerman [1] expresses that pronunciation is important because it is the first thing to notice about one's competence in the language. Moreover, on the importance of the pronunciation factor in communication, [2][3] state that the main difficulty foreign language learners experienced in communicating in English was pronunciation. In general, they assume that pronunciation is potentially a major problem in communication. In a more empirical finding, Kurniawan [4] in his study concluded that the pre-service teachers in the program made mistakes in pronouncing the two phonemes that he investigated, the voiceless dental fricative (/θ/) and the voiced dental fricative (/ð/).

It is important for pre-service teachers to have improved English pronunciation since it is one of the inputs of language learning [5]. Furthermore, [6] summarizes that in the view of any language learning theory (behavioral, nativist, and interactionist), inputs play an important role in the process of language learning. Furthermore, on inputs in language learning in the classroom, [7] argues that the success of language learning may depend on the language used in the communication classroom. In line with this opinion, [8] concludes that English as a foreign language learners can achieve near-native speaker language competences if there is a communication process in the classroom in English.

This study tries to offer a strategy called the Cloud Collaborative Reflective Strategy, or CCRS in short. This strategy is expected to help students improve their English pronunciation by reflecting on speeches recorded in the teaching practice phase. By sharing the recording with competent and more competent persons, collaborative input on the recording can be obtained.
Thus, this study investigates the effects of this strategy on one's English pronunciation and pre-service teacher perceptions about the application of the strategy. There has been a large number of studies, as elaborated in the next section, trying to find ways of improving the quality of teachers' pronunciation, but only a few deal with pronunciation by combining reflection and using cloud computing technology. This study tries to fill this gap, and by doing this, it can show how cloud computing can be beneficial to all language aspects and skills. A. Teacher Talk The language teachers use in the classroom; in this case, English is referred to as teacher talk [7]. Teacher Talk is a very important input in Language learning as part of language exposure [9][10][11][12][13]. For the Indonesian context in general, and South Sumatra in particular, teacher talk, is the main input in the English language learning process where communication in English outside of it is the most unlikely to occur. It can be concluded that in teaching and learning activities, the English language teacher use in the classroom is part of the input. Good input will allow students to learn better and succeed in the learning process. In other words, and a more specific aspect, a good English pronunciation from an English teacher can help students to learn English better. Concerning the finding from Kurniawan [4], it is sensible to say that the improvement of pronunciation is required for the pre-service teachers in the program. The preservice teachers have undergone the Teaching English as a Foreign Language Program for at least 3.5 years with some students who have been there for 4.5 years. They had received, before their teaching practice program, training related to pronunciation at least in 5 courses amounting to 12 credit hours. Teaching practice is the program from the university to send their pre-service teacher for actual teaching, guided and independent, experience at schools. Different treatments were needed to improve their pronunciation, as have been cited at the beginning that pre-service teacher made a mistake in pronunciation although they have undergone an intensive training before their teaching practice program, one of which is the reflective learning. It was also that they had taken all their courses, so it was very hard to have face-to-face treatments with them. B. Reflection in Language Learning Reflection is defined as an active, meticulous, and persistent thought on the belief or knowledge that serving to transform uncertainty, doubt, conflict, and distraction into clarity, certitude, certainty, and harmony (Dewey, 1933) [14]. Furthermore, on reflection and learning process, Kolb (1984) [15], who developed the experiential learning model, states that a reflection is a machine that makes learning go forward. Without reflection, learning will run in place with no new understanding. In line with Kolb (1984), Gibbs (1988) [16] stated that experience is not enough in the learning process. Without reflection, the experience will be forgotten, or the learning potential of the experience will be lost. Through reflection, thoughts and feelings will emerge that can encourage the development of concepts or generalizations. There are at least three models in reflective learning, the reflective cycle of Gibbs (1988) [16], Reflections on activity/reflection after the activities of Schon (1983) [17], and experiential learning [15]. 
All these models generally are the description process of what happens, the process of association with existing understanding, and the process of planning of what will be done. The reflective learning process can occur individually or collaboratively. Hoyrup (2004) [18] argues that in individual reflective learning, the reflection process focuses solely on the individual. In contrast, in collaborative, reflective learning, the reflection process requires communication and coordination among participants of the learning process. In the process of improving English pronunciation, the collaborative, reflective learning process will be more helpful. Stages of reflective learning will be more meaningful if, in the process, there are inputs from colleagues, lecturers, and linguists/native speakers. In short, the collaborative process of reflection will give more advantages as compared to the non-collaborative one. In the case of English pronunciation, without the event of pronunciation being recorded and reflected, of course, the experience will be quickly forgotten, so it is impossible to evaluate pronunciation and make improvements. Reflective learning or reflection proves to help to improve the language competences of language learners (Arikaan, 2006 [19] and Chau, 2010 [20]). Furthermore, Mathew (2012) [21] concludes that reflected teaching and learning will be beneficial for both teachers and students of ELT if conducted properly. Researches have been done concerning reflective teaching in ELT in all language skills, and their findings generally supported the use of reflection [22][23][24][25][26][27]. Furthermore, some studies were conducted on reflective teachings, such as Suwartono (2014) [28], who concentrated on increasing suprasegmental phonemes using reflective learning. Vitanova, and Miller (2002) [29], who examined their students" reflection of their improvement in pronunciation.Abbasian and Bahmanie (2013) [30] who saw the relationship between EFL teachers" and learners" reflection on pronunciation factor in the teachinglearning process and learners" motivation. None of the studies handled classes without face to face meetings. C. Cloud Computing Technology Two points discussed in the preceding paragraphs are that the pre-service teachers no longer have class-room courses, so it was impossible to give them treatment in-classroom courses. And collaborative learning, which is better than individual reflection for language learning, might be the answer for it, under the condition that it is the non-classroom treatment. Hence, besides it is an attempt to find a way of improving language skills for the pre-service teachers, this study also is a trial to fill the gap in studies of reflective learning in ELT, which is non-classroom collaborative learning in improving pronunciation. Information and communication technology has a large role in supporting the non-class collaboration process. English Language Learning has long been familiar with communication and information technology. Several studies have demonstrated the benefits of ICT in learning English. Such as to encourage students to collaborate and participate actively [31]. Develop the boundaries of traditional education and create a more independent learning community [32]. Allow students to link and connect ideas and show them on the internet that will ultimately help improve student performance [33]. All the results of this study indicate that the utilization of ICT can enhance ELT. 
Cloud computing is an internet service that provides joint processing of resources and data on computers and other devices on demand. In other words, with cloud computing technology, computer data can easily be shared with others. The process of collaborative, reflective learning can go smoothly without the constraints of space and time using this technology (Change & Wills, 2013) [34]. In short, it offers convenience in collaborative work and helps a lot in the process of online collaboration. In other words, non-classroom collaborative, reflective learning was made possible by this technology.

II. METHOD

As discussed in the preceding section, this study investigated the effect of CCRS on pronunciation. CCRS stands for cloud collaborative reflective strategy. This strategy can be described in figure 1. It is an adaptive form of the experiential learning cycle [15]. CCRS is a cycle rather than a linear process, in that the process can go on non-stop until the expected result is achieved. CCRS is constituted of three stages: (1) recording, (2) reflection, and (3) cloud collaboration. In the recording stage, the subjects audio-record part of their language teaching process. By doing so, they try to ensure a good quality of the recording. In the following stage, the corresponding subject listens to the recording multiple times and writes a reflection on it. The reflection includes his/her general impression of the speech and what he/she identified as mispronounced sounds or words. In the third stage, the subject uploads the recording and the reflection to a shared cloud computing account common to the subject and the cloud collaborator. The cloud collaborators, in this stage, give collaborative input on two points, which are the general impression of the speech and the identification of the mispronounced words or sounds. They also provide audio-recorded correct pronunciations of the identified mispronunciations. All of these are also uploaded to the shared cloud account. Having read and listened to the input, the student gives a general reply to the input as an acknowledgment that he/she has read and listened to it. The process then resumes at step 1.

This study was a pre-test post-test control group experimental design that was conducted with a population of seventy-nine pre-service teachers who were in their teaching practice program at a university in 2017. The population had completed their classroom courses. By using random sampling, twenty samples were assigned to the experimental group and another twenty to the control group. The experimental group applied CCRS as the treatment, and pre-tests and post-tests were administered to both groups.

Fig. 1. Experiential learning cycles [15].

Before the treatment started, a pre-test was administered to both groups. They were asked to record the first three to five minutes of their speech in their class opening session. The recordings were rated by two raters who were native speakers of English living in South Sumatera. They rated them on the basis of the percentage of correctly pronounced words over all pronounced words. After the treatment, the post-test was administered in the same way. The results of the pre-test and post-test in both groups were analyzed using paired-sample and independent-sample t-tests. The raters also acted as the cloud collaborators.
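To make the scoring and analysis described in this section concrete, the sketch below shows how the pronunciation score (percentage of correctly pronounced words) and the paired and independent sample t-tests could be computed in Python. It is an illustrative reconstruction under stated assumptions, not the authors' original code, and the score arrays are hypothetical placeholders rather than the study's data.

```python
# Illustrative sketch of the scoring and t-test analysis (hypothetical data).
import numpy as np
from scipy import stats

def pronunciation_score(correct_words: int, total_words: int) -> float:
    """Percentage of correctly pronounced words in a recorded opening session."""
    return 100.0 * correct_words / total_words

print("example score:", pronunciation_score(232, 250))

# Hypothetical pretest/posttest scores for 20 pre-service teachers per group.
rng = np.random.default_rng(0)
exp_pre = rng.normal(73, 4, 20)    # experimental group, pretest
exp_post = rng.normal(84, 3, 20)   # experimental group, posttest
ctr_pre = rng.normal(79, 3, 20)    # control group, pretest
ctr_post = rng.normal(81, 3, 20)   # control group, posttest

# Paired sample t-test within each group (pretest vs posttest).
t_exp, p_exp = stats.ttest_rel(exp_pre, exp_post)
t_ctr, p_ctr = stats.ttest_rel(ctr_pre, ctr_post)

# Independent sample t-test between the two posttests.
t_ind, p_ind = stats.ttest_ind(exp_post, ctr_post)

print(f"experimental paired t={t_exp:.3f}, p={p_exp:.3f}")
print(f"control paired t={t_ctr:.3f}, p={p_ctr:.3f}")
print(f"independent posttest t={t_ind:.3f}, p={p_ind:.3f}")
```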
Google Drive, as one of the cloud-computing platforms, was chosen because it was free and because of its widespread use throughout the world. Despite its popularity, short training sessions on the use of Google Drive were conducted separately for the experimental group and the cloud collaborators to ensure a smooth process of reflective collaboration.

Researchers distributed questionnaires to investigate pre-service teacher perceptions about the application of CCRS. There were four aspects in it: (1) CCRS in general (items 1 and 2), (2) Google Drive (cloud) (item 3), (3) cloud collaborators (items 4 and 5), and (4) reflection (items 6 and 7). The items can be seen in table 1. Items in aspects 2, 3, and 4 cover the parts of CCRS, that is, the cloud, the cloud collaborators, and the reflection. The responses were on a five-point Likert-like scale. The higher the scale value, the less it conforms to the item: 1 (the highest conformity) was scored 5, and 5 (the lowest conformity) was scored 1. The scores for every item were averaged, and the higher the mean, the more the responses conformed to the item. In addition to the closed-ended items, two open-ended items were provided. They asked about the obstacles the participants faced in CCRS and the suggestions they proposed for it. The resulting data were analyzed quantitatively.

Table 1. Questionnaire items:
1 CCRS helps me identify the mispronounced words in my speech
2 It helps me improve my pronunciation
3 Google Drive helps a lot in the process of collaborative reflection
4 The cloud collaborators help me identify the mispronounced words in my speech
5 They help me improve my pronunciation
6 The reflection I did helps me identify the mispronounced words in my speech
7 It helps me improve my pronunciation

Before use, the questionnaire was tried out for validity and internal consistency. Its validity was checked using bivariate correlation, which shows that all items are valid. The internal consistency was measured via a split-half reliability index, coefficient alpha (Cronbach, 1951). The index number was 0.983.

III. FINDINGS AND DISCUSSION

There are three points discussed in this part: (1) descriptive statistics, (2) t-test results, and (3) pre-service teachers' perception of CCRS. A discussion of the results will also be provided.

A. Descriptive Statistics

The results show a tendency for the test scores to be distributed around the center of the distribution, in the middle range between 51 and 85. None of the results were below 50, but few exceeded 85. In both groups, the mean of the posttest was higher than that of the pretest, and the increase was larger in the experimental group than in the control group.

B. T-test

The results from the pretest and posttest in both groups were tested for their normal distribution using the Shapiro-Wilk test, which is appropriate for this number of samples. The p-values of the pretest and posttest of the experimental group and the pretest and posttest of the control group, with df = 20, were .339, .321, .557, and .104, respectively (the chosen α was .05). The results show that all the data in all the tests were normally distributed. The QQ plots of every test are displayed below.

After conducting a paired sample t-test for the experimental group, it was found that there was a significant difference in scores between the pretest (mean = 73.05, SD = 4.26) and posttest (mean = 83.90, SD = 2.67) conditions; t(19) = -19.264, p = 0.001. This suggests that CCRS does have an effect on pronunciation improvement.
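The following is a minimal sketch of how the questionnaire scoring and the reliability and normality checks reported above could be reproduced: the reverse scoring follows the 1-to-5 mapping just described, Cronbach's alpha uses the standard item-variance formula, and the Shapiro-Wilk test is applied to a hypothetical score vector. The response matrix and all numbers are invented for illustration; this is not the authors' code or data.

```python
# Illustrative sketch: reverse scoring, Cronbach's alpha, Shapiro-Wilk (hypothetical data).
import numpy as np
from scipy import stats

def reverse_score(responses: np.ndarray) -> np.ndarray:
    """Map raw Likert responses (1 = highest conformity ... 5 = lowest) to scores 5..1."""
    return 6 - responses

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
raw = rng.integers(1, 4, size=(20, 7))   # hypothetical responses (only 1-3, as 4 and 5 were never chosen)
scores = reverse_score(raw)              # higher score = stronger conformity with the item
print("item means:", scores.mean(axis=0))
print("Cronbach's alpha:", round(cronbach_alpha(scores), 3))

# Shapiro-Wilk normality check on a hypothetical set of posttest scores.
posttest = rng.normal(84, 3, 20)
w_stat, p_value = stats.shapiro(posttest)
print(f"Shapiro-Wilk W={w_stat:.3f}, p={p_value:.3f}")
```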
Specifically, the result suggests that when CCRS was applied to pre-service teachers, their pronunciation improved. For the control group, it was also found that there was a significant difference in scores between the pretest (mean = 79.05, SD = 2.52) and posttest (mean = 81.35, SD = 2.56) conditions; t(19) = -4.435, p = 0.001. This result more strongly suggests that CCRS has an effect on pronunciation, since the increase of the mean in the experimental group (10.85) was far higher than that of the control group (2.3). Independent sample t-tests were performed to see if there were significant differences between the posttests in the experimental and control groups. From the test, it was found that there was a significant difference between the posttest scores in the experimental group (M = 83.90, SD = 2.67) and the posttest scores in the control group (M = 81.35, SD = 2.56); t(38) = 3.081, p = 0.004. This result suggests that CCRS affects the improvement of pre-service teachers' pronunciation.

C. Questionnaire

The data from the questionnaire on the application of the Cloud Collaborative Reflective Strategy (CCRS) are displayed in table 3. The responses for the first aspect, with a mean score of 4.60, showed that the pre-service teachers view CCRS as very helpful in identifying their mispronunciation and that it helps them improve it, as seen in the table. The results of the other aspects, Google Drive, cloud collaborators, and reflection, supported these results, as they also had high means, above 4.50. The response chosen most often was 1, which has the highest conformity with the item, followed by 2, which is also high in conformity; the least chosen was 3, which has fair conformity. None of the responses 4 and 5 were chosen, and these responses have negative conformity toward the item.

There were two open-ended questions in the questionnaire, (1) obstacles and (2) suggestions in CCRS. None of the samples submitted suggestions for the strategy, but several obstacles were revealed: (1) time availability, (2) technical problems in making a recording, and (3) their perception of the use of the English language in the classroom. The strict schedules of their teaching practice program had made some of the samples too busy to sit back, listen to their recording, and write a reflection. It created a delay in the process of the strategy in general. Some students failed to prepare themselves to be ready to create a good-quality recording. The perception that the students they were teaching were confused about what they were saying, and the fear that if they spoke in English the students would not understand the lesson, had also been barriers to the recording process.

The results show that CCRS has an improvement effect on pronunciation, one of the aspects of language skill, in pre-service teachers. This finding is in line with other studies dealing with reflective teaching in ELT (Arikaan, 2006). Cloud computing was very helpful in the process of collaboration, and this is in line with what Change and Wills (2013) state, that collaboration can go smoothly without the limitation of space and time. Collaborators were also seen as very helpful in the process of identifying mispronunciation and improving it. This is in line with the theory of collaborative reflection, which holds that reflection is helpful when there are communication and coordination among the participants of the learning process (Hoyrup, 2004).
Reflection itself was also viewed as very helpful in identifying mispronunciation and improving it. It is in line with Arikaan (2006) and Chan (2010) that concludes that reflective learning or reflection proves to help improve the language competences of the language learner. CCRS, in general, as stated previously, was helpful in identifying mispronunciation and improving it. This statement was supported by the fact that the other items" mean in the questionnaire were all high. The other aspects were parts of CCRS that is a cloud, reflection, and cloud collaborators. This showed that there was the conformity of the general item and its parts. The results from the questionnaire suggest that students perceive CCRS as helpful to help them identify their mispronunciation and improve it. Specifically, the pre-service teachers perceive the strategy as helpful in improving their pronunciation. It supports the empirical finding from the t-test. This conformity puts forward for consideration that CCSR does work to improve English pronunciation. IV. CONCLUSION AND SUGGESTION CCSR, as seen from tests" results and students" perception, can be seen as having the possibility of helping students in improving their pronunciation. This strategy can be one option for the teacher when dealing with pronunciation since it is one of the most neglected aspects of the language. It is suggested that other studies on CCSR are conducted with larger samples with more variables taken into account and controlled, such as student's involvement and their attitude toward the class.
2020-04-02T09:14:09.153Z
2020-03-25T00:00:00.000
{ "year": 2020, "sha1": "89514475e913232628b42c8e7f915d2eb876e1d9", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.2991/assehr.k.200323.107", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "db9c9dee9be5276d70e29a0895e8a88d87842d07", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Computer Science" ] }
8595164
pes2o/s2orc
v3-fos-license
VigNet: Grounding Language in Graphics using Frame Semantics This paper introduces Vignette Semantics , a lexical semantic theory based on Frame Semantics that represents conceptual and graphical relations. We also describe a lexical resource that implements this theory, VigNet, and its application in text-to-scene generation. Introduction Our goal is to build a comprehensive text-tographics system. When considering sentences such as John is washing an apple and John is washing the floor, we discover that rather different graphical knowledge is needed to generate static scenes representing the meaning of these two sentences (see Figure 1): the human actor is assuming different poses, he is interacting differently with the thing being washed, and the water, present in both scenes, is supplied differently. If we consider the types of knowledge needed for scene generation, we find that we cannot simply associate a single set of knowledge with the English verb wash. The question arises: how can we organize this knowledge and associate it with lexical items, so that the resulting lexical knowledge base both is usable in a widecoverage text-to-graphics system, and can be populated with the required knowledge using limited resources? In this paper, we present a new knowledge base that we use for text-to-graphics generation. We distinguish three types of knowledge needed for our task. The first is conceptual knowledge, which is knowledge about concepts, often evoked by words. For example, if I am told John bought an apple, then I know that that event necessarily also involved the seller and money. Second, we need world knowl- edge. For example, apples grow on trees in certain geographic locations at certain times of the year. Third, we need grounding knowledge, which tells us how concepts are related to sensory experiences. In our application, we model grounding knowledge with a database of 3-dimensional graphical models. We will refer to this type of grounding knowledge as graphical knowledge. An example of grounding knowledge is knowing that several specific graphical models represent apple trees. Conceptual knowledge is already the object of extensive work in frame semantics; FrameNet (Ruppenhofer et al., 2010) is an extensive (but not complete) relational semantic encoding of lexical meaning in a frame-semantic conceptual framework. We use this prior work, both the theory and the resource, in our work. The encoding of world knowledge has been the topic of much work in Artificial Intelligence. Our specific contribution in this paper is the integration of the representation for world knowledge and graphical knowledge into a frame-semantic approach. In order to integrate these knowledge types, we extend FrameNet in three manners. 1. Frames describe complex relations between their frame elements, but these relations, i.e. the internal structure of a frame, is not explicitly formulated in frame semantics. FrameNet frames do not have any intensional meaning besides the informal English definition of the frames (and what is expressed by so-called "frame-to-frame relations"). From the point of view of graphics generation, internal structure is necessary. While for many applications a semantic representation can remain vague, a scene must contain concrete objects and spatial relations between them. 2. Some frames are not semantically specific enough. 
For example, there is a frame SELF MOTION, which includes both walk and swim; these verbs clearly need different graphical realizations, but they are also different from a general semantic point of view. While this situation could be remedied by extending the inventory of frames by adding WALK and SWIM frames, which would inherit from SELF MOTION, the situation is more complex. Consider wash an apple and wash the floor, discussed above. While the core meaning of wash is the same in both phrases, the graphical realization is again very different. However, we cannot simply create two new frames, since at some level (though not the graphical level) the meaning is indeed compositional. We thus need a new mechanism. 3. FrameNet is a lexical resource that illustrates how language can be used to refer to frames, which are abstract definitions of concepts, and their frame elements. It is not intended to be a formalism for deep semantic interpretation. The FrameNet annotations show the frame elements of frames (e.g. the goal frame element of the SELF MOTION frame) being filled with text passages (e.g. into the garden) rather than with concrete semantic objects (e.g. an 'instance' of a LOCALE BY USE frame evoked by garden). Because such objects are needed in order to fully represent the meaning of a sentence and to assert world knowledge, we introduce semantic nodes which are discourse referents of lexical items (whereas frames describe their meanings). In this paper, we present VigNet, a resource which extends FrameNet to incorporate world and graphical knowledge. We achieve this goal by addressing the three issues above. We first extend frames by adding more information to them (specifically, about decomposition relevant to graphical grounding and more precise selectional restrictions). We call a frame with graphical information a vignette. We then extend the structure defined by FrameNet by adding new frames and vignettes, for example for wash an apple. The result we call VigNet. Finally, we extend VigNet with a system of nodes which instantiate frames; these nodes we call semantic nodes. They get their meaning only from the frames they instantiate. All three extensions are conservative extensions of frames and FrameNet. The semantic theory that VigNet instantiates we call Vignette Semantics and we believe it to be a conservative extension (and thus in the spirit of) frame semantics. This paper is structured as follows. In Section 2, we review frame semantics and FrameNet. Section 3 presents a more detailed description of VigNet, and we provide examples in Section 4. Since VigNet is intended to be used in a large-coverage system, the population of VigNet with knowledge is a crucial issue which we address in Section 5. We discuss related work in Section 6 and conclude in Section 7. Frame Semantics and FrameNet Frame Semantics (FS; Fillmore (1982)) is based on the idea that the meaning of a word can only be fully understood in context of the entire conceptual structure surrounding it, called the word's frame. When the meaning of a word is evoked in a hearer's mind all related concepts are activated simultaneously and we can rely on this structure to transfer information in a conversation. Frames can describe states-ofaffairs, events or complex objects. Each frame contains a set of specific frame elements (FEs), which are labeled semantic argument slots describing participants in the frame. 
For instance, the word buy evokes the frame for a commercial transaction scenario, which includes a buyer and a seller that exchange money for goods. A speaker is aware of what typical buyers, sellers, and goods are. He may also have a mental prototype of the visual scenario itself (e.g. standing at a counter in a store). In FS the role of syntactic theory and the lexicon is to explain how the syntactic dependents of a word that realizes a frame (i.e. arguments and adjuncts) are mapped to frame elements via valence patterns. FrameNet (FN; Baker et al. (1998), Ruppenhofer et al. (2010)) is a lexical resource based on FS. Frames in FN (around 1000) 1 are defined in terms of their frame elements, relations to other frames and semantic types of FEs. Beyond this, the meaning of the frame (how the FEs are related to each other) is only described in natural language. FN contains about 11,800 lexical units, which are pairings of words and frames. These come with annotated example sentences (about 150,000) to illustrate their valence patterns. FN contains a network of directed frame-to-frame relations. In the INHERI-TANCE relation a child-frame inherits all semantic properties from the superframe. The frame relations SUBFRAME and PRECEDES refer to sub-events and events following in temporal order respectively. The parent frame's FEs are mapped to the child's FEs. For instance CAUSE TO WAKE inherits from TRANSITIVE ACTION and its sleeper FE maps to agent. Other relations include PERSPEC-TIVE ON, CAUSATIVE OF, and INCHOATIVE OF. Frame relations captures important semantic facts about frames. For instance the hierarchical organization of INHERITANCE allows to view an event on varying levels of specificity. Finally, FN contains a small ontology of semantic types for frame elements, which can be interpreted as selectional restrictions (e.g. an agent frame element must be filled by a sentient being). Vignette Semantics In Section 1, we motivated VigNet by the need for a resource that allows us to relate language to a grounded semantics, where for us the graphical representation is a stand-in for grounding. We described three reasons for extending FrameNet to Vi-gNet: we need more meaning in a frame, we need more frames and more types of frames, and we need to instantiate frames in a clean manner. We discuss these refinements in more detail in this section. 1 Numbers refer to FrameNet 1.5 • Vignettes are frames that are decomposed into graphical primitives and can be visualized. Like other fames they are motivated by frame semantics; they correspond to a conceptual structure evoked by the lexical units which are associated with it. • VigNet includes individual frames for each (content) lexical item. This provides finergrained semantics than given with FrameNet frames themselves. These lexically-coupled frames leverage the existing structure of their parent frames. For example, the SELF MOTION frame contains lexical items for run and swim which have very different meaning even though they share the same frame and FEs (such as SOURCE, GOAL, and PATH). We therefore define frames for RUN and SWIM which inherit from SELF MOTION. We assume also that frames and lexical items that are missing from FrameNet are defined and linked to the rest of FrameNet as needed. • Even more specific frames are created to represent composed vignettes. These are vignettes that ground meaning in different ways than the primitive vignette that they specialize. The only motivation for their existence is the graphical grounding. 
For example, we cannot determine how to represent washing an apple from the knowledge of how to represent generic washing and an apple. So we define a new vignette specifically for washing a small fruit. From the point of view of lexical semantics, it uses two lexical items (wash and apple) and their interpretation, but for us, since we are interested in grounding, it is a single vignette. Note that it is not necessary to create specific vignettes for every concrete verb/argument combination. Because vignettes are visually inspired relatively few general vignettes (e.g. manipulate an object on a fixture) suffices to visualize many possible scenarios. • A new type of frame-to-frame relation, which we call SUBFRAME-PARALLEL is used to decompose vignettes into a set of more primitive semantic relations between their arguments. Unlike FrameNet's SUBFRAME relation which represents temporally sequential subframes, in SUBFRAME-PARALLEL, the subframes are all active at the same time, provide a conceptual and spatial decomposition of the frame, and can serve as spatial constraints on the frame elements. A frame is called a vignette if it can be decomposed into graphical primitives using SUBFRAME-PARALLEL relations. For instance in the vignette WASH-SMALL-OBJ for washing a small object in a sink, the washer has to be in front of the sink. We assert a SUBFRAME-PARALLEL relation between WASH-SMALL-OBJ and FRONTOF, mapping the washer FE to the figure FE and sink to ground. • FrameNet has a very limited number of semantic types that are used to restrict the values of FEs. Vignette semantics uses selectional restrictions to differentiate between vignettes that have the same parent. For example, the vignette invoked for washing a small object in a sink would restrict the semantic type of the theme (the entity being washed) to anything small, or, more generally, to any object that is washed in this way (apples, hard-boiled eggs, etc). The vignette used for washing a vehicle in a driveway with a hose would restrict its theme to some set of large objects or vehicle types. Selectional restrictions are asserted using the same mechanism as decompositions. • As mentioned in Section 1, in FrameNet annotations frame elements (FEs) are filled with text spans. Therefore, while frame semantics in general is a deep semantic theory, FrameNet annotations only represent shallow semantics and it is not immediately obvious how FrameNet can be used to build a full semantic representations of a sentence. In Vignette semantics, when a frame is evoked by a lexical item, it is instantiated as a semantic node. Its FEs are then bound not to subphrases, but to semantic nodes which are the instantiations of the frames evoked by those subphrases. Section 3.1 investigates semantic nodes in more detail. Section 3.2 illustrates different types of vignettes (objects, actions, locations) and how they are defined using the SUBFRAME PARALLEL relation. In Section 3.3 we discuss selectional restrictions. Semantic Nodes and Relational Knowledge The intuition behind semantic nodes is that they represent objects, events or situations. They can also represent plurals or generics. For instance we could have semantic node city, denoting the class of cities and a semantic node paris, that denotes the city Paris. Note that there is also a frame CITY and a frame PARIS that contain the conceptual structure associated with the words city and Paris. 
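To make the distinction between frames and semantic nodes concrete, the sketch below shows one possible in-memory representation. The class and field names are our own illustrative choices rather than part of any actual VigNet implementation, but the roles mirror the description above: frames carry the intensional, lexically evoked structure, while semantic nodes are extensional instances (such as the node paris) that bind to frame elements.

```python
# Illustrative sketch of frames vs. semantic nodes (names are hypothetical, not VigNet's API).
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Frame:
    """Intensional knowledge: a concept with labeled frame elements (FEs)."""
    name: str
    frame_elements: List[str]
    parents: List["Frame"] = field(default_factory=list)   # INHERITANCE links

@dataclass
class SemanticNode:
    """Extensional knowledge: a discourse referent instantiating a frame."""
    node_id: str
    frame: Frame
    bindings: Dict[str, "SemanticNode"] = field(default_factory=dict)  # FE -> filler node
    persistent: bool = False   # temporary (discourse) vs persistent (world knowledge)

# The frame CITY (conceptual structure) vs. the semantic node paris (a denotation).
CITY = Frame("CITY", frame_elements=["self"])
paris = SemanticNode("paris", frame=CITY, persistent=True)

# Instantiating SELF_MOTION for a sentence such as "John walked into the garden".
SELF_MOTION = Frame("SELF_MOTION", frame_elements=["self_mover", "goal"])
PERSON = Frame("PERSON", frame_elements=["self"])
LOCALE_BY_USE = Frame("LOCALE_BY_USE", frame_elements=["self"])
john = SemanticNode("john", PERSON)
garden = SemanticNode("garden1", LOCALE_BY_USE)
walking = SemanticNode("walk1", SELF_MOTION,
                       bindings={"self_mover": john, "goal": garden})
```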
Frames represent the linguistic and the conceptual aspect of knowledge; the intensional meaning of a word. They provide knowledge to answer questions such as "What is an apple?" or "How do you wash an apple?". In contrast, semantic nodes are extensional, i.e. denotations. They represent the knowledge to answer questions such as "In what season are apples harvested?" or "How did Percy wash that apple just now?". As mentioned above semantic nodes allow us to build full meaning representations of entire sentences in discourse. Therefore, while frame definitions are fixed, semantic nodes can be added dynamically during discourse understanding or generation to model the instances of frames that language is evoking. We call such nodes temporary semantic nodes. They they are closely related to the discourse referents of Discourse Representation Theory (Kamp, 1981) and related concepts in other theories. In contrast, persistent semantic nodes are used to store world knowledge which is distinct from the conceptual knowledge encoded within frames and their relations; for example, the frame for moon will not encode the fact that the moon's circumference is 6,790 miles, but we may record that using a knowledge based of external assertions semantic nodes are given their meaning by corresponding frames (CIR-CUMFERENCE, MILE, etc.). A temporary semantic node can become persistent by being retained in the knowledge base. Vignette Types and their Decomposition A vignette is a frame in the FrameNet sense that is decomposed to a set of more primitive frames using the SUBFRAME-PARALLEL frame-to-frame relation. The frame elements (FEs) of a vignette are defined as in FrameNet, except that our grounding in the graphical representation gives us a new, strong criterion to choose what the FEs are: they are the objects necessarily involved in the visual scene associated with that vignette. The subframes represent the spatial and other relations between the FEs. The resulting semantic relations specify how the scene elements are spatially arranged. This mechanism covers several different cases. For actions, we conceptually freeze the action in time, much as in a comic book panel, and represent it in a vignette with a set of objects, spatial relations between those objects, and poses characteristic for the humans (and other pliable beings) involved in that action. Action vignettes will typically be specialized to composed vignettes, so that the applicability of different vignettes with the same parent frame will depend on the values of the FEs of the parent. In the process of creating composed vignettes, FEs are often added because additional objects are required to play auxiliary roles. As a result, the FEs of an action vignette are the union of the semantic roles of the important participants and props involved in that enactment of the action with the FEs of the parent frame. For instance the following vignette describes one concrete way of washing a small fruit. Note that we have included a new FE sink which is not motivated in the frame WASH. 2 Note also that this vignette also contains a selectional restriction on its theme, which we will discuss in the next subsection and which is not shown here. WASH-SMALL-FRUIT(washer, theme, sink) In this notation the head row contains the vignette name and its FEs in parentheses. For readability we will often omit FEs that are part of the vignette but not restricted or used in any mentioned relation. 
The lower box contains the vignette decomposition and implicitly specifies SUBFRAME-PARALLEL frameto-frame relations. In the decomposition of a vignette V we use the notation F(a:b, · · · ) to indicate that the FE a of frame F is mapped to the FE b of V. When V is instantiated the semantic node binding to a must also be able to bind to b in F. Locations are represented by vignettes which express constraints between a set of objects characteristic for the given location. The FEs of location vignettes include these constituent objects. For example, one type of living room (of many possible ones) might contain a couch, a coffee table, and a fireplace in a certain arrangement. Even ordinary physical objects will have certain characteristic parts with size, shape, and spatial relations that can be expressed by vignettes. For example, an object type such as a kind of stop sign can be defined as a two-foot-wide, red, hexagonal metal sheet displaying the word "STOP" positioned on the top of a 6 foot high post. In addition, many real-world objects do not correspond to lexical items but are elaborations on them or combinations. These sublexical entities can be represented by vignettes as well. For example, one such 3D object in our text-to-scene system is a goat head mounted on a piece of wood. This object is represented by a vignette with two FEs (ghead, gwood) representing the goat's head and the wood. The vignette decomposes into ON(ghead, gwood). While there can be many vignettes for a single lexical item, representing the many ways a location, action, or object can be constituted, vignettes need not be specialized for every particular situation and can be more or less general. In one exteme creating vignettes for every verb/argument combination would clearly lead to a combinatorial explosion and is not feasible. In the other extreme we can define rather general vignettes. For example, a vignette USE-TOOL for using a tool on a theme can be represented by the user GRASPING the tool and REACH-ING towards the theme. These vignettes can be used in decompositions of more concrete vignettes (e.g. HAMMER-NAIL-INTO-WALL). They can also be used directly if no other more concrete vignette can be applied (because it does not exist or its selectional restrictions cannot be satisfied). In this way by defining a small set of such vignettes we can visualize approximate scenes for a large number of descriptions. Selectional Restrictions on Frame Elements To define a frame we need to specify selectional restrictions on the semantic type of its FEs. Instead of relying on a fixed inventory of semantic types, we assert conceptual knowledge and external assertions over persistent semantic types. This allows us to use VigNet's large set of frames to represent such knowledge. For example, an apple can be defined as a small round fruit. APPLE(self) SHAPEOF(figure:self, shape:spherical) SIZEOF(figure:self, size:small) APPLE is simply a frame that contains a self FE, which allows us to make assertions about the concept (i.e. about any semantic node bound to the self FE). Frame elements of this type are not unusual in FrameNet, where they are mainly used for frames containing common nouns (for instance the Substance FE contains a substance FE). In Vi-gNet we implicitly use self in all frames, including frames describing situations and events. We use the same mechanism to define specialized compound vignettes such as WASH SMALL FRUIT. 
We extend WASH in the following way to restrict it to small fruits (we abreviate F(self:a) as a=F for readability). Examples In this section we give further examples of visual action vignettes for the verb wash. The selectional restrictions and graphical decomposition of these vignettes vary depending on the type of object being washed. The first example shows a vignette for washing a vehicle. WASH-VEHICLE(washer, theme, instr, location) washer=PERSON, theme=VEHICLE, instr=HOSE, location=DRIVEWAY ONSURFACE(figure:theme, ground:location) FRONTOF(figure:washer, ground:theme) FACING(figure:washer, ground:theme) GRASP(grasper:washer, theme:instrument) AIM(aimer:washer, theme:instr, target:theme) The following two vignettes represent a case where the object being washed alone does not determine which vignette to apply. If the instrument is unspecified one or the other could be used. We illustrate one option in figure 1 (right). WASH-FLOOR-W-SPONGE(washer,theme,instr) washer=PERSON, theme=FLOOR, instr=SPONGE KNEELING(agent:washer), GRASP(grasper:washer, theme:instr), REACH(reacher:washer, target:theme) WASH-FLOOR-W-MOP(washer, theme, instr) washer=PERSON, theme=FLOOR, instr=MOP GRASP(grasper:washer, theme:instr), REACHWITH(reacher:washer, target:theme, instr:instr) It is easy to come up with other concrete vignettes for wash (washing windows, babies, hands, dishes...). As mentioned in section 3.2 more general vignettes can be defined for very broad object classes. In choosing vignettes, the most specific will be used (looking at type matching hierarchies), so general vignettes will only be chosen when more specific ones are unavailable. The following generic vignette describes washing any large object. WASH-LARGE-OBJECT(washer, theme instrument) washer=PERSON, theme=OBJECT, instrument=SPONGE, SIZEOF(figure:theme, size:large) FACING(figure:washer, ground:theme) GRASP(grasper:washer, theme:instrument) REACH(reacher:washer, target:theme) In our final example, a vignette for picking fruit uses the following assertion of world knowledge about particular types of fruit and the trees they come from: SOURCE-OF(theme:x, source:y), APPLE(self:x), APPLETREE(self:y) In matching the vignette to the verb frame and its arguments, the source frame element is bound to the type of tree for the given theme (fruit). VigNet We are developing VigNet as a general purpose resource, but with the specific goal of using it in textto-scene generation. In this section we first describe various methods to populate VigNet. We then sketch how we create graphical representations from Vi-gNet meaning representations. Populating VigNet VigNet is being populated using several approaches: • Amazon Mechanical Turk is being used to acquire scene elements for location and action vignettes as well as the spatial relations among those elements. For locations, Turkers are shown representative pictures of different locations as well as variants of similar locations, thereby providing distinct vignettes for each location. We also use Mechanical Turk to acquire general purpose relational information for objects and actions such as default locations, materials, contents, and parts. • We extract relations such as typical locations for actions from corpora based on co-occurance patterns of location and action terms. This is based on ideas described in (Sproat, 2001). We also rely on corpora to induce new lexical units and selectional preferences. 
• A large set of semantic nodes and frames for nouns has been imported from the noun lexicon of the WordsEye text-to-scene system (Coyne and Sproat, 2001). This lexicon currently contains 15,000 lexical items and is tied to a library of 2,200 3D objects and 10,000 images. Semantic relations between these nodes include parthood, containment, size, style (e.g. antique or modern), overall shape, material, as well as spatial tags denoting important spatial regions on the object. We also import graphically-oriented vignettes from WordsEye. These are used to capture the meaning of sub-lexical 3D objects such as the mounted goat head described earlier. • Finally, we intend to use WordsEye itself to allow users to visualize vignettes as they define them, as a way to improve vignette accuracy and relevancy to the actual use of the system. While the population of VigNet is not the focus of this paper, it is our goal to create a usable resource that can be populated with a reasonable amount of effort. We note that, as opposed to resources like FrameNet that require skilled lexicographers, we only need simple visual annotation that can easily be done by untrained Mechanical Turkers. In addition, as described in section 3.2, vignettes defined at more abstract levels of the frame hierarchy can be used and composed to cover large numbers of frames in a plausible manner. This allows more specific vignettes to be defined where the differences are most significant. VigNet is focused on visually-oriented language involving tangible objects. However, abstract, process-oriented language and relations such as negation can be depicted iconically with general vignettes. Examples of these can be seen in the figurative and metaphorical depictions shown in (Coyne and Sproat, 2001). Using VigNet in Text-to-Scene Generation To compose a scene from text input such as the man is washing the apple, it is necessary to parse the sentence into a semantic representation (evoking frames for each content word) and to then resolve the language-level semantics to a set of graphical entities and relations. To create a low-level graphical representation, all frame elements need to be filled with appropriate semantic nodes. Frames support the selection of these nodes by specifying constraints on them using selectional restrictions. The SUBFRAME-PARALLEL decomposition of vignettes then ultimately relates these nodes using elementary spatial vignettes (FRONTOF, ON, ...). Note that it is possible to describe scenes directly using these vignettes (such as The man is in front of the sink. He is holding an apple.), as was used to create the mock-ups in figure 1. Vignettes can be directly applied or composed together. Composing vignettes involves unifying their frame elements. For example, in washing an apple, the WASH-SMALL-FRUIT vignette uses a sink. From world knowledge we know (via instances of the TYPICAL-LOCATION frame) that washing food typically takes place in the KITCHEN. To create a scene we compose the two vignettes together by unifying the sink in the location vignette with the sink in the action vignette. Related Work The grounding of natural language to graphical relations has been investigated in very early text-to-scene systems (Boberg, 1972), (Simmons, 1975), (Kahn, 1979), (Adorni et al., 1984), and then later in Put (Clay and Wilhelms, 1996), and WordsEye (Coyne and Sproat, 2001).
Other systems, such as CarSim (Dupuy et al., 2001), Jack (Badler et al., 1998), and CONFUCIUS (Ma and McKevitt, 2006) target animation and virtual environments rather than scene construction. A graphically grounded lexical-semantic resource such as VigNet would be of use to these and related domains. The concept of vignettes as graphical realizations of more general frames was introduced in (Coyne et al., 2010). In addition to FrameNet, much work has been done in developing theories and resources for lexical semantics and common-sense knowledge. Verb-Net (Kipper et al., 2000) focuses on verb subcat patterns grouped by Levin verb classes (Levin, 1993), but also grounds verb semantics into a small number of causal primitives representing temporal constraints tied to causality and state changes. VerbNet lacks the ability to compose semantic constraints or use arbitrary semantic relations in those constraints. Conceptual Dependency theory (Schank and Abelson, 1977) specifies a small number of state-change primitives into which all verbs are reduced. Event Logic (Siskind, 1995) decomposes ac-tions into intervals describing state changes and allows visual grounding by specifying truth conditions for a small set of spatial primitives (a similar formalism is used by Ma and McKevitt (2006)). (Bailey et al., 1998) and related work proposes a representation in many ways similar to ours, in which lexical items are paired with a detailed specification of actions in terms of elementary body poses and movements. In contrast to these temporallyoriented approaches, VigNet grounds semantics in spatial constraints active at a single moment in time. This allows for and emphasizes contextual reasoning rather than causal reasoning. In addition, VigNet emphasizes a holistic frame semantic perspective, rather than emphasizing decomposition alone. Several resources for common-sense knowledge exist or have been proposed. In OpenMind and ConceptNet (Havasi et al., 2007) online crowd-sourcing is used to collect a large set of common-sense assertions. These assertions are normalized into a set of a couple dozen relations. The Cyc project is using the web to augment its large ontology and knowledge base of common sense knowledge (Matuszek et al., 2005). PRAXICON (Pastra, 2008) is a grounded conceptual resources that integrates motor-sensoric, visual, pragmatic and lexical knowledge (via WordNet). It targets the embodied robotics community and does not directly focus on scene generation. It also focuses on individual lexical items, while VigNet, like FrameNet, takes syntactic context into account. Conclusion We have described a new semantic paradigm that we call vignette semantics. Vignettes are extensions of FrameNet frames and represent the specific ways in which semantic frames can be realized in the world. Mapping frames to vignettes involves translating between high-level frame semantics and the lowerlevel relations used to compose a scene. Knowledge about objects, both in terms of their semantic types and the affordances they provide is used to make that translation. FrameNet frames, coupled with semantic nodes representing entity classes, provide a powerful relational framework to express such knowledge. We are developing a new resource VigNet which will implement this framework and be used in our text-to-scene generation system.
2014-07-01T00:00:00.000Z
2011-06-23T00:00:00.000
{ "year": 2011, "sha1": "93d204fe8589b4bd6ae736c5bf7f2cd1b124b0fd", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ACL", "pdf_hash": "f97784ca5228596638cecc8f783258e8330192bf", "s2fieldsofstudy": [ "Computer Science", "Linguistics" ], "extfieldsofstudy": [ "Computer Science" ] }
219213229
pes2o/s2orc
v3-fos-license
The Major Extracellular Protease of the Nosocomial Pathogen Stenotrophomonas maltophilia Stenotrophomonas maltophilia is increasingly emerging as a multiresistant pathogen in the hospital environment. In immunosuppressed patients, these bacteria may cause severe infections associated with tissue lesions such as pulmonary hemorrhage. This suggests proteolysis as a possible pathogenic mechanism in these infections. This study describes a protease with broad specificity secreted by S. maltophilia. The gene, termed StmPr1, codes for a 63-kDa precursor that is processed to the mature protein of 47 kDa. The enzyme is an alkaline serine protease that, by sequence homology and enzymic properties, can be further classified as a new member of the family of subtilases. It differs from the classic subtilisins in molecular size, in substrate specificity, and probably in the architecture of the active site. The StmPr1 protease is able to degrade several human proteins from serum and connective tissue. Furthermore, pan-protease inhibitors such as α1-antitrypsin and α2-macroglobulin were unable to abolish the activity of the bacterial protease. The data support the interpretation that the extracellular protease of S. maltophilia functions as a pathogenic factor and thus could serve as a target for the development of therapeutic agents. Stenotrophomonas maltophilia, formerly referred to as Xanthomonas maltophilia or Pseudomonas maltophilia (1,2), is an aerobic nonfermentative Gram-negative bacterium of widespread occurrence. For healthy humans, it is regarded as an opportunistic germ; it has been implicated in a variety of infections without distinctive clinical features (for a review, see Ref. 3). However, in immune-compromised patients, particularly those with bone marrow aplasia or receiving intensive chemotherapy, cases of fulminant hemorrhagic pneumonia have been reported, even with fatal outcome (4-6). In patients not surviving infections with S. maltophilia, histological inspection of the lung tissue revealed massive bleeding caused by damage to the lung epithelium (4). There are further reports demonstrating involvement of this bacterium in massive hemorrhagic processes of the small intestine and the subclavian artery accompanied by severe lesions of the tissue (5,6). These observations strongly suggest the participation of proteolytic activity, produced by the bacteria, which may damage the infected tissue. Indeed, it is known that members of the Pseudomonadaceae express and secrete a variety of proteases (cf. Ref. 7). Whereas the primary function of these enzymes is to provide a source of free amino acids for bacterial survival and growth, there is accumulating evidence that bacterial proteases may play a pathogenic role in the infected host by involvement in tissue invasion and destruction, evasion of host defenses, and modulation of the host immune system (8). The broad administration of antibiotics currently applied in cases of intensive care patients leads to selection of multiresistant S. maltophilia strains. Consequently, these bacteria are found with increasing frequency in the hospital environment. Because of the known multiresistance of this germ toward conventional antibiotics (for a review, see Ref. 9), bacterial proteases involved in the pathogenesis of human diseases are potential targets for specific drug development. This prompted us to test cultures of S. maltophilia obtained from patient material for the presence of proteolytic activity.
Indeed, a highly active protease was detected as a major secretion product of the isolated bacteria. This study describes the purification, cloning, and characterization of the S. maltophilia extracellular protease. Purification of the Protease-Cell-free supernatant (12.5 liters) was obtained from S. maltophilia cultures by centrifugation at 4°C and mixed with 80 ml of DE-52 (Whatman) cellulose equilibrated with 10 mM Tris/HCl buffer, pH 7.4, and the mixture was stirred overnight at 4°C. The matrix was then collected by sedimentation, transferred into a column, and washed with 10 mM Tris/HCl buffer, pH 7.4. Protein fractions were eluted by a linear gradient of 0-500 mM NaCl in the same buffer at a flow rate of 2 ml/min. 30 fractions of 24 ml were collected and assayed for proteolytic activity (see below). A single peak of activity was detected; the respective fractions were pooled and concentrated by ultrafiltration (Amicon YM 10 membrane) at 4°C to a final volume of 4 ml. This sample was divided into two aliquots, and each was fractionated at a flow rate of 1 ml/min over a 310-ml column of Fractogel EMD BioSec 650 (Merck) equilibrated with phosphate-buffered saline. Fractions of 6 ml were collected, and the two fractions containing most of the proteolytic activity were pooled and served to characterize the protease. When this purified preparation was compared with the crude bacterial supernatant, similar results were obtained for enzyme action (kinetic parameters, stability, inhibitor pattern, and salt dependence), indicating that the isolated protease is intact and represents the major, if not only, protease produced by the bacteria. Determination of StmPr1 Protein-The protein concentration in preparations of the native StmPr1 protease was determined on SDS-polyacrylamide gels as described previously (40). Purified recombinant StmPr1 protein, calibrated by the Biuret method (11), served as a standard. The results were comparable with values calculated from the optical density at 280 nm (mg/ml StmPr1 protease = 1.30 × A280). Electrophoresis of Proteins-SDS-PAGE was performed as described in Ref. 12. Protein-containing samples were denatured with 10% trichloroacetic acid before electrophoresis. Without this pretreatment, additional bands of lower molecular mass appeared, obviously due to self-digestion of the protease. Protein precipitates were collected by centrifugation and washed with methanol to remove residual trichloroacetic acid. For autofluorography with a covalent inhibitor specific for serine proteases, samples were incubated with 5 μCi of [1,3-3H]diisopropylfluorophosphate (PerkinElmer Life Sciences; 8.4 Ci/mmol) for 2 h at 37°C, precipitated with 10% trichloroacetic acid, and subjected to SDS-PAGE. Polyacrylamide gels were fixed with 10% acetic acid/30% methanol, equilibrated first with water and then with 1 M sodium salicylate, dried, and exposed to Kodak X-Omat film for 90 h at -80°C. Enzyme Assays-For the initial detection of proteolytic activity in bacterial supernatants, a microassay using the nonspecific chromogenic substrate azoalbumin (Sigma) was performed as described in Ref. 13.
In all other cases, a substrate specific for serine proteases was used (0.5 mM Suc-Ala-Ala-Pro-Phe-pNA, unless otherwise stated). Hydrolysis was allowed to occur in 200 μl of 20 mM sodium phosphate, pH 9.0, containing 400 mM NaCl at 37°C. The amount of released p-nitroaniline within initial time intervals was measured at 405 nm (ε405 = 9600 M^-1 cm^-1). For determination of the IC50 of protease inhibitors (antipain and chymostatin), assays contained 1.4 mM (i.e. the Km) of the substrate Suc-Ala-Ala-Pro-Phe-pNA and inhibitor over a wide range of concentrations. The IC50 is obtained as the constant b in a nonlinear regression analysis of the function a/(b + 10^x) when the reaction velocity is plotted versus log10 of the inhibitor concentration (x). Kinetic experiments with various synthetic peptide p-nitroanilide substrates were carried out in 100 mM Tris/HCl buffer, pH 8.2, at 25°C and in the presence of 5% dimethylformamide. The enzyme concentration was usually in the range of 1.95 × 10^-8 to 9.12 × 10^-9 M, and the concentration of the substrate varied between 1.6 × 10^-3 and 1.2 × 10^-4 M. Kinetic parameters were calculated from initial rate measurements of substrate hydrolysis using a nonlinear regression analysis based on the function Vmax * x/(Km + x), with x = the concentration of substrate. Protein Sequencing-After SDS-PAGE, the protein was blotted onto polyvinylidene difluoride membranes (Immobilon P; Millipore) and stained with Coomassie Brilliant Blue R-250. The excised band was sequenced by standard Edman degradation on an automated sequencer (Applied Biosystems 476A). To obtain internal sequence information, the Coomassie Brilliant Blue R-250-stained protein band was cut out of the SDS gel and in-gel digested with the endoproteases Lys-C or Asp-N (Roche Molecular Biochemicals) in 50 mM Tris/HCl, pH 8.5, containing 1 mM EDTA (for digestion with Lys-C) or 50 mM Tris/HCl, pH 8.0 (for digestion with Asp-N), at 37°C overnight. The peptides obtained were separated by reverse phase HPLC on a Vydac C4 column (250 × 2.1 mm) at a flow rate of 200 μl/min. The following gradient was applied: 2-80% B over a 50-min period (solvent A, 0.1% trifluoroacetic acid in water; solvent B, 0.085% trifluoroacetic acid in 70% acetonitrile). The isolated peptides were sequenced by Edman degradation following standard procedures. Cloning of the StmPr1 Gene-DNA oligomers were synthesized complementary to the amino-terminal sequence PYYQQYQ and to the reverse complement of the sequence APAAMRT obtained by digestion of the purified protease with the endoprotease Lys-C (see above). Using these primers (40 pmol each), amplification of chromosomal DNA (200 ng) from S. maltophilia with Taq polymerase (Qiagen) yielded a DNA fragment of 930 bp, which was sequenced (Applied Biosystems 377). The sequence showed homology to known protease sequences and served to design gene-specific primers. The rest of the upstream and downstream portions of the gene were cloned by alternate application of inverse PCR (14) using the EcoRII and HinfI restriction sites and PCR using as template dA-tailed fragments of genomic DNA generated by AatII, PstI, or SphI digestion, and one gene-specific oligonucleotide plus poly-dT as primers. A final PCR product, obtained using the Expand Long Template PCR System (Roche Molecular Biochemicals) and primers comprising the identified start codon and stop codon, respectively, was sequenced, cloned into the pGem-T Easy vector (Promega), and resequenced for verification.
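The Michaelis-Menten and IC50 fits described in the enzyme assay paragraph above can be sketched with a short nonlinear regression in Python/SciPy. This is a minimal illustration only: the substrate concentrations, velocities, inhibition data and the assumed enzyme concentration below are invented placeholders, not values from this study.

```python
import numpy as np
from scipy.optimize import curve_fit


def michaelis_menten(s, v_max, k_m):
    # Initial-rate model v = V_max * [S] / (K_m + [S]).
    return v_max * s / (k_m + s)


def inhibition_model(x, a, b):
    # Velocity versus x = log10([inhibitor]); the fitted constant b is the IC50.
    return a / (b + 10 ** x)


# Placeholder saturation data: [S] in mM, initial velocities in arbitrary units.
s = np.array([0.12, 0.25, 0.5, 1.0, 1.6])
v = np.array([0.9, 1.6, 2.5, 3.4, 4.0])
(v_max, k_m), _ = curve_fit(michaelis_menten, s, v, p0=(5.0, 1.0))

# k_cat follows from V_max and the total enzyme concentration (placeholder value);
# for this to be meaningful, V_max must be expressed in concentration per unit time.
enzyme_conc = 1.0e-8  # M
k_cat = v_max / enzyme_conc

# Placeholder inhibition data: log10([I]) versus residual velocity.
log_i = np.array([-8.0, -7.0, -6.0, -5.0, -4.0])
v_i = np.array([3.9, 3.6, 2.0, 0.4, 0.05])
(a, ic50), _ = curve_fit(inhibition_model, log_i, v_i, p0=(4.0e-6, 1.0e-6))

print(f"K_m = {k_m:.2f} mM, k_cat = {k_cat:.2e}, IC50 = {ic50:.2e}")
```

The same curve-fitting call, applied to each peptidyl substrate in turn, yields the Km and kcat values of the kind reported later for the P1-P4 specificity analysis.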
The sequence of the StmPr1 gene has been deposited in the European Molecular Biology Laboratory nucleotide sequence data base; the accession number is AJ291488. Enzyme Hydrolysis of the Oxidized Insulin B Chain-Hydrolysis of the oxidized insulin B chain (Sigma) was performed in 50 mM Tris/HCl buffer, pH 8.0, at room temperature for 10 min and for 4 h. The reaction mixture (2 ml) contained the enzyme and the substrate in a ratio of 1:200. The peptides obtained after the enzymatic hydrolysis were separated by reverse phase HPLC on a Vydac C18 column (125 × 2.1 mm) at a flow rate of 200 μl/min, and the following gradient was applied: 2-65% B over a 50-min period (solvent A, 0.1% trifluoroacetic acid in water; solvent B, 0.085% trifluoroacetic acid in 70% acetonitrile). The obtained peptides were identified by mass spectrometry on a hybrid tandem mass spectrometer (Qtof II; Micromass) equipped with a nanoelectrospray ion source. 10 μl of the collected fractions were vacuum dried and redissolved in 5 μl of 60% methanol/5% formic acid. 1 μl of this solution was transferred into a gold-coated nanoelectrospray needle (Micromass). From the masses of the peptides determined, a tentative assignment to fragments of the insulin B chain was derived. The assignment was confirmed by subsequent tandem mass spectrometric measurements of the peptide fragmentation pattern or, in some cases, by Edman degradation. Detection of Proteolytic Activity in Bacterial Cultures A culture of S. maltophilia was grown from a specimen of an immunocompromised patient. Proteolytic activity was detected in the cell-free growth medium of the bacterial culture using azoalbumin as an unspecific chromogenic substrate (data not shown). To optimize bacterial cultures as a source for purification of the putative protease, the production of the enzyme during culture growth was measured. As shown in Fig. 1, the proteolytic activity is hardly detectable in the early stages of the culture; rather, protease production is up-regulated only toward the end of the exponential phase of the growth curve. Proteolytic activity reached a maximum after about 22 h and remained unchanged for at least 3 days. Protease Purification A culture supernatant of S. maltophilia served as a source for the protease isolation. Adsorption on an anion exchange resin was applied to concentrate and separate proteins from the bacterial broth; elution by a salt gradient yielded a single peak of proteolytic activity, which was then further fractionated by gel filtration. SDS gel electrophoresis of the protease-containing fraction revealed one major band of 47-kDa apparent molecular mass (Fig. 2A). Comparison with the electrophoretic pattern of the crude bacterial supernatant indicated that the 47-kDa protein represents the major secretory product of S. maltophilia. Amino-terminal sequencing of the 47-kDa band yielded the sequence LAPNDPYYQQ, which turned out to be absent from protein sequence data bases. The sequence, however, showed homology with the amino termini of several known bacterial proteases, the closest of which is a serine protease from Dichelobacter nodosus (15), a member of the family of subtilisin-like proteases (cf. Ref. 7). Covalent coupling with the radioactive inhibitor [3H]diisopropylfluorophosphate confirmed that the 47-kDa protein of S. maltophilia is a serine protease (Fig. 2B).
When the crude bacterial supernatant was allowed to react with the inhibitor, autoradiographs also showed mainly the 47-kDa band; the faint labeling of a lower molecular mass band may indicate a degradation product or the presence of another serine protease in trace amounts. Thus, the 47-kDa protein seems to be the enzyme mainly responsible for the extracellular proteolytic activity of S. maltophilia. Sequence Determination by Molecular Cloning To analyze the sequence of the protease and determine its relationship to other proteases, the gene was cloned by PCR techniques. Sequences from the amino terminus of the purified protein and from an internal peptide (PLAPAAMRT) generated by Lys-C digestion served to design degenerate primers. Using genomic DNA of S. maltophilia as a template, a 930-bp amplified fragment was obtained. Sequencing of the missing carboxyl-terminal portion of the gene required additional steps of 3'-extension: PCR using poly(A)-tailed fragments of genomic DNA as PCR template and inverse PCR from highly diluted DNA fragments were applied until a stop codon was found (Fig. 3). Because it is common for most prokaryotic extracellular proteases to be produced as larger precursor proteins (cf. Ref. 7), 5'-extension of the DNA sequence was performed to obtain the sequence of the entire gene. Applying the same techniques for extension as outlined above, a sequence was obtained that contained a stop codon within the reading frame and only one ATG coding for a methionine in position -132. However, two points argue against this ATG coding for the translation initiation of the protease precursor: (i) it is not preceded by a typical Shine-Dalgarno ribosome binding site, and (ii) the sequence 3' to this ATG does not predict a signal peptide typical for Gram-negative bacteria (16). Therefore, we assume that, following an alternative bacterial codon usage, a GTG codes for translation initiation, resulting in a methionine in position -150. In this case, the syntax for both Shine-Dalgarno and signal sequences would be met (Fig. 3). Further evidence for this GTG to function as a start codon for the protease precursor came from the recombinant expression of the gene. When the DNA starting with the GTG in question (and not with ATG in position -132) was expressed in Escherichia coli, the protein was correctly processed, resulting in the mature protease with full enzymatic activity (data not shown). The DNA sequence of the gene was further established, and the amino-terminal sequences of the processed recombinant protein and of the native protease were found to be identical. Moreover, antibodies generated against the native protein also recognized the recombinant gene product (data not shown). Taken together, the open reading frame encodes a protein with a deduced molecular mass of 63.0 kDa, corresponding to 618 amino acids in length (Fig. 3). The 27-residue stretch of the amino terminus was predicted to be the signal peptide containing a potential signal peptidase cleavage site (16) between Ala-124 and Ala-123. Following the putative signal sequence and preceding the amino terminus of the mature protein, there is a pro-region of 123 residues. Finally, the sequence between the amino terminus, identified in the native protein, and the carboxyl terminus as indicated by the stop codon corresponds to a protein that encompasses 467 amino acids with a theoretical pI = 4.91 and a calculated molecular mass of 47,446 Da. The S.
maltophilia protease gene was termed StmPr1 (European Molecular Biology Laboratory accession number AJ291488); the term takes into account that another protease gene (StmPr2; S. Windhorst and W. Weber, manuscript in preparation) was detected in S. maltophilia during preparation of this manuscript. Comparison of the StmPr1 protease sequence with those of known bacterial serine proteases confirmed its relation to the subtilisin family of proteases (cf. Ref. 7). Alignment of the sequences indicated that Asp42, His105, and Ser289 form the putative catalytic triad characteristic for serine proteases. In the active site region, there is considerable homology with other subtilisins; in Fig. 3, conserved residues that are shared with subtilisin BPN' (17) and proteinase K (18) are marked with open boxes as typical representatives of the "classic" subtilisins. Nevertheless, the StmPr1 protease sequence reveals significant structural differences from these related enzymes due to large inserts adjacent to the catalytic His105 and Ser289. Therefore, compared with the catalytic triad formed by Asp32, His64, and Ser221 in the typical members of the subtilisin family, the architecture of the active site should be different in the StmPr1 protease. In addition, this enzyme has a longer carboxyl-terminal extension beyond the active site, which, together with the inserts in the catalytic region, makes the entire sequence almost 100 residues longer. With these structural properties, the StmPr1 protease is similar to the extracellular proteases of Xanthomonas campestris (19), D. nodosus (15), and Alteromonas sp. (20); the homology with these proteases is 49%, 40%, and 38% identity, respectively, for the mature proteins. On the other hand, there is only low homology within the region of the carboxyl-terminal extensions, and no homology can be seen between the prosequences. The sequence homology between the StmPr1 protease and the classic subtilisins is also lower, e.g. 23% identity with proteinase K (18) within the region of the mature enzymes. Properties of the Enzyme In view of the sequence differences between the well-characterized subtilisins and the StmPr1 protease, it was important to analyze the enzymatic activity of the new protease in detail. The StmPr1 protease hydrolyzes the widely used chromogenic substrate Suc-Ala-Ala-Pro-Phe-pNA with a Km of 1.4 mM. This substrate was used for characterization of the enzyme purified from the native source. Effect of pH The enzyme activity of the purified StmPr1 protease was measured in the pH range 4-11 (Fig. 4A) and showed a typical bell shape. The optimum pH was 9.0, classifying the StmPr1 protein as an alkaline protease. Pre-exposure of the protease to extreme pH (0.1 M acetic acid, pH 3) for 10 min on ice resulted in a 68% loss of enzyme activity. Modulation of Enzyme Activity A study of the salt requirement for enzyme activity was conducted by assaying the enzyme at pH 9. Raising the final NaCl concentration to 0.4 M increased activity about 4-fold (Fig. 4B). No further increase in enzyme activity was observed at higher salt concentrations (1 M NaCl was the maximum concentration tested). The stimulating effect of NaCl has also been reported for several other proteases of the subtilisin family (21). Enzyme activity was also found to be stimulated by calcium, which was effective at low concentrations: a maximum 3.5-fold increase in activity was observed at 50 mM calcium chloride. The effects of Na+ and Ca2+ were not additive.
Ca2+ can be replaced by Mg2+ to achieve the same activating effect (data not shown). Thus, the StmPr1 protease is an enzyme dependent on bivalent metal ions; the strong activation effect of the cations can be explained by a conformational change leading to a catalytically more active conformation. Na+ possibly binds to the same site and may substitute for Ca2+ at higher concentrations. A remarkable property of the StmPr1 protease is its relative stability toward the anionic detergent sodium dodecyl sulfate (Fig. 4C). The enzyme preserved 85% of its activity in the presence of 0.1% detergent, and even at a concentration of 1% dodecyl sulfate, 45% of proteolytic activity was retained. Similar results indicating a particular conformational stability have been reported for some, mainly microbial, proteases (cf. Ref. 22). By contrast, the mammalian serine protease chymotrypsin tested under the same conditions completely lost its activity at a concentration of 0.1% dodecyl sulfate. To get more information on the type of protease, the effect of a series of protease inhibitors on StmPr1 enzyme activity was tested (Table I). The enzyme was effectively inhibited by antipain, chymostatin, and phenylmethylsulfonyl fluoride, whereas other serine protease inhibitors such as leupeptin, TLCK, and TPCK were not effective. The lack of inhibitory activity of TPCK is in contrast to the reported effect of this compound on subtilisin (21,23). The StmPr1 protease is not inhibited by EDTA, presumably because the calcium bound to the enzyme cannot be chelated, and the protein remains structurally unaffected. This result demonstrates that metal ions are not directly involved in the catalytic mechanism, which is characteristic for subtilisins and other serine proteases. Of interest with respect to the pathogenic potential of the StmPr1 protease was the observation that the human plasma protease inhibitors α1-antitrypsin and α2-macroglobulin could not abolish the proteolytic activity of the enzyme; as shown below (Fig. 6), these two polypeptide inhibitors themselves are subject to proteolytic digestion through the bacterial protease. The properties of the StmPr1 protease clearly show that this enzyme is different from proteases isolated from P. maltophilia in 1975 (24) and in 1985 (25). These enzymes are strongly inhibited by EDTA, whereas antipain, which is a potent inhibitor of the StmPr1 protease, was found to be ineffective (25). Moreover, both enzymes differ in molecular size from the protein reported here. Obviously, at that time, P. maltophilia was a heterogeneous species due to the other differentiation criteria applied. Therefore, these observations strongly suggest that the bacteria used at that time are not identical with the strain of S. maltophilia that served as a source of the StmPr1 protease. Substrate Specificity In view of the pathogenic effect that the StmPr1 protease may exert in infected patients, the substrate specificity of this enzyme was studied in detail. This is a prerequisite for the development of specific inhibitors to be tested as therapeutic agents. Proteolytic Activity toward the Oxidized Insulin B Chain-Proteolytic specificity of the StmPr1 protease was determined using the oxidized insulin B chain as a substrate with a known sequence. The proteolytic fragments were analyzed by HPLC and mass spectroscopy. A total of eight bonds were cleaved (Fig. 5). The results characterized the protease as an endopeptidase with broad specificity.
StmPr1 protease attacks peptide bonds comprising the carboxylic groups of both hydrophobic and hydrophilic residues. Comparison with the alkaline protease from D. nodosus and with subtilisin BPN' showed that none of the specificity patterns is identical with that of StmPr1 protease. Table II (legend; the tabulated values are not reproduced here): Kinetic constants for the StmPr1 proteinase-catalyzed hydrolysis of peptidyl substrates. Reactions were performed as described under "Experimental Procedures." The various substrates were assayed in two separate determinations yielding similar results. Km and kcat parameters were calculated from one representative saturation curve with standard errors resulting from the nonlinear regression analysis. Errors of the quotient kcat/Km are presented as the root of the variance calculated according to Ref. 30. P1-P4 Specificity-P1 specificity of StmPr1 protease was investigated with a series of 10 tetrapeptide 4-nitroanilides in which only the amino acid residue in position P1 was varied (Table II). Determination of the specificity constant kcat/Km showed a strong preference for the positively charged side chain of lysine. The high efficiency of the enzyme is derived from both greater binding (lower Km) and increased turnover (higher kcat). The enzyme efficiently hydrolyzed substrates containing aromatic or aliphatic groups in position P1, but with a lower efficiency. The S1 subsite accepted the negatively charged side chains of glutamic and aspartic acid very poorly. The following order of specificity, characterized by the ratio kcat/Km, was observed. This order differs considerably from those of other subtilases like BPN', Savinase, Esperase, and so forth (see Refs. 26 and 27 and the references therein). The results of the kinetic investigations described here lead to the conclusions below. (i) The S1 subsite of StmPr1 protease is negatively charged. It can be supposed that a carboxylic group(s) is (are) located in this site. This can explain the high affinity for the side chain of lysine and the very low efficiency toward substrates with aspartic or glutamic acid in P1. S1 can also accommodate residues containing nonpolar side chains but with lower affinity. (ii) Most probably, S1 is a deep and narrow "cavity" at the bottom of which negatively charged group(s) is (are) located. The best interaction is realized with the side chain of lysine (four methylene groups). Shortening of the chain by one CH2 group, as in the case of ornithine, drastically decreased the catalytic efficiency. The distorted binding of the bulky side chain of valine seems to be due to a steric repulsion and the narrow entrance of the cavity. (iii) In general, the StmPr1 protease exhibits a mixed type of P1 specificity (trypsin-like and, to a lesser extent, subtilisin-like activity). This is unusual for subtilases. The subsite S2 prefers Pro instead of Leu in position P2 (Table II). The low efficiency toward Suc-Phe-Leu-Phe-pNA is due to a decreased turnover number. Some subtilases, like Esperase, Savinase, and subtilisin BPN', exhibit an opposite preference (26,27). StmPr1 protease efficiently hydrolyzes tetrapeptide p-nitroanilides with different P3 residues. The enzyme definitely prefers Leu and Gly in position P3 (Table II). The absence of considerable discrimination between P3 residues of different nature can be explained by a location of the subsite S3 at or near the surface of the protein globule. The following decreasing order of P3 specificity was observed: Leu > Gly > Phe = Ala > Glu.
This order is completely different from those of other proteases of this family. Subtilases exhibit a preference for the aromatic group of Phe in P4 because hydrophobic forces predominate in the S4-P4 interactions. As a result, Suc-Phe-Ala-Ala-Phe-Phe-pNA is one of the most favorable substrates for this group of proteases. However, the catalytic efficiency of the StmPr1 protease is 2 orders of magnitude lower than those of typical subtilases (Table II and Ref. 26). The low efficiency is due mainly to a decreased turnover number. This result again demonstrates the specific active site structure of the investigated protease, which is somewhat different from those of typical subtilases. Reactivity toward Relevant Human Proteins After having demonstrated with synthetic substrates the broad specificity of StmPr1 protease, it was important to test some human proteins that could be substrates in vivo. As shown in Fig. 6, the enzyme degrades protein components of connective tissue such as collagen and fibronectin. This property of the bacterial protease may contribute to the tissue destruction seen in infected patients. Also, the serum component fibrinogen was completely degraded, indicating that the StmPr1 protease may interfere with the process of blood clotting. It has been shown above that the physiological protease inhibitors α1-antitrypsin and α2-macroglobulin present in serum at high concentrations are unable to abolish the StmPr1 proteolytic activity; Fig. 6, E and F, now demonstrates that these protein inhibitors, too, are subject to degradation. It is noteworthy that when immunoglobulin G was incubated with StmPr1 protease, the heavy chain appeared to be cleaved at a specific site, giving rise to two smaller fragments; Fig. 6F shows the result obtained with a mouse monoclonal IgG1. Polyclonal IgG from human serum principally yielded the same result but with more diffuse bands (data not shown) due to the heterogeneity of the immunoglobulin fraction. Taken together, the StmPr1 protease appears not only to be associated with tissue destruction but may also possess the ability to inactivate components of the host defense mechanism. Cell-damaging Activity To verify the biological significance of the obtained data, cultures of human fibroblasts were exposed to supernatants of S. maltophilia (Fig. 7). After application of the cell-free bacterial medium, significant changes in cell morphology were observed: the cell layer partially condensed, forming cell-free areas, and finally detached from the culture plate. The same cell-damaging effect was achieved by addition of the purified StmPr1 protein to the fibroblast culture (Fig. 7C). The destructive effect of both bacterial supernatant and purified enzyme could be prevented by preincubation with chymostatin, which has been shown above to be a potent inhibitor of the protease. This experiment demonstrates that secretions of S. maltophilia are able to destroy living cells and that the StmPr1 protease is the major factor responsible for this effect. Therefore, it seems likely that the tissue lesions seen in infected patients are a consequence of StmPr1 action. Conclusions A new bacterial protease with broad specificity has been characterized that is important from both a biochemical and a medical point of view. The sequence and the enzymic properties demonstrate that this protease is a new member of the family of subtilases.
It differs from the classic subtilisins by its larger molecular size and, presumably, by the architecture of the catalytic site. In this respect, the StmPr1 protease is homologous to the extracellular proteases of X. campestris, a plant pathogen causing black rot in crucifers, and D. nodosus, which is the causative pathogen of ovine foot rot, a disease characterized by separation of the hoof from the epidermal tissue. In both cases, the pathological situation seems to be associated with proteolytic tissue damage. Consequently, the StmPr1 protease is likely to function as a pathogenic factor as well. Broad spectrum antibiotic treatments causing bacterial selection combined with the multiresistance of S. maltophilia force the development of new therapeutic strategies. A possible approach to this problem is to interfere with the pathogenic mechanisms of the bacteria (in the case of S. maltophilia, to suppress protease-mediated tissue invasion and destruction). In this context, inhibitors of the StmPr1 protease should be of therapeutic value. It seems important, however, that such inhibitors do not affect host proteases. Fortunately, little structural relationship seems to exist between the prokaryotic and eukaryotic proteases, despite similar mechanisms of action (cf. Ref. 8). Therefore, it should be possible to design inhibitors with the required specificity. The development of such discriminating inhibitors is not without precedence: human immunodeficiency virus protease inhibitors, designed on the basis of crystal structures of the target protein, have been successfully introduced into therapy of AIDS. The data presented here should pave the way toward determination of the StmPr1 protease structure. Crystallization of the protein will be facilitated (cf. Ref. 28) by complexing with inhibitor molecules as developed on the basis of the enzyme kinetics presented.
2019-03-30T13:12:22.143Z
2002-03-29T00:00:00.000
{ "year": 2002, "sha1": "c2b162e7ef73aa414bbeb51cdc79f01d402225af", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/277/13/11042.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "e85f3c09b67c2d3a735d5b00d887f52da04e1ccf", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
12305434
pes2o/s2orc
v3-fos-license
Genetic structure and core collection of the World Olive Germplasm Bank of Marrakech: towards the optimised management and use of Mediterranean olive genetic resources The conservation of cultivated plants in ex-situ collections is essential for the optimal management and use of their genetic resources. For the olive tree, two world germplasm banks (OWGB) are presently established, in Córdoba (Spain) and Marrakech (Morocco). This latter was recently founded and includes 561 accessions from 14 Mediterranean countries. Using 12 nuclear microsatellites (SSRs) and three chloroplast DNA markers, this collection was characterised to examine the structure of the genetic diversity and propose a set of olive accessions encompassing the whole Mediterranean allelic diversity range. We identified 505 SSR profiles based on a total of 210 alleles. Based on these markers, the genetic diversity was similar to that of cultivars and wild olives which were previously characterised in another study indicating that OWGB Marrakech is representative of Mediterranean olive germplasm. Using a model-based Bayesian clustering method and principal components analysis, this OWGB was structured into three main gene pools corresponding to eastern, central and western parts of the Mediterranean Basin. We proposed 10 cores of 67 accessions capturing all detected alleles and 10 cores of 58 accessions capturing the 186 alleles observed more than once. In each of the 10 cores, a set of 40 accessions was identical, whereas the remaining accessions were different, indicating the need to include complementary criteria such as phenotypic adaptive and agronomic traits. Our study generated a molecular database for the entire OWGB Marrakech that may be used to optimise a strategy for the management of olive genetic resources and their use for subsequent genetic and genomic olive breeding. Electronic supplementary material The online version of this article (doi:10.1007/s10709-011-9608-7) contains supplementary material, which is available to authorized users. Introduction The olive tree (Olea europaea L.) is one of the most important fruit crops of the Mediterranean area (Zohary and Spiegel-Roy 1975). Palynological and anthracological (fossil charcoal) studies have shown that wild olive populations were present in eastern and western Mediterranean zones before the Neolithic (Terral et al. 2004;Carrión et al. 2010). Early domesticated forms were probably disseminated during successive human migrations (especially from east to west) throughout the Mediterranean Basin, but olive selection from local western populations has also been revealed by genetic studies (Besnard et al. 2002;Breton et al. 2006;Khadari et al. 2008). Today, the area devoted to olive growing worldwide is estimated at 8.8 Mha (IOC 2007). It is one of the most economically important trees in Mediterranean areas, largely due to its multiple uses (e.g. oil, canned fruit, wood, ornamental uses, medicinal applications). Over 2,000 cultivars have been described, exhibiting significant levels of variation in oil content, fruit size and adaptation to local environmental conditions (Bartolini et al. 1998), but this number is probably underestimated since there is a lack of information on minor local varieties (Cantini et al. 1999). 
Olive growing is undergoing a sharp transition from traditional to modern orchards, with a reduced number of main varieties featuring interesting agronomic traits such as yield, oil quality, and adaptive traits related to biotic and abiotic stresses. For instance, the ''Picual'' and ''Arbequina'' varieties have been massively planted over the last two decades in Andalusia and Catalonia, respectively (Belaj et al. 2010). A similar trend was noted in Portugal, with the main cultivar ''Galega'' grown in about 80% of the olive groves (Gemas et al. 2004), and in Morocco where ''Picholine Marocaine'' is the dominant cultivar throughout the country (Khadari et al. 2008). Hence, despite the high initial varietal diversity, a recent trend towards establishing modern orchards based on the most productive cultivars leads to high erosion of this germplasm. Several olive germplasm collections have been created at national and regional levels to manage ex-situ olive genetic resources for conservation purposes and eventual use in subsequent breeding programs (Bartolini 2008). The first major attempt to conserve and characterise the most important cultivars from all olive growing countries led to the establishment of the World Olive Germplasm Bank in Córdoba, Spain (OWGB Córdoba). This bank was initiated by FAO-INIA in 1970, with the contribution of the International Olive Oil Council (IOOC; Caballero et al. 2006). It includes Spanish cultivars that were collected by Barranco and Rallo (2000) and varieties originating from other Mediterranean countries. OWGB Córdoba has served for many studies using morphological descriptors (Caballero and del Río 2002) and molecular markers such as random amplified polymorphism DNA (RAPDs) and single sequence repeat (SSRs; Belaj et al. 2003Belaj et al. , 2004. Over the course of the RESGEN project, numerous surveys and studies on conservation and characterisation have been conducted by each of the following 15 participating countries: Algeria, Croatia, Cyprus, Egypt, France, Greece, Israel, Italy, Morocco, Portugal, Serbia-Montenegro, Slovenia, Spain, Syria, Tunisia (Caballero et al. 2006). These partners have completed their sampling by collecting local olive cultivars based on a morphological characterisation. As genetic redundancies, homonymy and synonymy cases are common problems in the management of ex-situ collections (Engels and Visser 2003), morphological description has been complemented by the use of molecular markers. Recently, the development of SSRs in olive (e.g. Sefc et al. 2000;de La Rosa et al. 2002) has significantly enhanced the possibility of individual olive cultivar identification. After testing 37 SSR loci for their reproducibility and discriminating power in four independent laboratories, Baldoni et al. (2009) proposed a consensus list of SSRs for genotyping of cultivated olive. More recently, a second international germplasm bank was set up at the experimental orchard of Tessaout [National Institute of Agronomic Research, Marrakech, Morocco; (OWGB Marrakech)] in 2003. Compared to OWGB Córdoba, OWGB Marrakech was set up in a different scientific context with more knowledge available about the plant material. To optimise olive germplasm sampling, local genetic resources had been characterised in several Mediterranean countries using standardised morphological descriptors. The bank was established by introducing previously characterised genetic resources from each Mediterranean area. For some partner countries (e.g. 
Spain; Barranco and Rallo 2000), a set of accessions representative of the local diversity was proposed following morphological or molecular characterisation. In 2010, OWGB Marrakech included 561 accessions originating from 14 Mediterranean countries and further introduction of additional olive germplasm is ongoing. To optimise the management and use of the large olive ex-situ collections, it is essential to select a sub-sample of accessions, so-called core collections, displaying the overall genetic diversity and phenotypic variability, as first proposed by Frankel and Brown (1984). Several strategies have been proposed to facilitate the construction of core collections, which can be classified into two groups according to the allocation methods. The first one is based on maximising the variability, including the M-Method strategy developed by Schoen and Brown (1993) and implemented in the MSTRAT software package (Gouesnard et al. 2001); and the second group, known as the stratified method, is based on similarity clustering (Escribano et al. 2008). Among numerous potential applications, core collections can be used as a first step in genetic association studies for detecting quantitative trait loci (QTLs) related to agromonic traits (Barnaud et al. 2006;Le Cunff et al. 2008;Aranzana et al. 2010). Several molecular characterisation studies have focused on olive germplasm at local or national levels (Belaj et al. 2004;Banilas et al. 2003), but also at the scale of the Mediterranean Basin (Besnard et al. 2001a;Sarri et al. 2006). Olive cultivars analysed in these latter studies were considered as representative of local or Mediterranean genetic diversity, but to date no genetic studies of the entire OWGB have been published. Different aspects have been addressed in previous studies and, for instance, an analysis of genetic relationships among Mediterranean olive cultivars revealed a correlation between genetic structure and geographical origin of cultivars, suggesting multilocal olive selection from different genetic pools (Besnard et al. 2001a). More recently, Sarri et al. (2006) conducted a study of genetic relationships based on SSRs among 118 cultivars sampled in several Mediterranean countries and showed that Mediterranean olive germplasm was structured into three main gene pools, corresponding to the western, central and eastern Mediterranean regions. Studies on genetic structure have also been conducted on wild olive trees (or oleasters) or for investigating genetic relationships between wild and cultivated olives (Besnard et al. 2002(Besnard et al. , 2007Breton et al. 2006;Belaj et al. 2010). The purpose of this study was to investigate the genetic structure of the entire OWGB Marrakech, including all 561 accessions, using both nuclear SSRs and chloroplast DNA markers. The genetic structure of Mediterranean olive germplasm was investigated using model-based Bayesian clustering method to assign individuals into defined gene pools according to genetic and geographic criteria. This study also provides critical baseline information for the development of core collections to maximise the representativeness of olive genetic diversity. Our results represent an essential step towards optimised conservation of olive genetic resources and subsequently for genetic association studies to detect QTLs of adaptive and agronomic interest. 
Plant material A total of 561 accessions maintained in the ex-situ World Olive Germplasm Bank at the experimental orchard of Tessaout (OWGB, INRA Marrakech, Morocco) were analysed. These accessions were derived from 14 olive-growing countries: Algeria (43 accessions), Croatia (16), Cyprus (28), Egypt (19), France (12), Greece (13), Italy (167), Lebanon (16), Morocco (40), Portugal (14), Slovenia (9), Spain (89), Syria (71) and Tunisia (24) (Supplementary file, Table S1). OWGB Marrakech was set up following the ResGen project, which included most Mediterranean olive-growing countries. Olive genetic resources for each partner Mediterranean country have been characterised using standardised morphological descriptors. Hence, OWGB Marrakech included different national olive germplasm previously characterised using morphological descriptors alone, or both morphological descriptors and molecular markers. The composition of national olive genetic resources refers to the number of accessions, the representativeness of ancient and modern olive orchards in different areas and agro-ecosystems, and the ratio between main, minor and local cultivars, as supervised by each national institute in charge of their olive genetic resource management. According to a recent study (Sarri et al. 2006), cultivated olive accessions were clustered into three main gene pools respectively distributed in the eastern, central and western Mediterranean Basin. Based on this study, the accessions analysed here are classified into three distinct regional groups: (1) a western Mediterranean group, including accessions originating from Morocco, Portugal, and Spain; (2) a central Mediterranean group, with accessions from Algeria, Tunisia, France, Italy, Slovenia, Croatia, and Greece; and (3) an eastern Mediterranean group, including accessions from Egypt, Cyprus, Lebanon, and Syria (Table 1). DNA preparation and genotyping procedure Genomic DNA was extracted from 100 mg of fresh leaf tissue, as described in Khadari et al. (2008). DNA quality was checked on 2% agarose gel and the DNA concentration was estimated using spectrofluorometry. Genetic diversity and multivariate analysis Based on the scored-size SSR allele dataset, we computed the following genetic diversity parameters: for each SSR locus, the number of alleles (Na) and the observed heterozygosity (Ho) were calculated using Genetix 4.05 (Belkhir et al. 2004). The probability of identity PI (Paetkau et al. 1995) was computed using IDENTITY 4.0 (Wagner and Sefc 1999). The discriminating power (D) was computed as defined by Tessier et al. (1999), D_j = 1 - sum_i [p_i (N p_i - 1)/(N - 1)], where p_i is the frequency of the ith molecular profile revealed at locus j, and N is the number of identified genotypes. Genetic relationships among the single genetic profiles (i.e. 505 genotypes were distinguished based on SSRs, see below) were studied via principal component analysis (PCA) using the Genalex 6 program (Peakall and Smouse 2005). Genotypes were plotted on the first two principal axes to visualise genetic affinities. Bayesian model-based clustering approach To identify the genetic structure in Mediterranean olive germplasm, a model-based analysis was performed using STRUCTURE 2.2.3 (Pritchard et al. 2000) based on SSR data. This program assumes Hardy-Weinberg equilibrium and linkage equilibrium within clusters. The analysis was done without prior information concerning the geographic origin of the accessions.
The STRUCTURE algorithm was run using the ''admixture model'', assuming a ''correlation among allele frequencies'', with 10 independent replicate runs per K value (number of clusters) ranging from 1 to 10. Each run involved a burning period of 100,000 iterations, and a post burning simulation length of 1,000,000. Validation of the most likely number of clusters K was performed using the statistics proposed by Evanno et al. (2005). Q-matrix values for individual runs for each K were analysed by the CLUMPP 1.1 program (Jakobsson and Rosenberg 2007). Matrixes of individuals are represented as colored histograms of q values were constructed using DISTRUCT 1.1 (Rosenberg 2004). Core collection sampling The M-strategy (Maximisation strategy) proposed by Schoen and Brown (1993) and implemented in the MSTRAT software (Gouesnard et al. 2001) was used to generate core olive collections that maximise the number of observed alleles in the data set. The M-strategy consists in searching, among all potential core collections, for the best sample size that can capture all observed alleles with the highest genetic diversity score. After having determined the optimal size of the core subsets, 1,000 core collections were generated independently using the redundancy option with 10 independent runs and 1,000 iterations. Based on the 1,000 core collections obtained, we selected the reference core collection according to its Nei diversity index (Nei 1987) as first criterion, and then on the basis of its composition (representativeness among three gene pools and maternal lineages of accessions). Core subsets were constructed using all the observed alleles and without private alleles (observed once) to limit the impact of the later on genetic structure and on linkage disequilibrium (LD; Barnaud et al. 2006;Aranzana et al. 2010). The distribution of the selected core accessions was plotted on the first two PCA axes. SSR polymorphism Based on SSR data (i.e. 210 alleles for the 12 loci; Table 2), the analysis of 561 accessions revealed 505 genotypes. The number of alleles detected per locus ranged from 8 at the DCA15 locus to 30 at the DCA04 locus. The number of single genotypes identified per locus ranged from 14 to 95, with an average of 50.4 genotypes. Frequencies observed for the 210 alleles ranged from 0.001 to 0.71, with an average of 0.05. There were 151 alleles (71.9%) with frequencies equal to or less than 5%, which were considered as rare alleles (Supplementary file, Table S2). The reliability of the 24 alleles observed once was also checked by examining their occurrence in Mediterranean oleaster populations (unpublished data). The observed (Table 2). Based on the 12 SSR loci, the cumulative probability identity was 2.55 9 10 -14 , indicating that the probability of two randomly sampled olive trees having the same genotype was extremely low. Table S3). In the 127,260 pairwise comparisons among the 505 identified genotypes, only 366 comparisons (0.28%) represented closely related genotypes that differed by one to seven dissimilar alleles, whereas the remaining pairwise genotypes were distinguished by 8-44 dissimilar alleles (Supplementary file, Fig. S1). The highest SSR dissimilarity (44 dissimilar SSR alleles) was noted in only one genotype pair, i.e. ''Souidi'' from Algeria and ''Baladi'' from Lebanon. 
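The allele-maximisation idea behind the M-strategy described above can be illustrated with a simple greedy sketch; this is not the MSTRAT algorithm itself (which uses iterated stochastic search with a redundancy criterion over many candidate cores), only a toy analogue on hypothetical data showing how a small core can capture all observed alleles.

# Greedy toy analogue of allele maximisation (hypothetical data; MSTRAT itself works differently).
# Each accession is represented by the set of (locus, allele) codes it carries.
accessions = {
    "acc1": {("DCA04", 150), ("DCA04", 154), ("DCA15", 90)},
    "acc2": {("DCA04", 150), ("DCA15", 94), ("GAPU71B", 120)},
    "acc3": {("DCA04", 160), ("DCA15", 90), ("GAPU71B", 124)},
    "acc4": {("DCA04", 154), ("DCA15", 94), ("GAPU71B", 120)},
}
target = set().union(*accessions.values())   # all alleles observed in the full collection

core, captured = [], set()
while captured != target:
    # pick the accession that adds the largest number of not-yet-captured alleles
    best = max(accessions, key=lambda a: len(accessions[a] - captured))
    if not accessions[best] - captured:
        break
    core.append(best)
    captured |= accessions[best]

print("core:", core, "| alleles captured:", len(captured), "/", len(target))

In the study itself, 1,000 candidate cores of the optimal size were generated and the reference core was then chosen on Nei's diversity index and on its composition across the three gene pools and the maternal lineages.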
Genetic structure of Mediterranean olive tree accessions Using model-based Bayesian clustering, the genetic structure of Mediterranean olive genotypes was examined under the models with K = 2-6 clusters (Fig. 1). According to the K = 2 model, most olive accessions from Morocco, Portugal and Spain were distinguished from other Mediterranean olive trees. At K = 3, cultivars from France, Algeria, Tunisia, Italy, Slovenia, Croatia and Greece (central Mediterranean group) were mostly assigned to cluster 3 and distinguished from cluster 2, which mostly included accessions from the eastern Mediterranean region. Actually, cluster 3 also included several accessions with mixed inferred ancestry from the western and eastern Mediterranean gene pools (Table 3, Fig. 1, and Supplementary file Table S4). At K = 4-6, the Mediterranean olive germplasm, structured into three gene pools, i.e. eastern, western and central Mediterranean groups, was not modified since the fourth, fifth and sixth inferred ancestry gene pools were detected mainly in accessions of the central Mediterranean group (Fig. 1). Based on the highest DK and H 0 values, K = 3 appeared to be the best model for olive genetic structure ( Fig. 1; and see Supplementary file, Fig. S2), supporting the existence of the three gene pools described above. For groups 1 and 3, most genotypes were classified into one cluster based on the shared ancestry values, which were higher than 0.80; while for group 2, most genotypes were admixed (Fig. 2, Table 3, and Supplementary file Table S4). The three groups defined by model-based Bayesian clustering were plotted on the two first PCA axes (Fig. 3). The western and eastern groups were distinguished by both of the axes, while the central Mediterranean group was in an intermediate position, as shown by the admixed inferred ancestry origins of their genotypes (Fig. 2). Construction of nested core collections maximising diversity When comparing the two strategies to capture the maximum genetic diversity, the sampling efficiency of the M-strategy was always superior to a random strategy and the relative efficiency was highest for small-size samples (Fig. 4). Based on the M-strategy, the total allelic diversity could be captured with 67 genotypes (Fig. 4A). After exclusion of alleles observed only once, a minimum of 58 genotypes was necessary to capture the allelic diversity (based on 186 alleles; Fig. 4B). Under the optimal size of a core collection (67 accessions) capturing all 210 alleles, 10 cores (G01 to G10) obtained by the M-strategy were classified according to their observed heterozygosity and Nei's index diversity. The genetic diversity of the best 10 cores ranged from 0.77 to 0.78 for the observed heterozygosity and from 9.55 to 9.73 for the Nei's index diversity (Table 4). Each of the 10 cores (G01 to G10; Table 4) consisted of accessions from the western Mediterranean (group 1;14.40 ± 2.17), central Mediterranean (group 2; 38.10 ± 2.68) and eastern Mediterranean gene pools (group 3; 14.5 ± 1.90; Table 4). Within each of the 10 cores, 60-63 accessions displayed cpDNA lineage E1, while western cpDNA lineages E2 and E3 were both represented by two to four accessions (Table 4). To illustrate the representativeness of the reference core collection, the position of the G01_67 core accessions is presented on the first two axes of the PCA (Fig. 3). 
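The choice of K = 3 above rests on the ad hoc DK (Delta-K) statistic of Evanno et al. (2005). As a reminder of how it is usually computed, the sketch below applies the common definition, the mean absolute second-order difference of ln P(D) across replicate runs divided by its standard deviation, to invented log-likelihood values; the real values are those summarised in Supplementary Fig. S2, and details of the averaging convention may differ slightly from the original paper.

# Sketch of the Evanno et al. (2005) Delta-K computation on hypothetical STRUCTURE outputs.
import numpy as np

np.random.seed(0)
# ln P(D) for 10 replicate runs at each K = 1..6; the values below are invented.
lnP = {K: np.random.normal(loc=-52000 + 800 * min(K, 3), scale=30, size=10)
       for K in range(1, 7)}

mean = {K: np.mean(v) for K, v in lnP.items()}
sd = {K: np.std(v, ddof=1) for K, v in lnP.items()}

delta_K = {}
for K in range(2, 6):                       # Delta-K is undefined at the end points
    second_diff = abs(mean[K + 1] - 2 * mean[K] + mean[K - 1])
    delta_K[K] = second_diff / sd[K]

best_K = max(delta_K, key=delta_K.get)
print({K: round(v, 2) for K, v in delta_K.items()}, "-> best K:", best_K)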
Under the optimal size of a core collection (58 accessions) capturing alleles scored at least twice (186 alleles), the genetic diversity of the best 10 cores ranged from 0.75 to 0.77 for the observed heterozygosity and from 9.55 to 9.73 for the Nei's index diversity (Supplementary file; Table S5). Discussion Is the ex-situ OWGB Marrakech representative of the Mediterranean olive germplasm? All olive accessions analysed in this study are maintained in an ex-situ collection considered as being representative of Mediterranean olive germplasm since it was set up with genetic resources from 14 Mediterranean countries. Each national institute in charge of olive genetic resource management has defined the composition of their national germplasm. However, sampling criteria such as the number of accessions, the representativeness of ancient and modern olive orchards in different areas and agro-ecosystems, and the ratio between main, minor and local cultivars, have not been uniformly used by the different Mediterranean partner countries (Caballero et al. 2006). Because of this strategy for setting up the studied ex-situ collection, we noted discrepancies in the composition of different national olive genetic resource sets, e.g. Spanish and Italian germplasm included 89 and 167 accessions, respectively, while Moroccan germplasm included only 40 accessions. Despite such discrepancies, is the ex-situ OWGB Marrakech representative of Mediterranean olive germplasm? We observed no significant independence between the number of alleles detected in 118 Mediterranean cultivars (Sarri et al. 2006) and the allelic richness in OWGB Marrakech, which was computed for 118 individuals using the standardised G value (see Supplementary file; Table S6). Fig. 1 Inferred population structure for K = 2 to K = 6 as the presumed number of subpopulations within the Mediterranean ex situ collection, including 561 accessions classified into 505 multilocus SSR profiles. CLUMPP H' (Jakobsson and Rosenberg 2007) represents the similarity coefficient between runs for each K, and DK represents the ad-hoc measure of (Evanno et al. 2005) Further, we noted 24 alleles (11.4%) detected once and 151 alleles (71.9%) were considered as rare. Compared to wild olives sampled around the Mediterranean Basin and genotyped by Breton et al. (2006), the number of alleles at eight SSR loci (that were shared with the present study) was not significantly different from the allelic richness observed in the OWGB Marrakech computed for 166 individuals using the standardised G value (see Supplementary file, Table S6). Otherwise, although not statistically tested, we noted a relatively lower observed heterozygosity in Mediterranean wild olives (H O = 0.67) than in OWGB Marrakech. Hence, the genetic diversity SSR markers as Mediterranean olive characterisation tools Among the 12 SSR loci used in this study, the three loci (DCA09, DCA04, and DCA03) with the highest discriminating power were able to distinguish about 80% of the 505 defined genotypes (see Supplementary file, Table S7). This level of discrimination could be even higher without the closely related genotypes, which likely correspond to mutants of clones, as previously shown (Khadari et al. 2008;Baali-Cherif and Besnard 2005). Olive discrimination based only on these three most discriminating loci was validated by the low probability of classifying two random accessions under the identical SSR profile (PI = 2.72 9 10 -5 ). 
This probability decreased substantially when all 12 SSR loci used were taken into account (PI = 2.55 9 10 -14 ). Six of these loci were included in the best consensus set of SSR markers (Baldoni et al. 2009) that has already been used for genetic structure studies (Sarri et al. 2006), and our study again confirmed that these markers are reliable tools for olive characterisation. When considering accessions from the different Mediterranean countries, the olive cultivars were not distinguished with the same efficiency. All Spanish accessions were characterised as distinct genotypes (89 accessions corresponding to 89 distinct SSR profiles). Similar results were noted for Italian olive germplasm since 167 accessions were classified into 165 genotypes. These results may be attributed to previous characterisation, based on morphological descriptors (Barranco et al. 2005), and molecular markers such as RAPDs (Belaj et al. 2003) and SSRs (Belaj et al. 2003(Belaj et al. , 2004, before selecting accessions for introduction into OWGB Marrakech. In contrast, Syrian germplasm displayed 14 of the 26 cases of synonymy detected in OWGB Marrakech and a relatively higher number of Syrian accessions pairs displayed closely related genotypes (Supplementary file, Table S3). Beyond the fact that these related genotypes may be derived from local selection with a narrow genetic base, these observations could be explained by the scattered and partial olive characterisation (Baldoni et al. 2009). However, it is also very likely that mutations on SSR alleles have led to the distinction of very similar profiles for accessions belonging to the same original genotype, particularly when a clone is vegetatively multiplicated for a very long time (Khadari et al. 2008;Lopes et al. 2004). Even ramets of the same individual have been distinguished in relict Laperrine's olive populations, demonstrating that SSR mutations can be relatively frequent under some environmental conditions (Baali-Cherif and Besnard 2005). One can also expect that the oldest cultivars have accumulated more mutations than more recently selected cultivars, and the occurrence of ancient clones in the Near East (e.g. Zohary and Spiegel-Roy 1975) could thus explain our observations. Furthermore, some accessions from different countries were classified under the same multilocus SSR profile, e.g. the ''Zmj1'' accession from Morocco and the ''Zabarka'' accession from Croatia. Such cases of cultivar identity from different origins might be related to their dissemination to different cropping areas where growers might have given them different local names (Trujillo et al. 1995;Besnard et al. 2001b;Sarri et al. 2006). In olive germplasm collections, over the last two decades, substantial efforts have been focused on identifying these redundant accessions using morphological and molecular data (Belaj et al. 2004;Barranco et al. 2005). Mediterranean olive germplasm is structured into three main gene pools Our study showed that Mediterranean olive germplasm was structured into three main gene pools, which strongly matched three distinct geographic areas, i.e. western, central and eastern Mediterranean regions. First, we showed Fig. 4 Sampling efficiency based on the ability to capture the genetic diversity via the M-strategy (M-method) and a random strategy: A based on the total number of alleles (210), and B after exclusion of observed alleles once. 
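The cumulative identity probabilities quoted above simply follow from multiplying per-locus PI values. The toy arithmetic below shows how three highly informative loci can already reach the 10^-5 range while twelve loci push the product many orders of magnitude lower; the per-locus values are invented, not those of Table 2.

# Toy illustration of cumulative probability of identity (per-locus PI values are invented).
per_locus_PI = [0.031, 0.028, 0.033, 0.06, 0.07, 0.08, 0.09, 0.10, 0.12, 0.14, 0.16, 0.18]

cum3 = 1.0
for pi in sorted(per_locus_PI)[:3]:      # three most discriminating loci (lowest PI)
    cum3 *= pi

cum12 = 1.0
for pi in per_locus_PI:                  # all twelve loci
    cum12 *= pi

print(f"3 loci:  {cum3:.2e}")            # on the order of 1e-5
print(f"12 loci: {cum12:.2e}")           # on the order of 1e-14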
In each case, the size of the proposed core collection was determined that K = 3 was the best genetic structure model, as also reported by Baldoni et al. (2009) and Sarri et al. (2006). Second, on the basis of this genetic structure, most accessions clearly clustered according to their geographic origin. Numerous genetic studies have reported genetic differentiation between western and eastern Mediterranean areas (Besnard et al. 2002(Besnard et al. , 2007Breton et al. 2006;Sarri et al. 2006). Furthermore, Mediterranean cultivars analysed by RAPD markers showed relative differentiation among Spanish and Italian varieties (Besnard et al. 2001a), and a clear distinction between Spanish varieties and those from Greece and Turkey (Owen et al. 2005). Such a genetic structure indicates a correlation between the genetic differentiation of olive trees and their geographic distribution. Despite this genetic structure, we observed dominance of the eastern maternal lineage (95% of E1 vs. only 5% of western maternal lineages E2 and E3). As previously shown in Mediterranean wild and domesticated olives (Besnard et al. 2002(Besnard et al. , 2007Breton et al. 2006), all eastern Mediterranean accessions carried E1, whereas lineages E2 and E3 were only observed in the western Mediterranean Basin, but with a relatively low frequency in cultivars (16%; Besnard et al. 2001a). These cpDNA lineages again confirm that cultivated olive has been selected from different gene pools from both eastern and western regions of the Mediterranean Basin (Besnard et al. 2001c). A set of 67 accessions was sufficient to capture the allelic diversity The purpose of core collections is to facilitate the use of germplasm by providing a set of accessions displaying the genetic diversity available in the larger collection (Brown 1989). Our approach was based exclusively on genetic criteria, by capturing most of the diversity in a small set of accessions (58) after the exclusion of alleles observed only once. This sampling could be extended to a larger set of 67 accessions to capture the total allelic diversity (210 alleles). When examining the cultivar composition in each of the 10 best core accessions, we noted that about 40 cultivars were in common, whereas the remaining were different in each core. These investigations indicated the need to include complementary criteria such as phenotypic, agronomic and adaptive traits, but also the cultivar value and their importance at historical, economic and sociocultural levels to ensure optimal management. Beyond this variability, it is also essential to assess genetic structures and pedigree relationships within the core collection used for genetic association studies and for identifying QTLs related to phenotypic traits (Barnaud et al. 2006;Aranzana et al. 2010). Conclusion The present study demonstrated that the OWGB Marrakech collection gives an accurate picture of Mediterranean olive germplasm diversity. We provided a molecular database that should facilitate management of this germplasm. In addition, core collections will also be very useful for developing new breeding strategies for adaptive and agronomic traits through genome-wide association studies (Myles et al. 2009). Acknowledgments The authors would like to thank Dr. Ph. Chatelet for his kind remarks on the early version of the manuscript, and M. Latreille for her help in the genotyping. 
They also acknowledge Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
2014-10-01T00:00:00.000Z
2011-09-30T00:00:00.000
{ "year": 2011, "sha1": "90d1181ee63d1189a33d71672d79d36d1586e26d", "oa_license": "CCBYNC", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10709-011-9608-7.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "90d1181ee63d1189a33d71672d79d36d1586e26d", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
264471037
pes2o/s2orc
v3-fos-license
High Prevalence of Respiratory Co-Infections and Risk Factors in COVID-19 Patients at Hospital Admission During an Epidemic Peak in China Background Recent research highlights the contribution of co-infections to elevated disease severity and mortality among COVID-19 patients. Given China’s decision to ease epidemic prevention policies in December 2022, a comprehensive exploration of the risks and characteristics of co-infections with respiratory pathogens becomes imperative. Methods We conducted a retrospective analysis of 716 COVID-19 patients admitted to a primary hospital in China. The detection of twelve respiratory pathogens was conducted using qPCR, and the potential risk factors were analyzed through Cox regression analysis. Results Within this cohort, 76.82% of cases exhibited co-infection involving eleven distinct pathogens. Among these, bacterial co-infections were observed in 74% of cases, with Streptococcus pneumoniae and Haemophilus influenzae emerging as the most prevalent bacterial co-infection agents. Additionally, 15% of cases presented with viral co-infections, predominantly involving influenza A virus and respiratory syncytial virus. Nevertheless, our investigation suggested that there might be some inappropriate antibiotic use in treatments. Furthermore, risk analysis unveiled dyspnea, hypoproteinemia, low lymphocyte counts, and co-infection with Mycoplasma pneumoniae as prominent risk factors for COVID-19 inpatients. Conclusion Our findings underscore a significant occurrence of co-infections among COVID-19 patients during the epidemic, emphasizing the need for enhanced antibiotic stewardship. Effective management strategies should encompass respiratory status, nutritional aspects, and vigilance towards co-infections involving M. pneumoniae during COVID-19 treatment. This study underscores the significance of comprehensive management protocols to address the multifaceted challenges presented by co-infections in COVID-19 patients. Introduction Since the first report of severe and critical acute respiratory syndrome coronavirus-2 (SARS-Cov-2) in December of 2019 in Wuhan, Hubei, China, it has rapidly spread to all regions of the world. 1 As of now, there have been over 7.66 billion confirmed SARS-CoV-2 infections and at least 6.93 million reported deaths.(Last update: 16 May 2023, https://www.who.int/emergencies/diseases/novel-coronavirus-2019).The World Health Organization (WHO) declared the end of COVID-19's emergency phase on May 5, 2023.However, due to insufficient vaccination rates, immune escape caused by mutations of SARS-CoV-2, and the imbalance of medical resources in different regions, the coronavirus disease 2019 (COVID-19) is expected to remain a significant threat to human health for an extended period in the future. 2receding viral infections, such as influenza and COVID-19, can impair antibacterial immunity, providing an opportunity for other colonizing pathogens, such as Streptococcus pneumoniae, Klebsiella pneumonia, to cause life-threatening second bacterial/viral pneumonias. 3The proportions of co-infections varies across studies, regions and populations, but it can exceed 50% deceased COVID-19 patients. 4This highlights the significance of co-infections as a crucial prognostic factor among these patients.The most frequently identified bacteria co-infections in COVID-19 patients include Klebsiella pneumoniae, Streptococcus pneumoniae, Staphylococcus aureus, Haemophilus influenzae. 
5,6The most common co-infecting viruses are influenza A virus (FluA), influenza B virus (FluB), respiratory syncytial virus (RSV) and coronavirus. 5,6ntibiotics are considered the most effective means of treating bacterial infections.However, there are concerns that the current epidemic has increased the risk of antibiotic abuse, potentially leading to an increased risk of drug-resistant bacterial infections. 7Although some reports on bacterial and viral co-infection exist, 6,8 further investigation and research are still necessary and urgently needed in China, where the epidemic peak occurred after the adjustment of epidemic prevention policies in December, 2022.Understanding the occurrence of co-infection, evaluating the rationality of antibiotic use, and analyzing risk indications and risk factors in COVID-19 treatment are of paramount significance for the scientific management of COVID-19 in the future. This study conducted a retrospective analysis of the clinical features of 716 COVID-19 inpatients in Guangxi province, China, during December 2022 to January 2023.Throat swabs were used for detecting the presence of 12 respiratory pathogens, which are most common and clinically significant in local hospital, through quantitative real-time polymerase chain reaction PCR (qPCR).Additionally, the study analyzed risk indicators that significantly influenced patient prognosis. Clinical Data Collection This study was conducted at the Affiliated Hospital of Youjiang Medical University for Nationalities in Guangxi, China, which is one of the important medical institutions in Southwest China.From December142022to January 30 2023, the COVID-19 patients admitted with COVID-19 were included in this study.The diagnosis and classification of COVID-19 followed the guidelines of the Diagnosis and treatment plan for SARS-Cov2 infection (10th Edition). 9Patient information, clinical manifestations, laboratory data, and treatment details were collected.All procedures involving human materials and electronic medical information were approved by the Medical Ethics Committee of Affiliated Hospital of Youjiang Medical University for Nationalities (Approval number: YYFY-LL-2023-119).Because all clinical data of hospitalized COVID-19 patients data used were fully anonymized, and no personal identifiers were accessible to the research team, the requirement for consent was waived. Statistical Analysis The categorical variables were shown as numbers (percentages) and the continuous variables were shown as medians (interquartile ranges [IQRs]).Wilcoxon rank-sum test, Pearson's Chi-squared test, and Fisher's exact test were used to compare the distribution of categorical and continuous variables between the cohorts where appropriate.Chi square test was used to evaluate the correlation between categorical variables and Cramer's V coefficient was used to assess the strength of association.The visualization of intersecting co-infection pathogens was performed by UpSetR (v1.4.0). 10 The package StepReg (v1.4.4) and survival (v3.5.0) was used to perform univariable and multivariable Cox regression analysis.Forward steps method was used to select the major risk factors.All statistical analysis was conducted using R (v4.2.1). Antiviral Treatments and Clinical Outcomes Our hospital's antiviral treatment and clinical outcomes for COVID-19 patients is primarily based on the "Diagnosis and treatment plan for SARS-Cov2 infection (10th Edition). 
9" Patients are categorized based on the severity of clinical symptoms.General treatment includes addressing the patient's energy and nutritional intake, maintaining water and electrolyte balance, and managing fever and phlegm.Antiviral treatment is mainly administered based on the patient's condition and may include medications such as Nirmatrelvir/Ritonavir tablets, Remdesivir tablets, monoclonal antibodies, and others.For severe and critical patients, timely organ function support is provided.Patients are categorized based on clinical indicators into four groups: mild, moderate, severe, and critical.Mild cases primarily exhibit upper respiratory tract infections.Patients with a continuous high fever for three days along with cough and shortness of breath, but a respiratory rate of <30 breaths per minute and resting oxygen saturation >93%, are diagnosed as moderate.When the respiratory rate exceeds 30 breaths per minute, oxygen saturation falls below 93%, and there is progressive exacerbation of pulmonary lesions, the diagnosis is severe.Critical cases are diagnosed when respiratory failure, shock, or other organ dysfunction occurs.Patients can be considered for discharge when their vital signs stabilize, and there is a significant improvement in pulmonary lesions, allowing for a transition to oral medication treatment. Characteristics of COVID-19 Patients As shown in Table 1, this study included a total of 716 COVID-19 patients who were hospitalized at Youjiang Affiliated Hospital for Nationalities during the period between December 14, 2022, and January 30, 2023.The selection of patients for the study was based on the diagnostic criteria outlined in the Diagnosis and Treatment Plan for SARS-Cov2 Infection (10th Edition). 9The data collected for analysis encompassed basic background information, information on any complications experienced by the patients, the medical interventions they received, and their clinical features at baseline.Among the 711 patients, a total of 440 (61%) were male, and the median age was 64 years old, with an interquartile range (IQR) of 49 to 73 years.Remarkably, a substantial majority of the patients, precisely 661 (92%) of them, had been vaccinated.Based on clinical typing, it was observed that 611 (65%) of the patients had moderate infections, 78 (11%) had severe infections, and 27 (3.8%)had critical infections.Over the course of the study, a total of 45 patients (6.3%) unfortunately died.In terms of hospital stay, the median length of hospitalization (LOHS) for the patients was 7 days, with an interquartile range (IQR) of 5 to 9 days.An investigation into the comorbidities exhibited by the patients revealed that the most common conditions were hypertension, affecting 184 patients (26%), and hypoproteinemia, affecting 172 patients (24%).Furthermore, an analysis of the predominant clinical signs among the patients indicated that the most common symptoms reported were cough, experienced by 524 patients (73%), and fever, reported by 217 patients (30%).In addition, we conducted a detailed analysis of the immunological characteristics among hospitalized COVID-19 patients with different disease severity grades (Table S1).The results showed that clinically key inflammatory markers, such as C-reactivate protein and D-dimer, showed a significant increase in severe and critical patients. 
Co-Infection with Respiratory Pathogens in COVID-19 Patients To investigate the co-infections in COVID-19 patients, we employed multiple fluorescence quantitative PCR to detect the presence of 12 crucial respiratory pathogens.The findings revealed that a substantial proportion of patients, up to 77% (550 out of 716), were affected by co-infections.Notably, we observed significant differences in the length of hospital stay and leukocyte counts between COVID-19 patients with co-infection and those without co-infection (refer to Table 1 for details).A detailed breakdown of the co-infection types showed that 529 patients (74%) had bacterial co-infections, while 108 patients (15%) had viral co-infections.Additionally, we identified 4 patients who were co-infected with Mp. Co-Infection with Patient Age and Disease Severity Age has a notable impact on human immunity status.In this study, all twelve respiratory pathogens were detected in the 0-1 year age group, while eleven of them, except M. pneumoniae, were detected in individuals aged ≥65 years.Notably, the positive rate of S. pneumoniae was close to 50% (11/23) in cases aged 1-12 years old, significantly higher than in other age groups.Moreover, the positive rates of S. aureus and FluA in cases under 18 years old were significantly higher than in older age groups (refer to Table S3).In Table 3, we present the detection of several pathogens in the different disease categories, including moderate, severe, and critical cases.However, FluB, Mp, and HRV were exclusively detected in moderate disease cases.Interestingly, the detection rates of S. pneumoniae, RSV, and P. aeruginosa were significantly higher in moderate, severe, and critical disease cases, respectively, indicating potential associations between these pathogens and disease severity.The analysis of antibiotic usage revealed that 65.5% (469 out of 716) of COVID-19 patients received antibiotic treatment in this cohort.Among the 550 cases that detected respiratory pathogens, 64.9% (357) of them received antibiotic treatment.And β-lactam antibiotics, cephalosporins, and quinolone antibiotics were the most widely used (refer to Figure S2).Surprisingly, even among cases with only virus co-infection (11 out of 21 cases) and cases without detection of any other respiratory pathogens (112 out of 166 cases), antibiotic treatment was administered to 52.4% and 67.5% of them, respectively (refer to Table S4).It is important to note that some patients might have received antibiotics for infections in other parts of the body (data not collected in this study).However, for most patients, the use of antibiotics might have been intended to prevent bacterial co-infections.This suggests that the use of antibiotics in primary hospitals may have exhibited a certain degree of irrationality. Risk Factors Analyses of SARS-Cov2 Infection Out of the 670 cases included in the risk factor analysis, 44 cases resulted in mortality, while 46 cases were excluded due to missing data.All the collectable clinical factors were included in the Cox's proportional hazards regression analysis, and forward steps method was used to select the major risk factors.The survival curve indicated significant survival differences between corresponding factor strata for dyspnea (p < 0.001), hypoproteinemia (p < 0.01), lymphocytes (p < 0.01), and the presence of M. 
pneumoniae infection (p < 0.001) (refer to Figure 2).Univariable and multivariable Cox regression analyses were employed to identify potential risk factors.The results of the univariable Cox regression analysis demonstrated significant associations between dyspnea, hypoproteinemia, lymphocyte count, and the presence of M. pneumoniae infection with COVID-19 case mortality (refer to Table 4).Upon adjusting for other factors, cases with dyspnea exhibited a higher risk of death compared to cases without dyspnea (adjusted HR: 2.09, 95% CI: 1.11-3.95).Similarly, individuals with hypoproteinemia had an increased risk of death, with an adjusted HR of 1.95 (95% CI: 1.03-3.66).The lymphocyte count significantly correlated with the risk of death in COVID-19 patients.Those with lymphocyte counts lower than 0.8×109 cells/L faced an increased risk of death (adjusted HR [95% CI]: 1.99 [1.06-3.75]).However, individuals with lymphocyte counts greater than 4×109 cells/L did not exhibit a significant difference in the risk of death compared to those within the normal lymphocyte count range.Remarkably, co-infection with M. pneumoniae was associated with a substantially higher risk of death compared to cases without this co-infection (adjusted HR [95% CI]: 26.03 [3.29-206.06]). Chi-square test was employed to analysis whether the dyspnea, low lymphopenia and hypoproteinemia are associated with the bacterial co-infections (refer to Table 5).The results showed that lymphopenia is associated with the co-infection of H. influenza, and hypoproteinemia is associated with the S. pneumonia, H. influenza, P. aeruginosa.This result indicates a significant correlation between the co-infection of these bacteria and two key clinical indicators, namely hypoproteinemia and lymphopenia, closely associated with COVID-19 outcomes.Furthermore, our study showed that antibiotics usage is associated with the hypoproteinemia and lymphopenia.This suggests that the use of antibiotics may Dovepress Zhu et al be of significant importance in improving clinical in patients with bacterial co-infections.In addition, the results indicate a significant correlation between C-reactive protein and co-infections with S. pneumoniae and P. aeruginosa.Finally, in line with Cox regression analysis results, no significant association was observed between antibiotic use and the risk of death. Discussion Over the past three years, China has implemented stringent, rapid, and coordinated control measures to curb the spread of SARS-CoV-2.These measures have proven effective in controlling COVID-19 outbreaks and safeguarding public health.As the epidemic situation evolves and with the increased vaccination coverage and accumulated experience in prevention and control, China has gradually relaxed its strict epidemic control strategy. 11However, these policy changes have resulted in a peak of COVID-19 infections, 12 signaling the need for continuous vigilance.Co-infection is a significant factor influencing the poor prognosis of COVID-19 patients. 13,14Despite several studies exploring co-infections with SARS-CoV-2, 5,6 there remains a dearth of research focusing on China's primary hospitals after the adjustment of COVID-19 prevention and control policies.Such research can provide valuable insights to guide effective management strategies and improve patient care in the face of changing epidemic dynamics. 
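The survival modelling reported above (Cox proportional hazards with forward stepwise selection, performed in R with StepReg and survival) can be outlined in Python as follows. This is a hedged stand-in, not the actual analysis: the data are simulated, the column names are hypothetical, and the AIC-based forward step is only one way to mimic forward selection.

# Hedged Python outline of a Cox regression with forward selection (the paper used R packages).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300
X = pd.DataFrame({
    "dyspnea":         rng.integers(0, 2, n),
    "hypoproteinemia": rng.integers(0, 2, n),
    "lymphopenia":     rng.integers(0, 2, n),
    "mp_coinfection":  rng.binomial(1, 0.05, n),
})
# Toy hazard: elevated with dyspnea, hypoproteinemia, lymphopenia and M. pneumoniae co-infection.
rate = 0.02 * np.exp(0.8 * X["dyspnea"] + 0.7 * X["hypoproteinemia"]
                     + 0.7 * X["lymphopenia"] + 2.0 * X["mp_coinfection"])
time = rng.exponential(1.0 / rate)
df = X.assign(time=np.minimum(time, 45.0), death=(time < 45.0).astype(int))

def aic(covariates):
    cph = CoxPHFitter()
    cph.fit(df[covariates + ["time", "death"]], duration_col="time", event_col="death")
    return -2.0 * cph.log_likelihood_ + 2.0 * len(covariates), cph

# Simple forward selection on the partial-likelihood AIC (a stand-in for StepReg's forward steps).
remaining = list(X.columns)
selected, best = [], float("inf")
while remaining:
    scores = {c: aic(selected + [c])[0] for c in remaining}
    cand = min(scores, key=scores.get)
    if scores[cand] >= best:
        break
    best = scores[cand]
    selected.append(cand)
    remaining.remove(cand)

_, model = aic(selected)
print("selected covariates:", selected)
print(model.hazard_ratios_)   # adjusted hazard ratios of the retained factors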
In this study, we conducted the detection of 12 important respiratory pathogens in hospitalized COVID-19 patients.The results revealed that more than 77% (550 out of 716) of the cases had co-infections with at least one pathogen.This finding indicates a notably higher co-infection rate in this cohort compared to the majority of previous reports.For instance, a metaanalysis of 118 studies reported a pooled prevalence of co-infection as 19% (95% CI: 14%-25%). 4Sreenath et al reported a co- infection rate of 46.5% in COVID-19 patients in India, with 13 pathogens detected. 5Several reasons contribute to the high rate of co-infection observed in our study.First, the cases included this study were moderate or severe, and such patients tend to have a relatively higher co-infection rate. 4Second, the use of a high sensitivity qPCR detection method for broad-spectrum respiratory pathogens increased the positive detection rate compared to culture or PCR targeting limited strains.Third, the high density of patients in the hospital during the peak of the epidemic might have increased inter-personal transmission.It is important to note that a Chinese cohort study reported an even higher co-infection rate of 94.2% (242 out of 257 cases), with the detection of 39 species. 6These significant differences between various studies could be attributed to the characteristics of the cohort, the spectrum of species detected, the regional health level, and other relevant factors.Bacterial co-infection has been identified as a significant influencing factor leading to poor prognosis in patients with viral pneumonia. 15Consistent with some previous studies, 6,16 we observed that S. pneumonia, Hib and K. pneumonia were the most common co-infecting bacterial pathogens that detected in this COVID-19 cohort.These three bacterial species are commonly considered as oral and upper respiratory colonizers, which may elevate the risk of lower respiratory tract infections, particularly in patients with compromised immunity.Notably, S. pneumoniae was found to be more prevalent in COVID-19 cases than in healthy controls. 17Its colonization has been associated with alterations in immune responses and an increased risk of virus acquisition. 18Hib is known to cause pneumonia or acute meningitis, particularly in infants or children under the age of five. 19Furthermore, it has been reported that co-infection with hypervirulent K. pneumoniae can lead to fatal sepsis in COVID-19 patients. 20We observed multiple co-infections with various combinations of pathogens in our study, such as S. pneumoniae -Hib (59, 8.24%), S. pneumoniae -K.pneumoniae (22, 3.07%), S. pneumoniae -Hib -K.pneumoniae (19, 2.65%), and S. pneumoniae -Hib -P.aeruginosa (9, 1.26%), among others.The elevation of C-reactive protein and D-dimer, prominent inflammatory markers, was notably higher in severe and critical COVID-19 patients (refer to Table S1).Our findings underscore a significant correlation between CRP and co-infections involving S. pneumoniae and P. aeruginosa, aligning with prior research (refer to Table 5).Elevated levels of these markers are closely associated with bacterial coinfections in COVID-19, particularly with Streptococci, and pose a significant risk for severe cardiac complications, notably myocarditis. 
21The hypervirulence and multi-antibiotic resistance of these co-infected bacteria pose challenges in the treatment of COVID-19.3][24] Based on these findings, it is recommended that a multivalent pneumonia vaccine be considered as a preventive measure to protect against pneumonia caused by the common co-infected bacteria during the COVID-19 pandemic. SARS-CoV-2 co-infection with another virus, such as influenza and RSV, has been significantly associated with an increased risk of death compared to SARS-CoV-2 monoinfection. 25In this study, virus co-infection was detected in 15% of COVID-19 cases.It is reported that the implementation of COVID-19 prevention and control measures can concurrently lower the infection rate of other pathogens like influenza. 26Our study indicated that the most prevalent coinfection viruses were FluA (42, 5.87%), RSV (40, 5.59%), Adenovirus (27, 3.77%), and FluB (13, 1.82%).Complex interactions between different viral infections have been reported in the literature.For instance, Lei Bai et al found that co-infection with influenza A (FluA) enhanced SARS-CoV-2 infection, 27 while another study showed that FluA and RSV could inhibit the replication of SARS-CoV-2. 28Further research is needed to reveal the specific nature of these interactions.However, the significant risk posed by multiple viral infections cannot be ignored.Influenza vaccinations significantly reduce risk of hospitalization and death, 29,30 and vaccination remains the best means of preventing multiviral infections, particularly among high-risk groups. 31he misuse and overuse of antibiotics have long been recognized as significant public health concerns, especially in the post-COVID-19 era. 7On one side, such practices are major contributors to the emergence of antibiotic resistance, which can complicate the treatment of co-infections in COVID-19 patients.On the other side, the use of antibiotics can disrupt the balance of the intestinal microecology.The dysbiosis of gut microflora has been linked to disease severity and impaired immune response in COVID-19 patients. 32In this COVID-19 cohort, 65.5% of patients received antibiotic treatment.However, the use of antibiotics did not always align with the detection of co-infected bacterial pathogens (Table S4).In clinical practice, some patients use antibiotics as a preventive treatment measure for co-infection events, which may also have a positive impact on prognosis for COVID-19 patients.However, more accurate and efficient detection methods may help us use antibiotics more rationally, thereby minimizing their potential negative effects. Therefore, to facilitate a more judicious use of antibiotics, it is advisable to expand the utilization of PCR and other clinical diagnostic tests for key infections, including S. pneumoniae, Haemophilus, Klebsiella, Pseudomonas.This strategic approach ensures that antibiotic treatment is reserved for patients with genuine clinical indications, thereby minimizing unnecessary antibiotic exposure. 
Risk factor analysis was conducted to investigate the relationship between death and clinical features in COVID-19 cases.Dyspnea, a crucial clinical feature of lower respiratory tract infection and pneumonia, emerged as a significant risk factor for death based on the results of univariable and multivariable Cox regression analysis.This finding highlights the importance of prompt hospital admission for patients experiencing breathing difficulties to ensure timely and professional treatment.Furthermore, the analysis revealed that hypoproteinemia was a significant risk factor for death in COVID-19 patients.Hypoproteinemia is associated with a high metabolic state that can lead to excessive protein loss. 33The loss of proteins impairs immunological functioning and is linked to more severe disease, rapid deterioration, and higher fatality rates. 34Lymphocytes play a central role in the immune response, influencing the outcomes of bacterial and viral infections, as well as the effectiveness of vaccines. 35Our study found that 14.20% (25 out of 176) of cases with low lymphocyte counts experienced mortality, while none of the cases with high lymphocyte counts resulted in death.The adjusted hazard ratio for the cohort with low lymphocyte counts was 2.01 (1.07-3.76)compared to those within the normal interval.This highlights the significance of lymphocyte counts as a prominent clinical feature for assessing the prognosis of COVID-19.Lastly, cases with co-infection of M. pneumonia predicted a higher risk of death compared to other cases.Although a meta-analysis suggested that 42% of bacterial co-infections in COVID-19 were caused by M. pneumonia, 36 our cohort showed a lower prevalence of M. pneumonia co-infection at only 0.76% (4 out of 529).Nonetheless, M. pneumonia co-infection may exacerbate clinical symptoms and increase morbidity if left undetected or untreated. 37Therefore, active attention and timely treatment of M. pneumonia co-infections should be given during the treatment process, especially for elderly and infant patients. Dyspnea, lymphopenia and hypoproteinemia are the three risk factors to death for COVID-19 patients.More than one-third of the patients simultaneously had both lymphopenia and hypoproteinemia (refer to Figure S3), which are immunocompromised conditions, and it may be reasonable to consider prophylactic treatment with broad-spectrum antibiotics for these susceptible patients.Correlation analysis indicated that lymphopenia is associated with the coinfection of H. influenza, and hypoproteinemia is associated with the S. pneumonia, H. influenza and P. aeruginosa.This result indicates a significant correlation between the co-infection of these bacteria and two key clinical indicators, namely hypoproteinemia and lymphopenia, closely associated with COVID-19 outcomes.This also further emphasizes the importance of controlling the occurrence of concurrent infections in the treatment of COVID-19 for prognosis. 
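The association analysis mentioned above (chi-square test with Cramer's V for the strength of association) follows the standard contingency-table recipe. Below is a small sketch with an invented 2x2 table relating hypoproteinemia to, say, H. influenzae co-infection; the counts are placeholders, not the study's data.

# Chi-square association and Cramer's V on a hypothetical 2x2 table
# (rows: hypoproteinemia yes/no; columns: H. influenzae co-infection yes/no).
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[45, 127],    # placeholder counts, not the study's data
                  [78, 466]])

chi2, p, dof, expected = chi2_contingency(table)
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))

print(f"chi2 = {chi2:.2f}, p = {p:.4f}, Cramer's V = {cramers_v:.3f}")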
Liminations This study had several limitations that warrant consideration.Firstly, the analysis was confined to identifying co-infection patterns that could be detected within the bounds of the multiplex respiratory PCR panel.The scope of pathogen detection was limited to the selection of pathogens included in the panel, potentially overlooking certain co-infections, such as fungal co-infections, that were not covered.Secondly, it's important to acknowledge that the detection of respiratory pathogens using qPCR does not conclusively confirm active infections; rather, it indicates the potential risk of co-infection.As such, the presence of pathogens might not always translate to ongoing infections, and caution is needed in interpreting these results solely as evidence of co-infection.Thirdly, differentiating whether the bacterial infections documented in this study originate from community-acquired or nosocomial sources presents a challenge.It is plausible that patients could have harbored the bacterial organisms prior to the onset of the viral infection.Fourthly, the vaccination status concerning pneumococci, Hib, and influenza among our patients is unavailable, precluding an assessment of these vaccines' influence on co-infections in COVID-19 patients.Fifthly, our study lacks data from healthy or mildly affected COVID-19 groups as controls, hindering a comprehensive analysis of PCR methods' clinical relevance in detecting COVID-19 co-infections and guiding treatment.Additionally, the detected pathogens might be associated with preexisting chronic conditions or could have been acquired within a healthcare setting.This complexity underscores the need for nuanced interpretation when attributing the origin of bacterial infections. Conclusions This study involved qPCR analysis of respiratory specimens from 716 hospitalized COVID-19 patients during a surge in infections following the relaxation of public health measures in China.We screened for common viral and bacterial respiratory pathogens and found that 76.82% (550/716) of cases were co-infected with at least one pathogen, with 44.73% (246/550) having two or more co-infecting pathogens.Among bacterial co-infections, S. pneumonia, Hib, and K. pneumonia were the most prevalent, while FluA, RSV, and Adenovirus were the most common co-infection viruses.Our study revealed discrepancies between the detection of bacterial co-infections and the prescription of antibiotics in some cases.Univariate and multivariate Cox proportional hazards regression analyses identified several covariates associated with an increased risk of death, including dyspnea, hypoproteinemia, low lymphocyte counts, and M. pneumonia co-infection.Vaccinating against coinfectious pathogens, such as pneumococcal, Hib, and influenza vaccines, constitutes an effective strategy in COVID-19 treatment.This approach helps prevent co-infections and enhances patient outcomes when health system capacities may be limited.Treatment guidelines that optimize respiratory support, nutrition, and selective antibiotic use can be beneficial, particularly for vulnerable patients with severe or complicated COVID-19.Overall, this study emphasizes the importance of understanding and managing co-infections in COVID-19 patients to enhance the quality of care and reduce mortality rates, especially in challenging healthcare situations. 
Figure 1: The distribution of detected co-infection pathogens of cases in this study. The 12 pathogens are shown as rows of a matrix at the bottom, with intersections of pathogens indicated by connected filled circles, and the number of corresponding pathogens' combinations shown as a bar plot above.
Figure 2: The outcome of COVID-19 patients under different risk factors. The Kaplan-Meier method and the log-rank test were used to compare the estimated survival according to different risk factors. Survival curves according to dyspnea (A), hypoproteinemia (B), lymphocytes (C) and the infection of M. pneumoniae (D), separately.
Table 1: Characteristics of Hospitalized COVID-19 Patients Included in This Study.
Table 2: Single and Multiple Co-Infections in Patients with SARS-CoV-2 Identified by qPCR (n = 716). Notes: (a) n (%); (b) Fisher's exact test; (c) Pathogen/Virus refers to any respiratory pathogen/virus other than SARS-CoV-2.
Table 3: Respiratory Pathogens Identified by qPCR Among the Patients Tested, by Disease Severity (n = 716).
Table 4: Factors Associated with the Death of Hospitalized COVID-19 Patients (n = 670). Note: Data do not sum to the total because of missing data.
2023-10-26T15:19:05.134Z
2023-10-01T00:00:00.000
{ "year": 2023, "sha1": "ac32d8b519958da4d336fc65056aa8b4d0ead255", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=93709", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0ec46e9726fb81d5bfcc56ae384608734a76e216", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
14488277
pes2o/s2orc
v3-fos-license
Disorder and Power-law Tails of DNA Sequence Self-Alignment Concentrations in Molecular Evolution
The self-alignment concentrations, $c(x)$, as functions of the length, $x$, of the identically matching maximal segments in the genomes of a variety of species, typically present power-law tails extending to the largest scales, i.e., $c(x) \propto x^{\alpha}$, with similar or apparently different negative $\alpha$s ($<-2$). The relevant fundamental processes of molecular evolution are segmental duplication and point mutation, and the stick-fragmentation phenomenology has recently been used to account for the neutral evolution. However, disorder is intrinsic to the evolution system and, by freezing it in time (quenching) in the setup of a simple fragmentation model, we obtain decaying, steady-state and general full time-dependent solutions, all $\propto x^{\alpha}$ for $x\to \infty$, in contrast to the only power-law solution, $x^{-3}$ for $x\to 0$, of the pure model (without disorder). We also present self-alignment results showing more than one scaling regime, consistent with the theoretical result that more than one algebraic term may dominate at different regimes.
INTRODUCTION
The effects of duplication and mutation are crucial for the genome evolution dynamics and the generation of biodiversity (see, for example, Ref. [1] for an overview of theory and mathematical models along with practical examples). The dynamics of duplication must be different from those of other processes, such as recombination; it is thus helpful to isolate its fingerprints in the genome sequences, say, by masking simple repeats [2], from the data for separate studies, say, of the neutral evolution dynamics, among the various debatable considerations (cf. the recent dialog [3]). One can also try to obtain information about life, concerning disease susceptibility and the paralog-versus-ortholog issue etc., from studying duplication and mutation (see, e.g., [4,5]). Fig. 1 presents some examples of the concentration (histogram) c(x) of the maximal segments of exactly matching nucleotides as a function of the match length x, from both eukaryotic and prokaryotic species. We see that the scaling exponents can be similar or apparently different for a variety of species; the power laws may not be the same for two distinct chromosomes of the same species (cf. more details in the caption). Massip and Arndt [7] computed and took the exponent of the repeat-masked whole-genome sequence to be exactly −3, a typical value for some specific chromosomes of various eukaryotic species [8,9], and they also showed that the repetitive elements [2] greatly deteriorate the scaling law. Li et al. [10,11] recently also discovered other relevant forms of power-law distributions. It thus appears to us that a reasonable model, especially one from the null hypothesis of neutral molecular evolution, should present the power-law tails at large scales as well as the different possible scaling exponents, with a common or somewhat "universal" mechanism among them. Massip and Arndt [7] used an analytically tractable model with the pure fragmentation phenomenology (Koroteev and Miller [16] had done simulations with descriptive procedures containing some of the essential features, and they later found similar results [14]).
The scenario by these authors was the following: the maximal matching segments, defined by copies of nucleotide sequence that are the same but differ when extended at either end, come from duplications subject to mutations. Duplications ("Step 1" in Fig. 2) of a single sequence produce exactly matched segments of the same length, and random mutations "break" them into matching pieces of shorter lengths ("Step 2" in Fig. 2). By suitably assigning the (constant) mutation rates and the (linear) balancing of gains and losses at each scale, they obtained the fragmentation model well studied in other disciplines: the solution to the model with a constant input at a fixed scale K ('monodisperse' input) contains a part scaling with the exponent −3, as was also found earlier by Ben-Naim and Krapivsky [13], who also pointed out that the −3 scaling was actually a small-scale asymptote, i.e., c(x) ∝ x^{−3} as x → 0 (compared to the system size, say), for the "head" instead of the "tail" (x → ∞ compared to the number of alphabets, say) of the concentration. Such a 'head' versus 'tail' issue can be effectively distinct for the dynamics and thus calls for further studies. Although the notions of being 'large' compared to some small scale and being 'small' compared to some large one do not directly conflict, we will show that the physics of the scaling laws is different.
FIG. 1: The concentrations c(x) [6] of maximal matching segments in the self-alignments of the genome sequences of various species, shifted apart for better visualization, showing similar or apparently distinct scalings extending to the largest scales. All data, except those in the inset whose plot of the human genome reproduces with Chromosome 1 the results of Massip and Arndt [7], are shown only for scales above 20 bp, below which there is no power law and the computation is also very expensive. Eukaryotic species, unlike the prokaryotic ones, are in general seriously affected by simple repeats, as shown by the data of Homo sapiens (human), where the very-small-scale range is also presented to show that the masked data is basically exponential due to random matching from the finite number, now 4, of the alphabets and coagulation effects (a simple equal-probability assumption also leads to an exponential distribution for the coagulation effect, just as for the random matches). Dashed lines are exact scaling laws for reference.
FIG. 2: An example of duplication and mutation, and the fragmentation and coagulation effects.
Fragmentation and coagulation effects from duplication and mutation
Let us call an x-length segment x-matching or x-unmatching, depending on whether it belongs to the set of matching segments of length x or not. A mutation may have two-fold effects on the change of matching segments. One obvious effect is fragmentation, i.e., an x-matching segment is broken into shorter segments and becomes x-unmatching: in "Step 2" of Fig. 2, "GAGGCCTATGT" fragments due to the mutations of "G" to "T" and "A" to "C", respectively. The other, opposite effect, not discussed previously, is coagulation, i.e., a point mutation may unite its two sides into a longer matching segment: in "Step 2" of Fig. 2, "TGT" and "AAC" coagulate due to the mutation of "T" to "G" between them. Yet another "null" effect is that the mutation turns an x-matching segment into another x-matching one, without changing the number/concentration of the x-matching segments.
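The Step-1/Step-2 picture above is easy to emulate numerically: duplicate a random sequence, let point mutations hit both copies, and read off the lengths of the maximal exactly matching segments. The toy Monte Carlo below does just that; it is only an illustration of the mechanism with arbitrary parameters, not the alignment pipeline used for Fig. 1.

# Toy Monte Carlo of the duplication + point-mutation picture (illustration only).
import random
from collections import Counter

random.seed(0)
ALPHABET = "ACGT"

def mutate(seq, mu):
    """Apply independent point mutations with per-site probability mu."""
    return "".join(random.choice(ALPHABET.replace(b, "")) if random.random() < mu else b
                   for b in seq)

def match_lengths(a, b):
    """Lengths of maximal runs of identical sites between two equal-length sequences."""
    runs, run = [], 0
    for x, y in zip(a, b):
        if x == y:
            run += 1
        elif run:
            runs.append(run)
            run = 0
    if run:
        runs.append(run)
    return runs

hist = Counter()
for _ in range(2000):                                            # many independent duplication events
    ancestor = "".join(random.choices(ALPHABET, k=1000))
    copy1, copy2 = mutate(ancestor, 0.02), mutate(ancestor, 0.02)  # Step 1 then Step 2
    hist.update(match_lengths(copy1, copy2))

for x in sorted(hist)[:10]:
    print(x, hist[x])                                            # concentration c(x) of x-matching segments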
In general, the coagulation-fragmentation effects of mutations should depend on the concentration itself and even on the detailed structure of the sequences, which can be formally described by a general (stochastic) nonlinear integro-differential equation with stochastic (both in space and time) mutation rates, the coagulation-fragmentation model (see, e.g., [15] for deterministic but kinetic descriptions already analyzed rigorously by mathematicians, and references therein). A systematic derivation of the (random) coefficients in the reaction rates [15] has not been available, but we are not completely clueless. Molecular biology considerations and the vast amount of accumulated wisdom about coagulation-fragmentation processes provide useful information for us to proceed tentatively. For example, given the mutation rate, the fragmentation effect may well be modeled by the conventional fragmentation model; as for the coagulation, due to the fact that the alphabet size, 4, of genome sequences is very small compared to the total length, its effect should mostly concentrate at small scales (actually the coagulation effect and the finite-alphabet effect are not completely separable). Thus, as an application and development, we start closely with the very classical model [7,12,13].
Disorder in genome sequences and dynamical processes
One possible origin of the disorder is that there are many different segments of the same matching length; or, in other words, an x-matching set contains segments of different local (arrangements of) nucleotide pairs and/or different "ribbon" writhe, torsion and twist. These x-matching but different segments characterize mutations and/or duplications differently. The (random) environments also add to the disorder in the duplications and mutations. Since we cannot, or need not, know exactly all the details, a specific statistical distribution is applied. For this, as an idealization, one can think of a big ensemble containing a distribution of sub-ensembles; or, in other words, all the x-matching segments do not correspond to independent and identically distributed (i.i.d.) variables but contain (infinitely) many subsets of elements corresponding to i.i.d. random variables. We have to make appropriate specifications and simplifications with "effective" parameters, starting with the model where the concentration c(x, t) of segments of length x at time t evolves according to the following rate equation [7,12,13]:
$$\frac{\partial c(x,t)}{\partial t} = -\mu\, x\, c(x,t) + 2\mu \int_x^{\infty} c(y,t)\, dy + f(x), \qquad (1)$$
where, compared to previous studies, the new element in the model lies in the disorders of the initial condition c(x, 0), of µ and of the input f(x). The input is used to model the gross contributions from the duplications and the coagulation effects of mutations. Instead of dealing directly with the stochastic equation, we freeze the disorder in time (quenching) for a simplification.
SOLUTIONS
It is not completely clear whether we should best treat our present genome data as the "decaying" state, given that the duplications happened a long time ago and the mutations are on-going, or as the statistical steady state, balanced among the effects of duplications and mutations, or, most generally, as a state corresponding to the time-dependent solution with initial data and time-dependent inputs. The answer probably depends on the time scale we want to put our current observation in. We thus check all possibly relevant solutions.
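Before examining the individual solution classes, a quick numerical sanity check of Eq. (1) may be helpful: integrate it on a length grid with f = 0 and compare, at the final time, with the closed-form decaying solution for an exponential initial condition quoted in the next subsection. The sketch below uses a rough explicit-Euler discretisation with arbitrarily chosen parameters.

# Rough explicit-Euler check of the fragmentation rate equation Eq. (1) (all parameters arbitrary).
import numpy as np

L, nx = 60.0, 600
x = np.linspace(L / nx, L, nx)
dx = x[1] - x[0]
mu, dt, nsteps = 0.05, 0.01, 2000        # mutation rate, time step, number of steps

s0 = 0.5
c = np.exp(-s0 * x)                      # initial condition c(x,0) = exp(-s0*x), no input f
for _ in range(nsteps):
    tail = np.cumsum(c[::-1])[::-1] * dx             # crude quadrature of int_x^inf c(y,t) dy
    c = c + dt * (-mu * x * c + 2.0 * mu * tail)     # Eq. (1) with f = 0

t = nsteps * dt
exact = ((mu * t + s0) / s0) ** 2 * np.exp(-(mu * t + s0) * x)   # closed-form decaying solution
for xi in (2.0, 5.0, 10.0):
    i = np.argmin(np.abs(x - xi))
    print(f"x={x[i]:5.2f}  numeric={c[i]:.4e}  exact={exact[i]:.4e}")
print("mass (should stay close to 1/s0^2):", np.sum(x * c) * dx)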
"Decaying" solution Let's start with the "decaying" case with f (x) = 0 which may correspond to the case dominated by the fragmentation process and can be accurate for large scales. Given the initial condition c(x, 0) = δ(x − K), Ziff and McGrady [7,12] found the solution for 0 < x ≤ K, otherwise null; and, in general Note that, due to "mass" conservation M = ∞ 0 xc(x, t)dx, the long-time solution c(x, ∞) = M δ(x)/x, which simply means that as time goes the stick will just breaks down into "powders" of infinitesimal length. Time-averaging, by integration from t = 0 to ∞, of the exponentially "decaying" Eq. (2) does lead to a power-law distribution with exponent −3, as noticed in Ref. [7]. Such a result from collecting the snapshots for both the history and future, however, to our point of view is not appropriately or clearly related to the outputs in Fig. 1 (actually time accumulations of "history" ending at infinite time +∞ and reaching the current moment t are given precisely by the steady-state and general time-dependent full solution, respectively, to be presented below.) Note also that there is an annoying pulse at K in such a treatment. We first check an example with a realization withc(x, 0) = e −sx , as also in Ziff and McGrady [12], buts has quenched disorder: We always use a tilde to denote a realization of the disorder. Our solution can be obtained in two steps. First, fix µ =μ to getc (x, t) = (μt +s) 2 s 2 exp{−(μt +s)x}. Then we integrate over the distributions Pμ ofμ and Ps ofs to get the final averaged solution. The more general result in some appropriate conditions [18] is possible to be evaluated with Laplace's method. We illustrate them with definite examples as follows. For instance, assuming The restriction m > 1 here is due to the requirement of the convergence of the continuous integral, and, in practice, with discrete and/or finite scales, it may be relaxed by easily tuning the ansatz at small argument or controlling the integration range. Fig. 3 shows, with λ = 2, Λ = 5 and t = 2, a typical plot of solutions for n = 0, m = 1 and 2, compared to lines with exact slopes −3 and −4: As said, for m = 1 the integration diverge at s = 0, so we obtain the result approaching x −3 in the figure by setting Ps(s) = 0 at very small s. We just remark that the above result turns out to be quite robust for a large class of reasonable So, the algebraic tails appear to be the generic output of the combination of such distributions. Mathematically these results remind us but are different to the "weighted mixtures" of Willinger et al. [19] who requires the distribution (corresponding to our concentration) be a scaling function. But, indeed now the power-law tail is also what they stated to be "more normal than normal [Gaussian]". In genome sequence, the question is then what exactly are the initial distribution, the disorder in it and in the duplication and mutation rates? We tend to believe that the above ansatzes, with possible quantitative modifications, used for the explicit calculations should be qualitatively 'reasonable' in describing what has been happening in nature, since they just simply represent the obvious facts of peaks at some (moderately) small values and the convergence properties. But all these follow the assumption of quenched disorder. Quenched disorder should be considered to be a working hypothesis or an effective modeling strategy (as widely applied for complex systems). 
Intuitively, freezing the disorder of the initial concentrations in time may sound more natural, while freezing that of the mutation rates is just a working simplification.

Steady state solution

Since duplications keep happening without stopping as time goes on, and since mutations also lead to coagulation, it is also illuminating to check the other ideal extreme case, the statistical steady state, as follows. The final steady-state solution with a realization of the input reads [13]

μ̃ c̃(x) = (1/x³) [x² f̃(x) + 2 ∫_x^∞ y f̃(y) dy].   (6)

So, whatever the steady input is, a tail of −3 for x → ∞ is not consistent: one can just check situations with f̃(x) decaying as, faster than, or slower than x^{−2}. Such a model can produce a genuine slope steeper than −3 only with an input of power law steeper than −2: from Eq. (6), if and only if ε > 0, an input of slope −2 − ε produces a distribution of slope −3 − ε. For instance, an exponential input gives an exponential tail with an algebraic prefactor. Note in particular that ε cannot be 0. [Such observations were already partly made by Koroteev and Miller [16] semi-empirically. For the input f̃(x) = δ(x − K), a solution ∝ x^{−3} extends to all scales below K [7,13], but we are afraid that such a "monodisperse" input is not the case for evolutionary genome sequences and that the result is not directly applicable (not to mention other issues, such as the pulse at x = K).] The restriction to power-law inputs for a genuine power-law tail is removed by disorder: for example, an input f̃(x) = λ̃ e^{−λ̃x} gives

μ̃ c̃(x) = (e^{−λ̃x}/x³) [λ̃ x² + 2x + 2/λ̃],

which, when λ̃ has (quenched) disorder with a distribution ansatz P_λ̃(λ̃) ∝ λ̃^n e^{−Λλ̃}, Λ > 0 and n > 0, produces an asymptotic tail (x → ∞)

c(x) ∝ x^{−α_s},   α_s = 3 + n.   (7)

The exponent α_s is independent of the μ disorder. Just as in the decaying case, the condition n > 0 is required for convergence of the continuous integration; in practice, with discrete and/or finite scales, this condition may be relaxed by tuning the ansatz at small arguments. Also, some other 'reasonable' (again, in the sense of being consistent with our understanding of the duplication and coagulation effects of mutations) ansatzes for the input produce power-law tails as well.

Full time-dependent general solution

Strong as the theoretical support from the decaying and ultimate steady-state solutions is, it is still natural to consider the more general time-dependent solution, with the memory of the initial supply (the first duplications) and of the other inputs along the history. This full time-dependent general solution is composed of two parts, the linear superposition of the solution "decaying" from some initial condition and of the solution driven by some input. As already given with the Mellin transform and the Charlesby method by Ben-Naim and Krapivsky [13], whose details we do not reproduce here, the solution is simply the linear superposition of the previous decaying solution from some given initial condition without input and the final forced steady-state solution forgetting the initial condition, a consequence of the linearity of the dynamics (as long as the forcing does not depend on the solution itself). Thus, we can immediately conclude from Eqs. (5) and (7) that the statistical average over the quenched disorder also presents an asymptotic power-law tail, schematically c(x) ≈ C₁ x^{−(m+n+2)} + C₂ x^{−α_s}. We remark that at different scales separated far apart, the two components may dominate, respectively.
Actually, we have already neglected subdominant algebraic terms for x → ∞ and/or t → ∞ while giving the final decaying and steady-state solutions, but the subdominant algebraic term(s) could dominate in some intermediate regime(s), showing different power law(s). Whether or not this is happening in the data depends on the coefficient(s), C_i, determined by the dynamics, and on whether there are other contaminations. For example, the alignment concentrations of rice, as given by Fig. 4, appear to support two power-law regimes, especially for the unmasked data. Some other data also present a similar feature [20], though they do not always show results as sharp. On the other hand, as a 'negative' effect, such mixed algebraic components may also add to the ambiguity in detecting the power law where they do not separate well or where the data do not have enough range of scales to separate them [8].

DISCUSSION

We reiterate that the classical result of the pure fragmentation model with constant input predicts an x^{−3} 'head' for x → 0, although the 'monodispersion' [13] case, with the input being a pulse/delta at a single largest scale, extends the scaling to just below the forced scale; 'monodispersion' of duplicated segments of evolutionary genomes [7], however, appears to us to be unrealistic. Coming back to nature's data, we summarize that there are a variety of power laws, which may be abstracted as a problem of tails extending to the largest scales, ideally x → ∞. The physical intuition, the various scaling laws computed for different species or even for different chromosomes of the same species, and some of the poor convergence of power-law data (cf. Fig. 1 and Refs. [8][9][10][11]) all call for the consideration of disorder, which, as we have shown, turns out to help solve the problem. The exponential "head" of the concentration of course comes partly from the random matching due to the finite number, 4, of the alphabets, as well as from the coagulation effect, which has been taken into account in our input. Introducing disorder into the pure fragmentation model for DNA evolution [7], even though it is a simplification of the supposed-to-be stochastic fragmentation-coagulation differential-integral equation, somehow opens a Pandora's box. Various ansatzes of disorder were found to produce power-law tails in quite a generic way, which demonstrates some universality in nature, but they have interesting differences in biophysics, demonstrated through the 'complexity in genomes' ([21] and references therein). So, a meaningful direction of study is to identify the disorders in the data. Further measurements/computations from the genomes, to offer more information or to quantify the various ingredients such as the distributions of the duplications, the mutation rates and the coagulation effects, will help to narrow down our somewhat general results to even more specific biophysics. It is interesting to identify what is specific (especially concerning disorder) to the masked repeats and the possible relevance to selection: to the best of our knowledge, there are no established selective mechanisms that would be in favor of or against a power-law concentration. There had been some other models of molecular evolution emphasizing different aspects of observations
2014-12-20T02:50:39.000Z
2014-11-20T00:00:00.000
{ "year": 2014, "sha1": "9fd462876698928afed47761c71f3fb32f619360", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "9fd462876698928afed47761c71f3fb32f619360", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Biology", "Physics" ] }
169672966
pes2o/s2orc
v3-fos-license
Coordinated protection and control strategy with wind power integration for distribution network : In this study, an optimal protection and control coordination strategy is proposed, which pursues to prevent unwanted protection and control operations caused by wind power integration, as well as adjust the emergency states of the power system to stable operation conditions. Moreover, in order to implement the proposed strategies, a hardware in-the-loop real-time simulation and testing platform is built up to demonstrate those unexpected protective control operations and testify the related solutions in a test distribution network. The related case studies and simulation results can demonstrate the effectiveness of the proposed methods. Introduction In Denmark, 40% of the total electrical power is transformed from wind in 2015 [1]; and more than 83 other countries of the world have started to adopt wind power as one kind of their power generations [2]. In whole Europe, 11.4% of the total electrical power usage comes from wind in 2014, and this rate is growing rapidly every year. At the same time, the technology of distribution generation (DG) has been highly developed and implemented, which greatly help the integration and utilisation of wind power [3]. When DG has been integrated to power grids, many advantages can be obtained on distribution system operation and control, e.g. local balance of power generation and consumption [4] etc. Moreover, with the applications of various modern distribution automation schemes, the normal distribution power networks are gradually evolving into micro-grids with more observability and controllability [5]. However, with more DG integration and smart technology application in the distribution level, the operation conditions of the power network become more variable and unpredictable; especially with wind power generation. On the other side, the protection system is normally conservative and standalone [6,7]. It is vulnerable to those unpredictable operation conditions which are not considered in advance. Also, the normal emergency control actions, e.g. load shedding (LS) and generation rescheduling/ islanding, are based on local measurement. These actions between protection and emergency control have not been coordinated sufficiently to keep the modern power systems secure and stable in the post disturbance stage. Thus, when a power network easily and frequently changes due to the DG integration or other events, the related protection and control system is prone to lose its coordination and operate in an unwanted way, which may be a trigger of the blackout of the whole system. To inhibit the negative influences of DG integration and upgrade the protection and control system performance in the post disturbance stage, an optimal protection and control coordination strategy is proposed for the distribution network with wind power integration. The cooperation strategy between relays and controllers is defined to prevent those unexpected relay and controller operations. Moreover, a hardware in the loop (HIL) realtime simulation platform is developed to provide a practical method to implement and demonstrate the proposed strategy. 
The rest of this paper is organised as follows: Section 2 will briefly give the introduction of the studied test system and related problem analysis; the proposed strategy and HIL real-time simulation platform-based implementation will be presented in Section 3; case simulation and method verification will be given in Section 4; finally, the conclusions are drawn in Section 5. Simple model of active distribution network In this study, a simple test distribution network is adopted, as shown in Fig. 1. This system contains five buses, two DGs [one gas turbine generator (GTG) and one wind turbine generator (WTG)] and three loads. The nominal system voltage is 12.66 kV and total load capacity is 2.5 MW. In this test active distribution network, the detailed model of a 1.5 MW full scale converter based wind turbine synchronous generator is adopted for the WTG, while a 2 MVA general GTG with a governor and an exciter is utilised for the GTG [8]. The wind speed model, the aerodynamic model, the mechanical models, the electrical models and the controller models have been concretely built in the WTG system model. The penetration level of the DG units, i.e. WTG and GTG, is sufficient to support all local loads (L1, L2, and L3) in islanded operation mode. The relays applied on the distribution feeders are distance relays, which are implemented with the general quadrilateral impedance characteristics [9]. These relays are represented by symbols 'Rx', e.g. R1-R5 in Fig. 1. The forward ones and backward ones are shown in red and black, respectively. R5 and R5r are installed with normally open circuit breakers, which can be closed to make a loop topology. R GTG and R WTG are the relays for protecting DGs. R L1 , R L2 , and R L3 are the relays for three dispersed loads. The related basic data can be found in the Appendix of [10]. Unwanted protection operation: Based on this small test system, the main influences of DG integration on the protection system of the distribution network can be investigated. Compared with current relay algorithms, the impedance relay algorithms are regarded as a better solution with a higher selectivity in the protection system of the distribution network. However, the main unexpected protection operation issues can be still encountered due to the DG integration. (i) Protection blinding. The case shown in Fig. 2 has been adopted to describe this kind of unexpected protection operation. When the test system is operating in the islanded and radial mode, i.e. CB, The related impedance loci seen by R1 are focused, and the variations of them in the system with/without WTG can be seen from Fig. 2b. Since zone 1 is not easy to be influenced by the infeed current of DG, the issues with the backup zones (zone 2 and zone 3) are more serious. With the contribution due to WTG integration, the fault current seen by R2 is increased from 511 to 567 A, while the fault current seen by R1 is decreased from 511 to 480 A. In Fig. 2, the black impedance locus is related to the faulty situation in islanded operation mode without WTG, while the pink impedance locus is the situation in islanded operation mode with WTG connected. Thus, after WTG is connected into the grid, if the settings of the distance protection system are still set as the situation without WTG, the infeed current from the WTG will make backup zones of distance relay R1 become less sensitive or blind to F1 in the situation with WTG, when R2 fails. Especially, zone 2 cannot locate the F1 clearly in the new operation condition. 
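A minimal numerical illustration of this blinding effect is the classical apparent-impedance (under-reach) formula for a relay with intermediate infeed; all impedances, currents and the reach setting below are invented for the example and are not the test-system data of Fig. 2.

# Apparent impedance seen by an upstream distance relay when a DG feeds the fault
# from an intermediate bus.  All values are illustrative only.
z_line1 = complex(0.8, 2.4)      # relay bus -> DG bus (ohm)
z_line2 = complex(1.0, 3.0)      # DG bus -> fault point (ohm)
i_relay = complex(480, -150)     # current through the relay (A)
i_dg    = complex(240, -75)      # infeed current injected by the WTG at the DG bus (A)

# Relay voltage = drop over line 1 (relay current only) + drop over line 2
# (relay current plus DG infeed), hence the classical under-reach formula:
z_apparent = z_line1 + (1 + i_dg / i_relay) * z_line2

zone2_reach = 1.2 * abs(z_line1 + z_line2)   # e.g. 120 % of the true fault-loop impedance
print(abs(z_line1 + z_line2))   # impedance of the actual fault loop
print(abs(z_apparent))          # larger: the relay "sees" the fault further away
print(abs(z_apparent) <= zone2_reach)   # False -> the infeed makes zone 2 under-reach (blinding)

With the infeed included, the apparent impedance moves outside the backup reach even though the physical fault location is unchanged, which is precisely the loss of backup coordination noted next.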
The backup cooperation between R1 and R2 will be jeopardised. (ii) Sympathetic tripping. If the former fault occurs in the gridconnected condition, the impact of infeed current from the WTG is smaller than the situation in the islanded condition. When the relay setting of R1 has already considered the WTG integration in islanded condition as new zone 2, the distance relay will locate the fault in a closer place (brown locus in Fig. 2b). If the inverse time overcurrent relay is applied, a faster tripping will be initiated, which is the classic case of sympathetic tripping. As for the impedance relay, wrong fault location-induced tripping could also be regarded as sympathetic tripping. The related relay characteristics and operation loci can be seen in Fig. 2b. (iii) Unwanted DG islanding to cascading blackout. If we continually consider the former fault case in the islanded condition, during longer time delay of zone 2 of R1 when R2 does not work, the low voltage at bus 0 may induce an unexpected tripping of GTG. At almost the same time, the WTG at bus 1 could be tripped unexpectedly due to an even worse voltage situation. Then this islanded distribution network will be out of power and a local cascaded blackout occurs. The related impedance loci, breaker status, current and voltage waveforms can be observed in Fig. 3. It can be seen that all these situations caused by DG integration and network changing induce mal-operation and non-cooperation of the original protection system, which are very harmful to the required reliability of the protection system [11]. Unwanted protective control operation: If a three-phase bolted short circuit F2 is applied on the external grid side, the test distribution network will be islanded by opening the main CB, which can be seen from Fig. 4. The network islanding induces a temporary power imbalance of ∼500 kW inside. The original local generation rescheduling is very slow, while the local LS is not activated. Thus, the generation under frequency relay strategies on both GTG and WTG will be tripped since the related disturbance are big enough to violate the related criteria (f < 48 Hz, 1 s delay for WTG and 2 s delay for GTG). The related data can be found in Fig. 4 as well. Then the cascaded blackout of this small test system will be triggered, which can be observed from Fig. 5. The frequencies, voltages, currents and breaker status from the critical points are shown in the figure, it can be clearly seen that the progress of this blackout occurred in this small islanded system network due to the lack of generation is made by unexpected cascading trips. The voltage on the WTG side increases to above 2 p.u., which will be dealt with by the WTG control system to discharge the surplus energy and stall the wind turbine. Proposed strategy and implementation To prevent these unexpected protection and control operation, the related protection and control strategies need to be improved to better consider the situations and give more efficient time and room to each other. Thus, the voltage and frequency of the test distribution network in the post disturbance stage can be regulated to a secure and stable level. Based on the problem analysis and case studies discussed above, the cooperation between different protections, and the cooperation between protection and emergency controls will be predesigned offline and executed online, based on the centralised control centre and IEC 61850 communication network. 
In this study, the emergency control will focus on protective relay blocking (RB), generator rescheduling (GR) and LS. To obtain a reliable protection and control cooperation strategy for new system operation conditions, the prevailing breaker status and controller status are adopted to identify the updated operation condition, and then the new suitable protection setting groups (SGi) and control modes (Ci) will be chosen and applied to all related relays and controllers. The brief process of the proposed strategy can be seen in Fig. 6.

Problem statement and optimisation algorithm

With the aim of efficiently coordinating the distributed protective relays and controllers, the related control centre is designed to choose an optimal coordinated protective control strategy, which minimises the total power loss in the post-disturbance stage. The protective control strategies include relay setting regulation (SGR), RB, GR, and LS. Thus, the objective function of the problem can be expressed as follows:

min P_l = min (k_1 P_SGR + k_2 P_RB + P_GR + P_LS),   (1)

where P_l is the total power loss in the post-disturbance stage; P_SGR and P_RB are the power losses induced by the protection strategies SGR and RB, respectively; and P_GR and P_LS are the power losses induced by the control strategies GR and LS, respectively. This optimisation problem is still constrained by protection operation requirements and limits, power flow equations, generation dispatch capability, etc. [9,12]. The relationships between the control strategies GR/LS and power loss can be easily deduced [13], whereas the relationship between SGR/RB and power loss is less direct. Considering the risk of failures of SGR and RB, the related power loss can be calculated based on the possible power loss induced by those failures or by unexpected relay operations during cascading trips [10]. Two weight factors, k_1 and k_2, are chosen in (1). In this study, k_1 = 1.2 and k_2 = 1.5 are adopted, which means that RB has a higher priority than SGR, and that protection strategies have a higher priority than control strategies. A toy enumeration illustrating this weighted selection is sketched after the implementation description below.

Implementation of the proposed strategies

To provide a practical method to implement and demonstrate the proposed strategy, a HIL real-time simulation platform is developed, which can be seen in Fig. 7. This HIL real-time simulation platform has been built based on Opal-RT's eMEGAsim simulator, OMICRON test devices and ABB Relion 670 relays [14,15]. Firstly, test power systems are modelled based on Matlab/SimPowerSystems and eMEGAsim/Artemis toolboxes, especially the primary power components. Secondly, virtual secondary power components are developed in a Matlab/Opal programming environment, e.g. relays and controllers are modelled based on the standard library models or user-defined models. Thirdly, a communication network is built based on the relevant communication protocols, e.g. GOOSE (IEC 61850-8-1) and sampled values (SV/IEC 61850-9-2LE), in the Opal simulation system. Fourthly, the connection interface is built between the physical ABB Relion 670 relays and the Opal simulator, based on both the IEC 61850 communication network and wired analogue/digital signal channels. In the end, the related control algorithms are developed in the control centre in OPAL to realise the optimal protection and control coordination strategy and to test it on this HIL real-time simulation platform.
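As a toy illustration of Eq. (1) (not the authors' implementation; the candidate actions, loss figures and the feasibility rule are invented for the example), the following sketch enumerates combinations of the four strategy types and picks the feasible combination with the lowest weighted post-disturbance power loss.

from itertools import product

# Candidate decisions for each strategy type (illustrative values only).
# Each option maps to an assumed power loss in MW attributed to that action.
options = {
    "SGR": {"keep_settings": 0.30, "switch_setting_group": 0.10},
    "RB":  {"no_blocking": 0.25,  "block_backup_zone": 0.05},
    "GR":  {"no_reschedule": 0.40, "ramp_GTG_up": 0.15},
    "LS":  {"no_shedding": 0.50,  "shed_L3": 0.20},
}
k1, k2 = 1.2, 1.5   # weights from the paper: RB prioritised over SGR, protection over control

def feasible(choice):
    # Placeholder for protection-coordination, power-flow and dispatch constraints.
    # Here: a purely invented rule forbidding "no_reschedule" together with "no_shedding".
    return not (choice["GR"] == "no_reschedule" and choice["LS"] == "no_shedding")

best = None
for combo in product(*options.values()):
    choice = dict(zip(options.keys(), combo))
    if not feasible(choice):
        continue
    loss = (k1 * options["SGR"][choice["SGR"]] + k2 * options["RB"][choice["RB"]]
            + options["GR"][choice["GR"]] + options["LS"][choice["LS"]])
    if best is None or loss < best[0]:
        best = (loss, choice)

print(best)   # minimal weighted post-disturbance power loss and the chosen strategy set

In the paper this selection is of course subject to the actual protection-coordination limits, power-flow equations and dispatch capability rather than a single invented feasibility rule.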
Case study and simulation results

Based on the optimal cooperation strategy described above, the former two cascaded blackout cases are used here to validate the effectiveness of those strategies.

4.1 Case 1 on the cascading trips described in Fig. 4

The reason for this cascaded blackout is the mis-cooperation of the protection system, i.e. zone 2 of R1, as a backup protective function, is slower than the operation of R_WTG and R_GTG. The solutions defined on the basis of the proposed optimal strategy can be seen in Table 1. The minimised power loss is 0.4 MW. After the related solutions have been implemented, the fault is cleared in a timely manner and the trend of cascading trips is stopped. Thus, parts of the distribution network survive the faulty transition with the new solutions, and the voltages at the remaining buses recover to a secure level. The impedance seen by R1 has been moved out of the operation areas. The related results can be observed in Fig. 8.

4.2 Case 2 on the cascading trips described in Fig. 5

The reason for this cascaded blackout is the big power unbalance induced by the islanding of the distribution network. The solutions defined on the basis of the proposed optimal strategy can be seen in Table 2. In this case, the power load loss can be totally prevented by the optimal solution. After the related solutions are implemented, the unexpected relay operations and the emergency voltage/frequency conditions are efficiently inhibited, and the trend of cascading trips is stopped. Thus, the islanded distribution network survives the big power unbalance with the new solutions, and the voltages at the remaining buses recover to a secure level. The impedance seen by R1 has been moved out of the operation areas. The related results can be observed in Fig. 9.

Conclusion

In this study, an optimal coordinated protection and control strategy is proposed to prevent unexpected controller and relay operations due to WTG integration, such as protection blinding, sympathetic tripping, unwanted DG islanding, etc. The proposed strategy can efficiently coordinate the distributed relays and controllers, quickly define the effective protective control strategies to adjust the emergency operation conditions, and prevent cascading events. Moreover, a HIL real-time simulation platform is developed in this study to provide a practical method to implement and demonstrate the proposed strategy. Based on the case studies, the feasibility and necessity of the proposed optimal coordinated protection and control strategy against cascading events in the distribution power network have been verified.
2019-05-30T23:45:08.162Z
2018-08-24T00:00:00.000
{ "year": 2018, "sha1": "b19450adc7c67124b29f980bee809fda5678d122", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1049/joe.2018.0205", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "0f75eaa4aaa8d42de344c9d27ed0e06324f7e0ab", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
16321355
pes2o/s2orc
v3-fos-license
Genomics Study of Mycobacterium tuberculosis Strains from Different Ethnic Populations in Taiwan To better understand the transmission and evolution of Mycobacterium tuberculosis (MTB) in Taiwan, six different MTB isolates (representatives of the Beijing ancient sublineage, Beijing modern sublineage, Haarlem, East-African Indian, T1, and Latin-American Mediterranean (LAM)) were characterized and their genomes were sequenced. Discriminating among large sequence polymorphisms (LSPs) that occur once versus those that occur repeatedly in a genomic region may help to elucidate the biological roles of LSPs and to identify the useful phylogenetic relationships. In contrast to our previous LSP-based phylogeny, the sequencing data allowed us to determine actual genetic distances and to define precisely the phylogenetic relationships between the main lineages of the MTB complex. Comparative genomics analyses revealed more nonsynonymous substitutions than synonymous changes in the coding sequences. Furthermore, MTB isolate M7, a LAM-3 clinical strain isolated from a patient of Taiwanese aboriginal origin, is closely related to F11 (LAM), an epidemic tuberculosis strain isolated in the Western Cape of South Africa. The PE/PPE protein family showed a higher dn/ds ratio compared to that for all protein-coding genes. Finally, we found Haarlem-3 and LAM-3 isolates to be circulating in the aboriginal community in Taiwan, suggesting that they may have originated with post-Columbus Europeans. Taken together, our results revealed an interesting association with historical migrations of different ethnic populations, thus providing a good model to explore the global evolution and spread of MTB. Introduction Mycobacterium tuberculosis (MTB) continues to be a leading cause of human deaths by an infectious agent. It has been estimated that approximately one-third of the world's population has been infected with the tubercle bacillus, and 1.5 million die from MTB infection every year. 1 In 2012, 12,338 new cases were reported in Taiwan, with an estimated annual incidence of 53 cases per hundred thousand people. 2 Epidemiologic studies have revealed that different genotypes of MTB may be prevalent in different geographic regions worldwide and that genotype distribution is closely associated with population migrations. [3][4][5] The distribution of human MTB genotypes is closely associated with geography, ethnicity, age, and host factors. Recent developments in DNA sequencing technologies have revolutionized tuberculosis (TB) research, contributing to major advances in understanding the evolution and pathogenesis of MTB and facilitating the development of new diagnostic tests with increased specificity for TB. Identification of the genomic features of major MTB strains is key to deciphering the transmission of virulence and drug resistance among different strains. Thus, comparative analysis of wholegenome sequences can provide better insights into the evolution of the MTB strains present in Taiwan. Achieving these goals will improve our understanding of the epidemiology of TB in Taiwan and help guide prevention policy. Taiwan, a relatively isolated island in the southeast of mainland China, is regarded as a mixing vessel of immigrants over the past four centuries as colonization by different waves of ethnic groups occurred. 
The aborigines in Taiwan are of Austronesian descent, which distinguishes them from the major ethnic group on the island, the Han Chinese, and now reside predominantly in the mountainous regions or rural areas. 6 Documentation of aboriginals on the island can be traced back to the 16th century, when Spanish sailors arrived and named the island. There are 12 aboriginal tribes on the island, which are presumed to represent the ancestral colonies that inhabited the island for at least 4000 years. The other two predominant ethnic populations of Taiwan are descended from Genomics Study of Mycobacterium tuberculosis Strains from Different Ethnic Populations in Taiwan Han Chinese who migrated to the island in two major waves: the first during the Ming Dynasty around 1600 and the second between 1945 and 1950, when members of the military, veterans, and some civilians emigrated from mainland China due to the civil war there 7 ; in total, about two million mainland Chinese have migrated to Taiwan to date. Taiwan was occupied by the Dutch for 40 years beginning 1660, and the Japanese from 1895 until 1945. We previously demonstrated that the Beijing ancient strain and the Haarlem strain are the predominant MTB strains infecting aborigines in eastern Taiwan (Hualien City), the East-African Indian (EAI) strain is prevalent in southern Taiwan aborigines, and the Beijing modern strain is predominant in Han Chinese. [7][8][9][10] In the present study, six MTB strains − isolates of the Beijing ancient sublineage, the Beijing modern sublineage, Haarlem, EAI, T1, and Latin-American Mediterranean (LAM) − representing the major types of clinical strains isolated from three different ethnic groups (aboriginals, Han Chinese, "veterans") in Taiwan 7,8 were subjected to whole-genome sequencing. [11][12][13] The six Taiwan genomes were then compared to four reference MTB strains (H37Rv, H37Ra, CDC 1551 (LAM), and F11 (LAM) [14][15][16] as well as the genome of Mycobacterium bovis. The presence of significant sequence diversity in MTB could provide a basis for understanding pathogenesis, immune mechanisms, and bacterial evolution. Polymorphic genes are good candidates for virulence and immune determinants, because proteins that interact directly with the host are known to have elevated divergence. Examples in MTB are the PE/PPE genes that encode proteins with proline-glutamate and proline-prolineglutamate motifs. 17 Methods study patients and bacterial isolates. Aborigines and veterans are entitled to government-subsidized health benefits under the National Health Insurance Claim System of Taiwan; therefore, the identities of these patients were confirmed by the type of insurance policy or the type of identification card. We obtained MTB isolates from three hospitals in three different regions of Taiwan. Patients with symptoms compatible with pulmonary TB and with sputum cultures positive for M. tuberculosis complex were included. Isolates were stored frozen by the participating hospitals and sent to the TB laboratory in the Division of Infectious Diseases of National Health Research Institutes (NHRI). MTB genomic DNA was extracted from primary LJ egg cultures as described previously. 18 In this study, we first characterized the phenotypes and genotypes of at least 1000 isolates of MTB from different ethnic populations, at least 100 isolates from each population (aborigine, veterans, and Han Chinese), followed by a comparative genomics study to provide a snapshot of mycobacterial evolution and its pathogenesis. 
Genome sequencing and dNA analysis. MTB strains were sequenced separately to 10-to 30-fold coverage of the genome using a Genome Sequencer 20 (GS20) or a Genome Sequencer FLX (GS FLX) instrument (454 Life Sciences, Roche) 22 with a 500-800-base pair shotgun library for each strain. The reads generated by 454 pyrosequencing were assembled using the GS De Novo Assembler version 2.5.3 provided by the manufacturer. Protein-coding genes were predicted from all contigs of each assembly using GLIM-MER version 3.02. 23 All contig sequences of the six strains were compared to the reference strain H37Rv to detect single-nucleotide polymorphisms (SNPs) using MUMmer version 3.20. 24 The genome sequence of H37Rv was downloaded from the NCBI ftp (ftp://ftp.ncbi.nih.gov/genomes/Bacteria/ Mycobacterium_tuberculosis_H37Rv_uid57777/). MUMmer provides a pipeline with the programs "nucmer", "deltafilter", and "show-snps" for sequence aligning, repeat filtering, and SNP/INDEL calling. The pipeline is able to detect all SNPs between two genomes, even where there are many sequence rearrangements. Phylogenies based on the whole genomes of the six MTB strains sequenced in this study and the five completely sequenced strains available through NCBI were constructed using the MUMmer distance matrix method adopted by Chan et al. 25 In brief, MUMs (maximal unique matches between the two genomes) obtained from MUMmer were summarized for each pair of genome sequences and then divided by the smaller genome length of the pair. construction of phylogenetic trees. The genetic diversity (based on MIRU-VNTR loci and ST) of the six selected Taiwan MTB strains was used to build a neighbor-joining tree. A pairwise distance can be obtained after log transformation and multiplication by −1. After the distance matrix of all the 11strains (the six Taiwan isolates and five reference strains) was completed, the program MEGA4.1 was applied to build a phylogenic tree using the minimum-evolution method (Fig. 1C). screening of sNPs in the Mtb genome. Although 454 sequencing allows rapid determination of the genome sequence, the data remain incomplete, with gaps and potential errors. To circumvent the time-consuming process of finishing, we bypassed the task of whole-genome alignment and instead used the reference strains CDC1551 and H37Rv as a framework against which to align each of our contigs separately by BLAST. The gene names, SNPs, and their locations in the contigs were all identified by this alternative procedure. Using the H37Rv genomic sequence as the reference, SNPs in the six Taiwan MTB strains were identified using MUMmer, and N substitutions and frame shifts were identified using BLAT. 26 Genes encoding PE and PPE family proteins were identified using BLASTX. The number of S and N substitutions identified in PE/PPE family protein-coding genes and all other protein-coding genes for each of the strains (using H37Rv as the reference) are summarized in Table 1. statistical analyses. BioNumerics software (v 3.0; Applied Maths) was used to analyze MIRU by character types. Similarities between MIRU types were calculated using the categorical coefficient, in which all MIRU loci were weighted equally. This procedure counted the number of matched loci between pairs of isolates; when there was a difference, they were scored as unmatched, irrespective of the number of repeats present (thus, 1 versus 3 scored the same, ie, unmatched, as 1 versus 4). 
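As a small illustration of the categorical coefficient just described (with made-up MIRU-VNTR profiles rather than the study's isolates), each pair of isolates is scored by the fraction of loci whose repeat numbers differ, irrespective of how large the difference is; the resulting distance matrix is what a clustering routine then consumes.

import numpy as np

# Toy MIRU-VNTR profiles: repeat numbers at 8 loci for four hypothetical isolates.
profiles = {
    "iso1": [2, 3, 5, 3, 1, 4, 2, 3],
    "iso2": [2, 3, 5, 4, 1, 4, 2, 3],
    "iso3": [1, 3, 6, 4, 2, 4, 2, 3],
    "iso4": [2, 2, 5, 3, 1, 5, 3, 3],
}

def categorical_distance(a, b):
    # Each locus is weighted equally; 1 vs 3 counts the same as 1 vs 4 (just "unmatched").
    return sum(x != y for x, y in zip(a, b)) / len(a)

names = list(profiles)
dist = np.array([[categorical_distance(profiles[p], profiles[q]) for q in names] for p in names])
print(names)
print(np.round(dist, 2))   # symmetric matrix, ready for UPGMA / neighbour-joining clustering

Such a matrix is the direct input to the dendrogram construction described next.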
A dendrogram was then constructed by the unweighted pair group method with arithmetic means (UPGMA). MIRU types were discriminated by similarity index and compared with the octal format, ST, and family results.

Ethics statement. This study was approved by the Human Ethics Committee of the National Health Research Institutes, Taiwan (Code: EC0961103). Because of the retrospective nature, the routine collection of clinical data in daily practice, and the dislinkage of personal information, the requirement to obtain informed consent was waived by our institutional review board.

[Spoligotype patterns and octal codes of the six representative strains: Ancient Beijing, Modern Beijing, EAI2, T1, Haarlem3, LAM3; see Table 2.]

Sequencing of the first three strains (M3, M7, and W6) was carried out using the GS20 pyrosequencing system, and the remaining three genomes (A18, A27, and M24) were sequenced using the GS FLX system, which doubled the read length from ∼100 bases to more than 200 bases (Table 3; total number of contigs/bases; predicted number of coding genes). Initial reference mapping of the shotgun reads was performed using the complete genomic sequence of the highly studied MTB strain H37Rv (Table 3; number of SNPs; number of INDELs). Coverage ranged from about 10× to 30× with the GS20 system, and we achieved an assembly rate of over 96%, with the total assembled genomes ranging in size from 4.2 to 4.3 Mbp. This level of coverage significantly lengthened the average contig sizes and also reduced the number of large contigs (>800 nucleotide flows, about 500 called bases).

Pairwise comparison of the M. tuberculosis genomes. The coding sequences of each of the six Taiwan MTB strains were compared against the completely sequenced genomes of five MTB strains available through NCBI: H37Rv, H37Ra, CDC1551, F11, and KZN. The number of SNPs identified in the coding sequences of M3, M7, and A27 was lower compared to the other three strains when H37Rv was used as the reference strain, suggesting that the former three are more closely related to H37Rv (Table 3). Notably, the number of SNPs identified in the coding sequences of M3, M7, and A27 was lower than the number of SNPs identified in the coding sequences of W6, M24, and A18. In all pairwise comparisons, the percentages of nonsynonymous (N) substitutions ranged from 58% to 63%, except in the case of M7 and F11. The M7 strain, a LAM-3 sublineage collected from a Taiwanese aboriginal patient, showed a strikingly lower number of single-nucleotide substitutions when compared against the Western Cape F11 strain, isolated from a TB epidemic in South Africa (Table 4), suggesting a closer relatedness. The numbers of synonymous (S) and N nucleotide substitutions identified between M7 and F11 are nearly equal (S = 58; N = 57).

Phylogeny of MTB strains based on genomic sequencing and VNTR-based genotyping. The six Taiwan MTB strains and the five reference strains were included in this analysis (Fig. 1). This phylogenetic tree (Fig. 1B) is almost congruent with the one we constructed using the neighbor-joining method as well as with the 17-loci MIRU-based analysis (Fig. 1A).

SNP identification in strains A18, A27, M3, M7, M24, and W6 using H37Rv as the reference. The six strains have similar N/S ratios when all SNPs in the protein-coding genes are taken into consideration. It is notable that three of the strains (A27, M7, and M3) have fewer SNPs and may be evolutionarily more closely related to H37Rv.
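The N/S ratios reported here and below rest on classifying each coding SNP as synonymous or nonsynonymous. A minimal sketch of that classification step (illustrative only; it assumes at most one substitution per codon and skips the per-site normalisation that a proper dn/ds estimator such as the Nei-Gojobori method adds):

from Bio.Seq import Seq

def classify_coding_snps(ref_cds, alt_cds):
    """Count nonsynonymous (N) and synonymous (S) differences between two aligned,
    equal-length coding sequences, assuming at most one substitution per codon."""
    n_count = s_count = 0
    for i in range(0, len(ref_cds) - 2, 3):
        ref_codon, alt_codon = ref_cds[i:i + 3], alt_cds[i:i + 3]
        if ref_codon == alt_codon:
            continue
        if Seq(ref_codon).translate() == Seq(alt_codon).translate():
            s_count += 1
        else:
            n_count += 1
    return n_count, s_count

# Invented mini-"gene": a reference and a mutated copy with one S and one N change.
ref = "ATGGCTGAAACCGGT"     # M  A  E  T  G
alt = "ATGGCCGAAACCGTT"     # M  A  E  T  V   (GCT->GCC synonymous, GGT->GTT nonsynonymous)
n, s = classify_coding_snps(ref, alt)
print("N =", n, "S =", s, "N/S =", n / s)

Aggregating such counts per gene or per gene family is what yields the ratios compared across the different reference genomes below.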
The genes for the PE/PPE family proteins in A27, M3, and M7 also have fewer SNPs in comparison to the other strains. The dn/ds ratio for the PE/PPE family was higher in A18 (EAI2_Manilla strain) compared to the average dn/ds ratio for the PE/PPE family. sNP identification in strains A18, A27, M3, M7, M24, and w6 using Mycobacterium africanum as the reference. SNP identification was also performed for the six strains using the M. africanum strain GM041182 (NC_015758) as (Table 5). In contrast to the SNPs for all proteincoding genes, most of the strains showed slightly lower dn/ds values in the PE/PPE family protein genes, except A18. sNP identification in strains A18, A27, M3, M7, M24, and w6 using Mycobacterium cannettii and Myco bacterium marinum as references. SNP identification was performed for the six strains using M. cannettii (NC_015848) as the reference (Table 6) and also M. marinum (accession no. CP000854) as the reference, a near relative of MTB with a 6.63-Mb genome containing 5424 coding sequences ( Table 7). Comparison of the six Taiwan strains to M. cannettii or M. marinum yielded many more nucleotide substitutions compared to using H37Rv or M. africanum as the reference. However, the dn/ds ratio was found to be much lower here. Analysis of M. bovis genomic sNPs using M. marinum as the reference. We extracted the sequence data for the 3953 coding sequences from the M. bovis genome (accession no. BX248333) and mapped them to the M. marinum genome. In total, 231,941 SNPs were identified in protein-coding genes and, among these, 49,900 are nonsynonymous substitutions. The N/S ratio was determined to be 0.27 for all proteincoding genes. A total of 1104 of the SNPs belong to PE/PPE family protein genes, of which 315 are nonsynonymous substitutions. The N/S ratio for PE/PPE family protein genes was determined to be 0.399. For the PE/PPE family or all genes, the N/S ratio for the SNPs identified in M. bovis using M. marinum as the reference is ,1. This is very different from the previous analysis for the six Taiwan MTB strains against M. marinum, in which the SNPs of the PE/PPE family have a higher N/S ratio than that of all genes (average 0.58 for PE/ PPE versus 0.27 for all). discussion Comparison of nonsynonymous substitutions per nonsynonymous site to the number of synonymous substitutions per synonymous site (dn/ds) between homologous genes is an important index in molecular evolution. In this study, we compared the dn/ds ratios among the SNPs called from mapping the sequencing reads of six MTB strains to different reference genomes. 17 There are two types of natural selection in biological evolution: (1) positive selection promotes the spread of beneficial alleles and (2) negative selection hinders the spread of deleterious alleles. 27 In our results, the average dn/ds values for SNPs called using H37Rv or M. africanum were found to be much higher than those using M. cannettii and M. marinum as references. Apparently, higher dn/ds ratios in genes encoding the PE/PPE proteins occur when genomes of relatively distantly related species such as M. cannettii and M. marinum are used as the reference genome (Tables 6 and 7). In other words, different selection effects are applied to PE/PPE protein genes versus other protein-coding genes in Mycobacterium evolution. Our observation coincides with those of previous studies that suggested a general positive selection or relaxation of negative selection in the molecular evolution of PE/PPE proteins in MTB. 
28 We believe that the second explanation is more likely in the case of MTB for the following reasons. (a) The effective population size of MTB has been significantly reduced because of its pathogenic lifecycle. This reduction in population size usually leads to decreased efficiency of selection, thus allowing deleterious mutations to accumulate in the genome. (b) The MTB−M. marinum dn/ds ratios are generally smaller than the ratios between different MTB strains. This is analogous to "smaller between-species distance than within-species distance." The small (between-species) divergence usually indicates negative selection, whereas the larger (within-species) diversity usually means relaxation of negative selection, unless some type of diversifying selection has been in action. Diversifying selection can cause remarkable genetic and phenotypic differences between different strains. One good example is the skin color of different human races. However, we will need stronger evidence and good biological explanations to claim this type of selection in MTB. In addition, we would like to emphasize that relaxed negative selection can sometimes be the source of functional innovations. A classical theory is "evolution by duplication," which means that duplicated genes (as in the case of the PPE family) are subject to relaxed negative selection because of functional redundancy. Therefore, they are free to evolve and may accidentally acquire new functions that may increase the fitness of the carrier organisms. Such functions and the related genes will be subject to positive selection afterward. So the type of selection actually changes over time and with the context of evolution. In contrast, the differing dn/ds ratios between PE/PPE protein genes and all protein-coding genes are not detectable in most of the Taiwan MTB strains when H37Rv or M. africanum is used as the reference. The A18 MTB strain (EAI2_Manilla) is an exception, in that it was the only strain that showed elevated dn/ds values in the PE/PPE family, regardless of the reference genome used for SNP calling. This suggests a different fate for PE/PPE proteins in the evolution of the A18 strain in contrast to the other five strains from Taiwan. Due to a complex interaction between the host, the pathogen, and the environment, the outcome of MTB infection and disease is highly variable. 29,30 There is mounting evidence that this variable outcome may be influenced by MTB genomic diversity. 29,31 However, the evolutionary forces that shape this variation are not well understood. Genomic comparisons have identified genetic variation for population screening; however, these analyses are limited to relatively few genetic loci that vary between the compared genomes and therefore are potentially misleading. 14,32,33 Nucleotide sequences provide robust data for studying population variation. The mutational processes that generate this variation are understood, and sequence data have been successfully used in the study of bacterial epidemiology, population structure, and evolution. 34 The complete genome sequences [14][15][16]34 provide access to all regions of the chromosome and facilitate such studies. A previous study of MTB strains in Taiwan by our group (based on ST and VNTR-MIRU analysis) revealed an interesting association of strains with historical migrations of different ethnic populations. 
35 Comparing whole-genome sequences of the main MTB strains in Taiwan in the present study confirmed the previous findings, thus establishing a good model to explore the global evolution and spread of MTB. The genome sequences of MTB strains isolated from representatives of Taiwanese aborigines (M3/Haarlem-3, M7/LAM-3, M24/Beijing ancient strain), a representative of a recent Han immigrant (W6/Beijing modern strain), and representatives of historic Han immigrants (A18/EAI2_MANILLA, A27/T1) were determined by 454 sequencing technology. 11 More than 95% of the reads were assembled into sequence contigs. The sequence data from these representative strains will be further analyzed to discover the unique genomic features of MTB infecting different ethnic groups in Taiwan. At present, we are focusing on MTB genomic information together with epidemiological and clinical data in order to identify factors significant in transmission, virulence, drug resistance, and protection efficacy of vaccines among different strains. We conducted Ka/Ks analysis to identify selection pressure on the protein-coding regions. As shown in Table 7, we found that, in general, there are more N than S changes in the MTB-coding sequences. Second, we found that the M7 strain isolated from an aboriginal patient is most closely related to the F11, an MTB strain isolated from a TB epidemic in the Western Cape of South Africa. This finding further supports our previous study, in which we demonstrated that the Haarlem and LAM lineages were circulating in the aboriginal community in Taiwan and suggesting a link of these strains to post-Columbus Europeans. 6,36 The Beijing strains have spread worldwide as a genetically conserved genotype of MTB, often in association with drug resistance. 8,[37][38][39][40] This worldwide spreading in the population structure of MTB is driven in part by man-made factors, and perhaps also linked with intrinsic mycobacterial characteristics. By using DNA MassARRAY ® technology (Sequenom), we have established protocols for rapid and cost-effective assays for distinguishing different sublineages of Beijing strains. 33 Moreover, we found SNPs in a putative DNA repair gene, which may be involved in facilitating spreading of the pathogen, but did not demonstrate an association with multidrug resistance. 33,[41][42][43][44][45] Furthermore, we are conducting informatics analysis of the six sequenced Taiwan MTB genomes. Preliminary data indicate as many as 620 SNPs in at least two of the sequenced strains. The information generated by comparative analysis is the basis for establishing an MTB genotyping procedure for tracing the evolution and distribution of different MTBs in Taiwan. Molecular population genetic analysis of clinical strains delineates relationships among closely related strains of pathogenic microbes and allows construction of genetic frameworks for examining the distribution of biomedically relevant traits, such as virulence, transmissibility, and host range. In seeking to describe the distribution of characteristics of MTB strains and identify the determinants of that distribution, we are attempting to identify factors that determine disease transmission. Comparative genomic hybridization microarray chips will be designed based on the determined genomic sequences in order to conduct population genetic studies quickly and efficiently. 
Such studies will not only help us to understand the dynamics of TB transmission in Taiwan but will also combine sequence analysis and microarray technology for investigating drug resistance and virulence. conclusions We demonstrated that Haarlem and LAM MTB strains are present in the aboriginal community in Taiwan, suggesting a link of these strains to post-Columbus Europeans. Taken together, our results revealed an interesting association of MTB strains with historical migrations of different ethnic populations, thus providing a good model to explore the global evolution and spread of MTB.
2018-04-03T05:50:34.068Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "a8962e8631e39a706bf4e18ee3c9e86371b490d8", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.4137/EBO.S40152", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a8962e8631e39a706bf4e18ee3c9e86371b490d8", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
207902366
pes2o/s2orc
v3-fos-license
Bladder cancer survival nomogram Supplemental Digital Content is available in the text Introduction Bladder cancer (BC) is a common urinary malignant tumor, [1] characterized by high morbidity and mortality. In 2018 in the United States (US) alone, there were 81,190 newly-diagnosed BC cases, and 17,240 BC-attributable deaths. [2] Approximately 25% of BC patients present with muscle-invasive BC or metastatic disease, while 75% present with non-muscle invasive BC (NMIBC). [3] The proportion of patients with NMIBC is relatively high; however, the high rate of recurrence (70%) in low-and intermediate-risk disease, and the fairly high rate of progression to muscle-invasive disease (30%) in high-risk NMIBC are cause for concern. [4][5][6] The majority of BC cases occur in people aged over 60. The main risk factor for BC is increasing age, but smoking and exposure to some industrial chemicals have also been reported as risk factors. [7] Numerous staging systems have been proposed for urinary bladder carcinoma and the most commonly used is the American Joint Committee on Cancer (AJCC) staging system. [8] The eighth edition of the US Joint Commission on Cancer (AJCC) staging manual, has established a tumor node metastasis (TNM) classification system, which indicates increasing understanding of the pathophysiology of BC and applicable treatments. This staging system is currently widely used in making prognostic estimates and treatment decisions in clinical practice. [9] However, there is evidence to suggest that patients with the same pathological grade or clinical staging of BC might still have different prognosis and ultimate survival. This further suggests existence of factors not directly related to the characteristics of BC that might affect patients' prognosis. To predict survival accurately and reliably, the use of nomograms has been proposed. A method based on more refined TNM staging for predicting individualized survival of bladder cancer patients is required, and a nomogram is a good method for this purpose. Nomograms are based on the TNM staging system, and other key prognostic factors associated with patient survival, and have been applied as a layered tool in clinical research on prostate, breast, gastric, and colorectal cancer. [10][11][12][13] Analysis of these nomograms has led to identifying potential survival prognostic factors. Overall, using the c-index, these models are considered to surpass clinical judgment when predicting patient survival. [10,14] Within BC research, previous studies have found potential prognostic factors through the construction of novel nomograms; for example, age, stage, grade, [15] tumor size, lymphovascular invasion, variant histology, [16] genetic variants (smad6, fn1, galectin-9, p53, pRB, p21, p27, and cyclin E1), [17][18][19] and hemocyte type (hemoglobin, albumin, lymphocyte, and platelet type) have been identified using this approach. [20] However, to assess accurately the prognosis of BC patients, studies involving a large number of patients, and internal and external validation datasets, are required. To our knowledge, this study is the first attempt to construct a clinical nomogram for bladder cancer survival, using a large database. We further construct a population-based survival-predicting model with internal validations. In addition, we used a TCGA cohort to externally validate the nomogram. 
Patients and study design

We identified BC cases from the Surveillance, Epidemiology, and End Results (SEER) database of the National Cancer Institute (http://seer.cancer.gov/) after signing the Research Data Agreement, which is on file at SEER. We were allowed to utilize the SEER*Stat client-server system and/or to download the files which make up the SEER Research Data. To be included in the study, patients had to have had pathologically confirmed BC recorded in one of the 18 SEER-covered registries at any point during the database's coverage period. We extracted data on the patients' age, sex, race, stage (T/N/M), survival time, and mortality. Patients were excluded if any of the data were missing or incomplete. We used an internal verification method to randomly divide our dataset into two cohorts (a training and a validation cohort, at a 7:3 ratio). In addition, we also extracted data on 130 patients from the Cancer Genome Atlas (TCGA) (https://portal.gdc.cancer.gov), which is a publicly available database, for use as an external validation cohort. The clinical information on BC is publicly available in the SEER and TCGA programs, so approval by a local ethics committee was not needed.

Statistical analysis

First, univariate Cox regression analysis including the selected parameters was carried out. Second, independent prognostic factors were identified in multivariate Cox proportional hazards regression analysis. Third, a nomogram based on these prognostic factors was constructed from the training cohort data. The Cox proportional hazards regression model was used to estimate the hazard ratio (HR), and the corresponding 95% confidence interval (CI), for each of the potential risk factors. The multivariate Cox regression model was constructed with a nomogram predicting 3- and 5-year BC survival. Validation of the nomogram was performed using the c-index, the area under the receiver operating characteristic curve (AUC), and the calibration curve. The nomogram-predicted survival probability was compared with the observed survival probability, calculated with the Kaplan-Meier (KM) method. The c-index was used to estimate the predictive accuracy and discrimination ability of each factor as well as of the overall nomogram: the higher the c-index, the better its prognostic accuracy. The receiver operating characteristic (ROC) curves are similar to the c-index, but are considered less suitable for use with censored data. The calibration curves were used to compare the nomogram-predicted 3- and 5-year survival with the observed 3- and 5-year survival. Patients were divided into high-risk and low-risk groups, with the cut-off value set at the median risk estimated in the KM analysis. To achieve this, we estimated an optimism-corrected calibration curve with a bootstrapped sample of 1000. Data extraction was performed using the SEER*Stat software version 8.3.5. Statistical analyses were performed using R software version 3.5.3 (http://www.r-project.org) with the rms, survival, and foreign packages. For all of the analyses, a two-tailed P value < .05 was considered statistically significant.

Patient characteristics

Data on a total of 398,173 patients were extracted from the SEER database according to the screening criteria. Subsequently, data from 332,202 patients were excluded, as they had not been assigned accurate parameters. The final sample included 65,971 patients in the entire cohort (Table 1).
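Before the cohort breakdown, here is a minimal sketch of the modelling workflow described under Statistical analysis above (using Python's lifelines package as a stand-in for the R/rms pipeline; the data, covariates and effect sizes are entirely synthetic, not SEER records). It fits a multivariate Cox model on a 7:3 split and reports the held-out c-index.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 2000                                    # synthetic stand-in for the SEER extract
df = pd.DataFrame({
    "age":      rng.integers(40, 90, n),
    "male":     rng.integers(0, 2, n),
    "node_met": rng.integers(0, 2, n),
    "dist_met": rng.binomial(1, 0.15, n),
})
# Survival times generated from an exponential model so the covariates really matter.
risk = 0.03 * (df["age"] - 60) + 0.4 * df["node_met"] + 1.2 * df["dist_met"]
df["time"] = rng.exponential(60 * np.exp(-risk))           # months
df["event"] = (rng.uniform(size=n) < 0.8).astype(int)      # ~20 % censored

train = df.sample(frac=0.7, random_state=1)                # 7:3 split as in the study
valid = df.drop(train.index)

cph = CoxPHFitter()
cph.fit(train, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])             # HRs and 95% CIs live in this table

# Discrimination (c-index) on the held-out 30 %.
c_val = concordance_index(valid["time"], -cph.predict_partial_hazard(valid), valid["event"])
print("validation c-index:", round(c_val, 3))

The nomogram itself is essentially a graphical rescaling of such fitted coefficients into per-variable points; the rms package used in the study automates that step.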
Subsequently, the cohort was divided: a total of 46,179 (70%) patients were included in the training cohort, while 19,792 (30%) patients were included in the internal validation cohort. The external validation cohort included 130 patients from the TCGA (Table 2). The clinicopathological characteristics of the training and validation cohorts are shown in Table 1. The median survival time for BC patients in the SEER cohort was 38 months, with 3- and 5-year survival rates of 52% and 13%, respectively (Fig. 1). Independent bladder cancer survival prognostic factors Univariate Cox regression analysis of the training cohort revealed a role of the following parameters in predicting patient survival: age, sex, race, stage_T1, stage_T2a, stage_T2b, stage_T2NOS, stage_T3a, stage_T3b, stage_T3NOS, stage_T4a, stage_Ta, stage_Tis, stage_TX, stage_N, and stage_M were associated with patients' prognosis. Among these factors, stage_T (c-index = 0.729) and age (c-index = 0.645) each had superior discrimination power in predicting BC survival compared with the other factors. The results of multivariate analyses, with stepwise models including risk factors identified as significant in univariate analysis, showed that age, sex, race, stage_T1, stage_T2a, stage_T2b, stage_T3a, stage_Ta, stage_Tis, stage_N, and stage_M were independent predictors of BC survival (Tables 3 and 4). These factors were subsequently included in the predictive model. In the analysis of the TCGA data used for external validation, only lymph node metastasis was associated with survival (Table 5). Prognostic nomogram for OS Predictive models with nomograms integrating all factors affecting survival in the training cohort are shown in Figure 2. Each prognostic parameter was assigned a score according to its prognostic value; the sum of the scores was used to predict 3- and 5-year survival. The total score for all the variables was converted into an estimate of the probability of death. The c-index of the prognostic nomogram for overall survival prediction was 0.7916 (95% CI, 0.79-0.80) in the training cohort, and 0.7917 (95% CI, 0.79-0.80) in the internal validation cohort; the two values were essentially identical, indicating the robustness of the model. The AUCs for 3- and 5-year survival were 0.82 and 0.813, respectively, in the training cohort (Fig. 3A, B). The AUCs combined with the c-index reflected good discrimination ability of the model. The calibration curves estimating survival probability at 3 and 5 years showed excellent agreement between the nomogram-predicted and observed values (Fig. 4A, B). When patients were divided into high-risk and low-risk groups, with median risk used as the cutoff point, the survival curves showed significant differences in prognosis. The 3- and 5-year survival rates for the low-risk group were 85% and 80%, respectively; the corresponding rates for the high-risk group were 45% and 35% (Fig. 5A). Age was strongly predictive of BC survival (Fig. 5B), as was race (Fig. 5C). The 3- and 5-year survival rates for men and women with BC were comparable (Fig. 5D). Lymph node metastasis, the number of lymph nodes affected, and the number of distant metastases were also key factors in 3- and 5-year survival outcomes.
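As a concrete illustration of the modelling workflow described in the Statistical analysis section (multivariate Cox regression, nomogram scoring, and c-index validation on a 70:30 split), a minimal sketch is given below. It uses the Python lifelines package purely for illustration; the study itself used R with the rms and survival packages, the file name and column names are hypothetical, and categorical covariates are assumed to be already dummy-encoded.

```python
# Minimal sketch, not the study's actual code: multivariate Cox model on a
# 70/30 split and Harrell's c-index on both cohorts. The file and column
# names are hypothetical; lifelines stands in for the R rms/survival packages.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index
from sklearn.model_selection import train_test_split

df = pd.read_csv("seer_bladder_cohort.csv")        # hypothetical SEER extract
covariates = ["age", "sex", "race", "stage_T", "stage_N", "stage_M"]
# categorical covariates assumed already encoded as 0/1 indicator columns

train, valid = train_test_split(df, test_size=0.3, random_state=42)

cph = CoxPHFitter()
cph.fit(train[covariates + ["survival_months", "death"]],
        duration_col="survival_months", event_col="death")
cph.print_summary()                                 # hazard ratios and 95% CIs

for name, cohort in [("training", train), ("validation", valid)]:
    risk = cph.predict_partial_hazard(cohort)
    # higher partial hazard should mean shorter survival, hence the minus sign
    c = concordance_index(cohort["survival_months"], -risk, cohort["death"])
    print(f"{name} c-index: {c:.4f}")
```

Calibration at 3 and 5 years could then be assessed by comparing nomogram-predicted probabilities with Kaplan-Meier estimates over bootstrap resamples, as described above.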
Validation of the nomogram's predictive accuracy In the internal validation cohort, the c-index of the prognostic nomogram for overall survival was 0.7917, which was slightly higher compared to the training cohort (0.7916), suggesting high discrimination ability of the model. The c-index of the external validation cohort was 0.724 (95% CI, 0.66-0.79). The AUCs, another indicator of discrimination ability, are shown in Fig. 6A and B. The calibration plots showed excellent agreement between the internal and external validation cohorts (Fig. 7A-D) (see Table 3, http://links.lww.com/MD/D309, Supplemental Content, which illustrates the c-index for the TNM-based model in the TCGA cohort; see Table 4, http://links.lww.com/MD/D309, Supplemental Content, which illustrates the c-index for the AJCC-TNM classification-based model in the TCGA cohort). Discussion Urothelial carcinoma of the urinary bladder is a heterogeneous disease with multiple possible treatment modalities and a wide spectrum of clinical outcomes. Nomograms are considered a reliable graphical calculation model that combines all risk factors for tumor occurrence and have been used to predict individual risks of particular events. [21,22] A tool that accurately evaluates the likelihood of metastatic progression, cancer-specific complications and mortality, as well as long-term quality of life, is important in patient counseling and decision-making. The model presented in this study was based on more than 60,000 patients from the SEER database and 130 patients from the TCGA database. In this retrospective study, we evaluated the clinicopathological parameters of BC and independent prognostic factors that might have affected survival of patients included in the SEER database. We showed that age, sex, race, stage_T1, stage_T2a, stage_T2b, stage_T3a, stage_Ta, stage_Tis, stage_N, and stage_M were independent prognostic factors. In addition, in multivariable analyses, we demonstrated that older age as well as more advanced T and N stages were independently associated with lower overall survival in BC. Given these independent prognostic factors, we constructed a novel nomogram that combined TNM staging with some clinical parameters. The nomogram illustrated that age, the T stage, and the M stage are the most significant contributors to prognosis, while gender and the N stage showed limited impact on outcomes. In addition, over the age of 60 years, for every 10 years of age, the risk of mortality increased multifold. The multivariate analysis revealed that age was the main determinant of a high-risk prognosis, followed by distant and lymph node metastases, and tumor invasion of the abdominal and pelvic wall. Nevertheless, age remains the most significant risk factor for BC mortality, as most BC cases occur among people over 60 years old. Other risk factors, such as smoking and exposure to some industrial chemicals, which have been shown to increase BC risk, should be accounted for in patient assessment. Our nomogram revealed an unexpected finding; in this study, some of the stage_T (T3b, T4a, T4b) sub-categories were not independent prognostic factors. The TNM staging systems are commonly used for predicting patient prognosis. However, even in patients with the same stage of BC, there might be significant differences in prognosis and survival. In the present study, we developed a nomogram that predicts overall survival. We observed that the c-index for the TNM-based model was superior to that for the AJCC-TNM classification (SEER training cohort: 0.7916 vs 0.739, P < .05; TCGA cohort: 0.724 vs 0.69, P < .05).
The findings presented in this study demonstrate the differences in the clinical value of the distinct risk evaluation systems, the TNM- and AJCC-TNM-based systems, in estimating overall survival in BC patients. Moreover, the nomogram based on the TNM staging system was more effective in predicting patients' survival. Meanwhile, the c-index for the TNM-based model in the internal validation cohort was slightly higher than for the training cohort (training cohort 0.7916 vs internal validation cohort 0.7917), indicating that the nomogram we constructed was robust and accurate. The c-index for the external cohort was 0.724 (95% CI, 0.66-0.79). Although the c-index for the external validation cohort (0.724) was not higher than the c-index for the training cohort, it was sufficient to confirm the stability and reliability of the prognostic model. Calibration plots demonstrated excellent agreement between the nomogram-predicted and observed survival, which confirmed the validity and reliability of the novel nomogram. In addition, the survival curves showed a median survival time of 38 months. Regarding specific TNM stages, we found that the Ta, Tis, and T1 stages were significant predictors of overall survival compared to the other T-staging indicators. In clinical practice, BC at any of these three stages is referred to as non-muscle-invasive bladder cancer (NMIBC). This suggests that NMIBC might be an independent prognostic factor of BC survival. In addition, we divided our sample into risk levels based on median risk values derived from the KM curve analysis. Such fine-level grouping can support clinicians in assessing distinct prognoses for seemingly similar patients. To the best of the authors' knowledge, this is the first nomogram derived based on a large dataset. The data used originated from the SEER database, ensuring the validity and reliability of our conclusions, as well as the internal and external validity of the nomogram, which offers an improvement in predictive accuracy over previously established models. To verify the value and prevent over-fitting of the present model, it was necessary to validate the novel nomogram. [23][24][25] We have verified the nomogram using the SEER and TCGA datasets; several similarities and differences between the TCGA and SEER data were observed. More than half of the patients with BC in the SEER database (60.2%) were over 70 years old, while there were slightly fewer patients in this age group in the TCGA database (49.9%). Men were three times as likely to have BC compared to women (SEER F:M ratio: 23.4% vs 76.5%; TCGA F:M ratio: 20.7% vs 79.2%). As previously reported, the incidence of bladder cancer in men is three to four times higher than in women; BC accounts for approximately 7% of all new cancer cases in men, but only about 2% of new cases in women. [26] This is consistent with our findings. Most patients included in the SEER database were BC stage_Ta (45.4%). In contrast, in the TCGA database, most patients had BC stage_T3b (29.2%). However, T-stage classification was the strongest predictor of survival in our study, as reported by a cohort study based on the international Cancer Database. [27] Given that this was a retrospective analysis, the time coverage of the SEER database might have resulted in selection bias.
We used the TCGA database for external validation, as an independent validation cohort is required to confirm the validity and reliability of a nomogram. The c-index of the external cohort was 0.724 (95% CI, 0.66-0.79), while the AUCs, another indicator of discrimination ability, were 0.815 and 0.803, for the 3-and 5-year survival, respectively. Moreover, calibration plots showed excellent agreement with the external validation cohort. Using external validation data provides additional evidence of the reliability, accuracy, and validity of our model. This notwithstanding, the TCGA database does not contain details on clinical diversity factors, such as tumor metastasis sites, tumor size, or surgical methods used during patient treatment. As we were not able to account for these potentially relevant factors, our model might be missing potentially relevant variables that could further improve its predictive value. The Surveillance, Epidemiology, and End Results (SEER) database has been chosen in the current analysis given its high quality as well as large suitable cohort sizes. However, the SEER database is a retrospectively rather than a prospectively maintained resource, which might have biased our analysis or affected it in ways that we were not able to adjust for. At the time of writing, there are prospective, randomized studies underway, which aim to verify the validity and reliability of models predicting overall cancer survival. These studies use sophisticated data modeling, accounting for biochemical and immunological factors, such as tumor markers and genetic variants. However, until these results become available, no additional prognostic factors can be included in the nomogram. It is worth noting that our model is based on the SEER database, which includes different races, further supporting the potential application of our nomogram to international populations. One of the key advantages of the proposed nomogram is the involvement of parameters that are easy to access and assess, meaning the model can be used in clinical practice without the burden of costs associated with complex tests, for example, tumor markers. Considering the size of the study population, these potential classification factors are unlikely to affect our conclusions. However, this nomogram was based on a retrospective study design; prospective clinical trials are needed to further validate this model. Despite these limitations, the models presented in this study might be suitable for clinical use, supporting individualized assessment of expected survival in BC patients. They might also be used as a layered tool for clinical research, and evidence for development of interventions aimed at improving the overall survival.
A NEW FAMILY OF HYBRID CONJUGATE GRADIENT METHOD FOR UNCONSTRAINED OPTIMIZATION AND ITS APPLICATION TO REGRESSION ANALYSIS . We know many conjugate gradient algorithms (CG) for solving unconstrained optimization problems. In this paper, based on the three famous Liu–Storey (LS), Fletcher–Reeves (FR) and Polak– Ribi´ere–Polyak (PRP) conjugate gradient methods, a new hybrid CG method is proposed. Furthermore, the search direction satisfies the sufficient descent condition independent of the line search. Likewise, we prove, under the strong Wolfe line search, the global convergence of the new method. In this respect, numerical experiments are performed and reported, which show that the proposed method is efficient and promising. In virtue of this, the application of the proposed method for solving regression models of COVID-19 is provided. Introduction Unconstrained optimization is a branch of optimization in which we minimize an objective function that depends on real variables with the total absence of restrictions on their values of those variables.Thus, we consider the general unconstrained optimization problems, as follows: where : R → R is the continuously differentiable function and its first derivative is represented by () = ∇ ().Though several robust optimization algorithms with rapid convergence have shown to be available to solve the above nonlinear optimization model, many researchers still refer to the conjugate gradient algorithm (CG) as it uses low memory and good convergence properties.Besides, this method was first established by Hestenes and Stiefel [18] to solve unconstrained linear optimization problems.Then, in 1964, Fletcher and Reeves [14] extended the form of the conjugate gradient method to solve unconstrained nonlinear minimization problems.As a consequence, the results of the expansion inspired researchers to suggest a new conjugate gradient method with good computational performance and, at the same time, good convergence properties [6].Generally, the iterates of the CG methods are usually determined through the following recursive computational scheme: and where is the current iteration, is the gradient of at the point , is the search direction, ∈ R is the conjugate parameter which characterizes different versions of the CG methods and > 0 is the step size many line search techniques can obtain.In the midst of which, exact line search, weak Wolfe line search, or strong Wolfe line search, but we use, in our research, strong Wolfe line search, which is defined by the following conditions [27,28]: where scalars and satisfy 0 < ≤ < 1. One of the essential classes of CG methods is the hybrid conjugate gradient algorithms.Further, the hybrid schemes have better computational performances and more robust convergence properties than conventional CG methods as they take advantage of the two parameters used to build them.For this reason, many researchers cared about hybrid or mixed conjugate gradient methods.Djordjević [10], proposed the following hybrid method: [5], proposed the following hybrid method: = DY + (1 − ) HS , Li and Sun [20], proposed the following hybrid method: Liu and Li [21], proposed the following hybrid method: In addition, Sabrina et al. [17] proposed a new hybrid CG method based on combination of FR, PRP and DY conjugate gradient algorithms in which , where, 0 < < 1. 
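Because many symbols in the formulas above were lost during text extraction, the standard quantities that this survey refers to are restated below. These are the textbook forms of the CG recursion, the strong Wolfe conditions, the LS, FR and PRP parameters, and the sufficient descent and Zoutendijk conditions used later in the convergence analysis; they are quoted from the general CG literature and are not a reconstruction of this paper's specific hybrid formulas.

```latex
% Standard conjugate gradient quantities (general forms, not paper-specific):
\begin{align*}
  & x_{k+1} = x_k + \alpha_k d_k, \qquad d_0 = -g_0, \qquad
    d_{k+1} = -g_{k+1} + \beta_k d_k, \qquad g_k = \nabla f(x_k),\\[4pt]
  &\text{strong Wolfe: } f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^{\top} d_k,
    \qquad |g(x_k + \alpha_k d_k)^{\top} d_k| \le \sigma |g_k^{\top} d_k|,
    \qquad 0 < \delta \le \sigma < 1,\\[4pt]
  &\beta_k^{LS} = \frac{g_{k+1}^{\top}(g_{k+1} - g_k)}{-d_k^{\top} g_k}, \qquad
   \beta_k^{FR} = \frac{\|g_{k+1}\|^2}{\|g_k\|^2}, \qquad
   \beta_k^{PRP} = \frac{g_{k+1}^{\top}(g_{k+1} - g_k)}{\|g_k\|^2},\\[4pt]
  &\text{sufficient descent: } g_k^{\top} d_k \le -c \|g_k\|^2 \ (c > 0), \qquad
   \text{Zoutendijk: } \sum_{k \ge 0} \frac{(g_k^{\top} d_k)^2}{\|d_k\|^2} < \infty .
\end{align*}
```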
Inspired by this research, we propose in this study a new hybrid CG method based on combination of LS, FR and PRP conjugate gradient algorithms for solving unconstrained optimization problems.In addition, we alike apply, in this study, the new method for solving a model of COVID-19 outbreak around the globe in which the data is taken from January to September 2020. In light of this, the paper is structured as follows.In Section 2, we will describe the proposed method with its corresponding algorithm and further establish the descent condition and convergence under inexact line search.In Section 3, we present the numerical experiments to show the efficiency of our new method and the application of regression models of COVID-19 using the new method is illustrated in Section 4. Finally, a brief conclusion is drawn in Section 5. Proposed method, algorithm In this paper, we propose another combination of LS, FR and PRP conjugate gradient algorithms.We use the following conjugate gradient parameter: As a consequence, the direction is given by: The parameters , in (6) satisfying 0 ≤ , ≤ 1 which will be determined in a particular way that will later be described.It should be noted that: -If = 1 and = 0, then New -If = 0 and 0 and PRP 𝐾 is a convex combination between LS and FR .See [11].Finally, if ∈]0, 1[, ∈]0, 1[ and 0 < + < 1, then we have a new hybrid CG method as a convex combination of three methods "LS, FR and PRP".From ( 6) and (7), it is clear that: To select the parameters and , we use the traditional conjugacy condition, i.e. ( +1 = 0).Thus, we have the following lemma.Lemma 1.If the conjugacy condition +1 = 0 is satisfied at every iteration, we get Proof.Multiplying (8) by from the left and using the conjugacy condition, we obtain Finally, after some algebra, we have: The parameter given by ( 9) can be outside the interval [0, 1].However, in order to have a real convex combination in (6), the following rule is considered: if (6).Therefore, under this rule for selection. The sufficient descent condition In this study, we will establish the sufficient descent of our new method, which plays a vital role in the global convergence analysis.Thus, we need the following assumptions: Assumption 2. In a neighborhood of the function is continuously differentiable and its gradient ∇ () is Lipschitz continuous, i.e. there exists a constant 0 < < ∞, such that: Under Assumptions 1 and 2 on , there exists a constant Γ ≥ 0, such that: for all ∈ [4]. The following theorem proves that the search direction obtained by the new method satisfies the sufficient descent condition. Theorem 1.Let the sequences { } and { } be generated by SCH method.Then, the search direction satisfies the sufficient descent condition: where > 0. Proof.The following proof is by induction.We show that search direction shall satisfy the sufficient descent condition holds for = 0, the proof is a trivial one, i.e. 0 = 0 so 0 0 = −‖ 0 ‖ 2 , and we conclude that sufficient descent condition holds for = 0. Next, we assume that (13) holds for some ≥ 1. Now we have: we can write It follows that Produces after some arrangements Multiplying ( 14) by +1 from the left, we get We have proven seven cases. Case 1.If = 1 and = 0, then we get We are going to prove that the sufficient descent condition holds for LS, we have In addition, we have where is positive, so • By using (16) in the above inequality, we get Therefore Case 2. 
If = 0 and = 1, the relation (15) becomes , under the strong Wolfe line search, the FR method satisfies the sufficient descent condition [16] where 2 > 0. Case 3. If = 0 and = 0, the relation (15) becomes We are going to prove that the sufficient descent condition holds for PRP when the strong Wolfe line search is used.Thus, we have: we know that ≤ where is positive, multiply both sides by (−1), we have Implies that so the equation ( 18) becomes where 3 > 0.Then, the proof is completed.Case 4. If = 0 and 0 < < 1, we get In [9], Djordjević proved that the sufficient descent condition holds where 4 > 0. Case 5.If = 0 and 0 < < 1.Then The sufficient descent condition is fulfilled and mentioned in [1], such that where 5 > 0. Case 6.If 1 − − = 0 and 0 < , < 1, then = 1 − and the relation (15) becomes Djordjević proved in [11] that LSFR +1 satisfies the sufficient descent condition for all , i.e. there exists a number 6 > 0, such that Case 7. [9] Now, we are going to prove that the direction satisfies the sufficient descent condition when (15), we get So, it is proved that +1 satisfied the sufficient descent condition. Convergence analysis The Zoutendijk condition [32] is often utilized to prove the global convergence of the CG method.Moreover, the following lemma shows that the Zoutendijk condition holds for the proposed method under the strong Wolfe conditions of formulas ( 4) and ( 5). Lemma 2. Suppose that Assumptions 1 and 2 hold.Consider common iterate (2), where is a descent direction and is determined by the strong Wolfe line search (4) and (5).Then, the Zoutendijk condition According to the Assumptions 1 and 2, the strong Wolfe conditions and (13), we conclude that , which is obtained in our new method is not equal to zero, i.e. there exists a constant > 0 such that The following theorem gives the global convergence of SCH method. Numerical analysis This section is devoted to testing the implementation of the new method.Based on this, we compare the computational performance of the proposed method with some known algorithms, such as the LS, FR, PRP conjugate gradient algorithms, and the new hybrid methods: FRPRPCC (Fletcher-Reeves-Polak-Ribiere-Polyakconjugate-condition) from [9] which we call DJA here, and hFRPRPDY from [17] which we call HYB here.For this comparisons, we consider 400 unconstrained optimization test problems from the CUTE library [7] along with other large-scale optimization problems presented in [3].Above and beyond, we selected 30 largescale unconstrained optimization problems in extended or generalized form.More to the point, each problem is tested for several variables: = 2, 4, . . ., 25000.Nonetheless, the analysis was based on the number of iterations and central processing unit CPU time.For the numerical tests, the iterations are terminated when ‖ ‖ ∞ < 10 −6 , at which ‖ • ‖ ∞ is the maximum absolute component of a vector, the parameters in the strong Wolfe line searches are chosen to be = 10 −3 and = 10 −4 and the hybridization parameter = 0.5.On the other hand, all programs are written in Matlab and compiler settings on the PC machine with Intel(R) Core(TM) i3-4030U CPU @1.90 GHz processor and 4GB RAM and Windows seven professional system. 
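Before turning to the numerical comparisons, the sketch below gives a rough illustration of how a direction built from a convex combination of the LS, FR and PRP parameters can be evaluated. The paper derives its weights from the conjugacy condition and restricts them so that the combination remains convex; those closed forms are not reproduced here, so the weights are treated as given inputs and the clipping rule shown is only a plausible stand-in.

```python
# Illustrative sketch only: a hybrid CG direction as a convex combination of
# the LS, FR and PRP parameters. The weights mu and lam are inputs here; the
# paper computes them from the conjugacy condition d_{k+1}^T y_k = 0 and
# projects them so that 0 <= mu, lam and mu + lam <= 1.
import numpy as np

def hybrid_direction(g_new, g_old, d_old, mu, lam):
    y = g_new - g_old
    beta_ls  = (g_new @ y) / (-(d_old @ g_old))
    beta_fr  = (g_new @ g_new) / (g_old @ g_old)
    beta_prp = (g_new @ y) / (g_old @ g_old)

    # keep a genuine convex combination (plausible stand-in for the paper's rule)
    mu, lam = max(mu, 0.0), max(lam, 0.0)
    if mu + lam > 1.0:
        mu, lam = mu / (mu + lam), lam / (mu + lam)

    beta = mu * beta_ls + lam * beta_fr + (1.0 - mu - lam) * beta_prp
    return -g_new + beta * d_old
```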
Comparisons of these methods are given on the following two sides.On the first side, for the th problem, let 1 and 2 be the optimal value found by 1 method and 2 method, respectively.We say that, for the particular problem th, the performance of the 1 method was better than the performance of the 2 method if and number of iterations, or CPU time of 1 method is less than those of 2 method, respectively.On the other side, to obtain complete comparisons in CPU time, we used the profile of Dolan and Moré [12] to evaluate and compare the performance of the set of methods on a test set .Assume that consists of methods, consists of problems.For each problem, ∈ and method ∈ denote , be the computing time required to solve problem by method .The comparison between different methods is based on the performance ratio defined by , := , | min ∈ , .In consequence, the performance profile is given by where : R → [0, 1] and 1 ≤ ≤ .The function is the distribution function for the performance ratio.Moreover, for a method is a non-decreasing, piecewise constant function, continuous from the right at each breakpoint.Note that ( ) is the probability for method ∈ that log 2 , is within a factor ∈ R + of the best possible ratio.Obviously, when takes a certain value, a method with a high value of ( ) is preferable or represents the best method. In the first set of numerical experiments, we compare the performance of our new algorithm to the LS, FR and PRP conjugate gradient algorithms.Figures 1 and 2 represent the performance profiles of the new method versus LS, FR and PRP based on the CPU time and number of iterations, respectively.The two figures show that the new method is superior to the other conjugate gradient methods on the testing problems. In the second set of numerical experiments, we present a comparison with the new hybrid method FRPRPCC from [9], which we call DJA here, this comparison shall be under the same above conditions.Moreover, Figures 3 and 4 represent the performance profiles of SCH versus DJA based on the number of iterations and CPU time. The two figures show that our method performs better than the DJA method for the number of iterations and CPU time.Simultaneously, according to the performance of DJA method in [9], we can conclude that our method is also better than CCOMB from [4], HYBRID from [2], the algorithm of Touatti-Ahmed and Storey (ToAhS) from [26], the algorithm of Hu and Storey (HuS) from [19] and the algorithm (GN) of Gilbert and Nocedal from [15].The third set of numerical experiments compares our new method to the hybrid conjugate gradient algorithm hFRPRPDY from [17], which we call HYB here.Besides, Figures 5 and 6 represent the performance profiles of the new method versus HYB based on the CPU time and number of iterations, respectively. From these figures illustrated above, we notice that the new algorithm behaves similarly to or better than the HYB method inspired by it. 
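The Dolan and Moré performance profile used for these comparisons is straightforward to compute; a minimal sketch with synthetic timing data (the study's own timings are not reproduced here) is as follows.

```python
# Sketch of a Dolan-More performance profile: for each solver s and problem p,
# the ratio r_{p,s} = t_{p,s} / min_s t_{p,s} is formed and rho_s(tau) is the
# fraction of problems with r_{p,s} <= tau. Timing data below are synthetic.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
times = {"SCH": rng.uniform(1.0, 5.0, 30),      # synthetic CPU times, 30 problems
         "DJA": rng.uniform(1.0, 6.0, 30)}      # failures could be set to np.inf

T = np.vstack(list(times.values()))             # shape (n_solvers, n_problems)
ratios = T / T.min(axis=0)                      # r_{p,s}

taus = np.linspace(1.0, 8.0, 200)
for name, r in zip(times, ratios):
    rho = [(r <= tau).mean() for tau in taus]
    plt.step(np.log2(taus), rho, where="post", label=name)

plt.xlabel(r"$\log_2 \tau$")
plt.ylabel(r"$\rho_s(\tau)$")
plt.legend()
plt.show()
```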
Application of the New CG method to regression analysis Novel coronavirus-19 (COVID-19) is a new chain of corona group viruses that was not recognized in human history earlier than December 2019.It was first discovered in Wuhan, China [30] and has spread to various urban areas in China as well as approximately 196 different countries of the world.It has since been declared an outbreak by the World Health Organization (WHO).It is difficult to take a single point of view on this virus's origin.It can be due to a seafood market exchange, the people's migration from one location to another, or the transmission from animals to humans.Most people infected by the virus will develop mild to moderate symptoms, such as mild fever, cold, and difficulty breathing, and recover without special treatment.According to data reported by the WHO, on the 20th of October 2020, the laboratory declared that the number of confirmed cases is over 40 million, with more than one million deaths recorded in 215 regions and countries around the world since the disease was first reported in Wuhan. Mathematical modeling plays a vital role in describing the epidemic of infectious diseases and thus overcoming the same at an early stage.Recently, numerous studies modeled various aspects of the coronavirus outbreak, and the application of numerical methods on some COVID-19 models was also studied [25,31].Besides, this paper aims to investigate the performance of the proposed method on a parameterized COVID-19 regression model.For deriving the COVID-19 regression model, the study will consider the total confirmed cases of the infection from January 2020 until September 2020.Subsequently, the obtained data would be transformed into an unconstrained optimization problem, which would later be solved using the proposed method. Regression analysis is one of the most effective statistical tools for modeling problems in the applied sciences, physical sciences, management and many others.Based on the previous description, we can describe regression analysis as a statistical technique used to estimate the relationship between a dependent variable and one or more independent variables.In virtue of this, the function of regression analysis is defined as follows: where , = 1, 2, . . ., , > 0 is the predictor, is the response variable, and is the error.For any problem related to regression analysis, the linear regression function can be derived by computing such that: with 0 , . . ., representing the regression parameters, these parameters are estimated to minimize the error value.This scheme is often used when the relationship between and is approximated by a straight line.However, these cases rarely occur because most problems are often nonlinear.Therefore, the nonlinear regression scheme is frequently used.In this paper, we considered the nonlinear regression one. To derive the approximate function, we consider the data from the global confirmed cases of COVID-19 from January to September 2020.Table 1 illustrates the process description from the statistics obtained from the World Health Organization [29].We have data for nine months (Jan-Sept), the months of data collection would be denoted by -variable, and the -variable would denote the confirmed cases corresponding to these months.However, the data for eight months (Jan-Aug) would be considered for fitting the data, while the data for September 2020 would be reserved for error analysis. 
From the above data, the approximate function for the nonlinear least square method is defined by: The above function (30) will be utilized when approximating the data values based on data values from January to August.Let denote the number of months and be the confirmed cases for that month.Based on this information, the above least squares method ( 30) is transformed into the following unconstrained minimization problems. The data of the first eight months from Table 1 are utilized to formulate the nonlinear quadratic model for the least square method, which is further used to derive the unconstrained optimization model.Based on the above discussion, it is evident that there exist some parabolic relations between the data and the value of with the regression function defined by (30) and the regression parameters 0 , 1 and 2 min Subsequently, using the data of Table 1, we transform (32) to obtain our nonlinear quadratic unconstrained minimization model as follows: The above nonlinear quadratic model was constructed using data from January to August.Meanwhile, the data for September is reserved for relative error analysis of the predicted data.At present, we can apply the SCH, LS, FR and PRP methods for solving the model (33) under the strong Wolfe line search conditions (4) and (5), we obtain the performance results based on iteration numbers and CPU time illustrated in Table 2. To overcome the difficulty of computing the values of 0 , 1 , 2 using matrix inverse, we implement the previously mentioned methods using different initial points.We terminate the computation : -The defined stopping criteria are satisfied based on the value defined for each function. -The method is unable to solve the model. Trend line method In this subsection, we aim to estimate the confirmed cases of COVID-19 for eight (8) months using the proposed SCH, some known CG, and least square methods.From the actual data obtained from Table 2, we use Microsoft Excel software to plot the trend line as demonstrated in Figure 7. Furthermore, to show the efficiency of the proposed method, we compare the approximation functions of the SCH method with the functions of the LS, FR, PRP and trend line methods.Based on the results illustrated in Table 2, it is evident that the suggested SCH method is faster and more efficient compared to the used methods.On the other hand, from the plot, it is clear that the trend line equation obtained is a nonlinear quadratic equation.In light of this, the ideal purpose of the regression analysis is to estimate 0 , 1 , . . ., where the error is minimized.From the above discussion, we can conclude that the SCH method can be used as an alternative to the trend line method and the least squares method, which implies that the method is applicable to real-world situations. 
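To make the transformation of the quadratic regression model into an unconstrained minimization problem explicit, the sketch below fits y ≈ a0 + a1 x + a2 x^2 by minimizing the sum of squared residuals. The monthly counts are placeholders rather than the WHO figures of Table 1, and SciPy's built-in CG minimizer is used only as a stand-in for the proposed SCH method.

```python
# Sketch only: quadratic least-squares fit of cumulative monthly counts posed
# as an unconstrained minimization problem. Counts are placeholders, not the
# WHO data of Table 1; scipy's CG solver stands in for the SCH method.
import numpy as np
from scipy.optimize import minimize

x = np.arange(1.0, 9.0)                                        # months Jan..Aug 2020
y = np.array([1e4, 9e4, 8e5, 3e6, 6e6, 1.1e7, 1.7e7, 2.5e7])   # placeholder counts

J = np.column_stack([np.ones_like(x), x, x**2])                # design matrix

def f(a):                                # objective: sum of squared residuals
    r = J @ a - y
    return float(r @ r)

def grad(a):                             # analytic gradient of the objective
    return 2.0 * J.T @ (J @ a - y)

res = minimize(f, x0=np.zeros(3), jac=grad, method="CG")
a0, a1, a2 = res.x
september_prediction = a0 + a1 * 9 + a2 * 81    # month reserved for error analysis
print(res.x, september_prediction)
```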
Conclusion In light of the facts above, hybrid CG methods are usually obtained from the classical CG methods by integrating their advantages. In this paper, we proposed a new hybrid conjugate gradient algorithm in which the conjugate gradient parameter is computed as a convex combination of the LS, FR and PRP parameters. Under suitable conditions, we showed that the proposed algorithm enjoys the sufficient descent condition and converges globally under the strong Wolfe line search. Further, numerical experiments were carried out to illustrate the performance of the proposed method. The results show that the new method is more effective and has a better convergence rate than the LS, FR, PRP, FRPRPCC and hFRPRPDY methods. More to the point, our proposed method can solve the COVID-19 case model. Figure 1. Performance profile based on the iteration number: SCH versus the LS, FR and PRP conjugate gradient algorithms. Figure 2. Performance profile based on the CPU time: SCH versus the LS, FR and PRP conjugate gradient algorithms. Figure 4. Performance profile based on the CPU time: SCH versus hybrid FRPRPCC (DJA). Figure 5. Performance profile based on the iteration number: SCH versus hybrid hFRPRPDY (HYB). Figure 6. Performance profile based on the CPU time: SCH versus hybrid hFRPRPDY (HYB). Table 2. Test results for optimization of the quadratic model for SCH, LS, FR and PRP.
Diversity and Seasonal Variation of Endophytic Fungi Isolated from Three Conifers in Mt. Taehwa, Korea The needled leaves of three conifer species were collected in Mt. Taehwa during different seasons of the year. Total 59 isolates and 19 species of endophytic fungi were isolated from the leaves and identified using morphological and molecular characteristics. As a result, Shannon index was different in its host plant; Larix kaempferi had a highest value of species diversity. According to the sampling season, 9 species of 19 species were isolated during fall season. The results suggest that the existing of host plant and sampling season are major factors of distribution of endophytic fungi. Coniferous forests are considered to be in decline worldwide. Several factors are hypothesized to contribute to this decline, such as the geographical isolation of conifer species distribution areas, the destruction of conifer forests by humans, and polluted air conditions due to acid rain and climate change [1,2]. Moreover, the decrease in conifer forest cover represents a global situation, with these trees and associated ecosystems being extremely important from the perspective of biological resources. Endophytes are fungi that grow in the living tissues of plants, without causing any apparent disease [3]. Currently, our understanding about these organisms remains limited as it is difficult for researchers to isolate these organisms from host plants. Arnold et al. [4] analyzed endophytic fungi from Pinus taeda L. by using both the culture media method and PCR-cloning, with the latter technique proving more powerful. The results indicated that many species of endophytic fungi could not be isolated from their host plant; therefore, greater effort is required to detect their presence in plants, in parallel with a more taxonomical approach to validate their existence [4]. Only a few studies on the endophytic fungi of plants exist in Korea, several of which have been conducted on woody plants, including Lindera obtusiloba [5], Pinus densiflora [6,7], and Pinus koraiensis [8]. In this study, we isolated endophytic fungi from 3 species of conifers growing on Mt. Taehwa in Korea, in addition to analyzing the biodiversity and seasonal variation in the numbers of these fungi. . Each tree was growing at a distance of at least 100 m from the other sampled trees between 400~800 m in altitude, and all trees were sampled using a GPS, an aluminium tag, and a tagging tape: April (spring), July (summer), and November (autumn). MATERIALS AND METHODS Isolation of endophytic fungi. Samples were treated within 48 hr after collection. All the 2-yr-old leaves were washed with tap water and then placed for 3 min in 1% NaOCl solution, 2 min in 70% ethanol, and finally washed twice with distilled water [9]. These surface-sterilized leaves were cut into 4 segments that were 5 mm in length and then placed into 3 types of culture medium: potato DNA extraction and data analysis. All isolates were grouped into morphotypes on the colony shape, height, and color of the aerial hyphae, in addition to the base color, growth rate, margin characteristics, surface texture, and depth of growth into the medium. One or 2 isolates of each morphotype were selected for molecular identification. DNA was extracted according to the protocol of the DNeasy Plant Mini Kit (Qiagen, Hilden, Germany), and PCR was performed to amplify the internal transcribed spacer (ITS) region, including 5.8S rDNA, by using the primers ITS1F and ITS4 [10] C for 5 min. 
The PCR products were sequenced and compared with reference sequences on NCBI by using BLAST. MEGA5 [11] was used to construct the phylogenetic tree with neighbor-joining analysis. RESULTS AND DISCUSSION A total of 59 morphotypes were isolated from the host plants (Table 1). Only morphotypes with a ≥ 97% similarity value [12] were used for analysis (Fig. 1). It was not possible to identify Bionectria spp. and Pestalotiopsis spp. to the species level based on sequence. Depending on the host plant, 6 species were identified among the 12 morphotypes from the juniper trees, 8 species among 21 morphotypes from the Japanese larch, and 11 species among 26 morphotypes from the pine trees. One species was isolated from both the juniper tree and the Japanese larch, 3 species were isolated from both the Japanese larch and the pine tree, and 2 species were isolated from both the pine tree and the juniper tree. However, no species was isolated from all 3 host plant species. In addition, N. diffusa was the most abundant species in the juniper tree; P. papaya, in the Japanese larch; and L. pinastri, in the pine tree (Table 1). The Shannon index (H') [13] was used to assess the species diversity of the endophytic fungi (Table 1). In the juniper tree, the total H' was 1.47, and the highest H' (1.00) was observed in April. In the Japanese larch, the H' was 1.74, and the highest H' (1.33) was observed in November. In the pine tree, H' was 1.58, and the highest H' (1.43) was observed in November. The Japanese larch showed the highest species diversity. Depending on the sampling season, 7 species were identified among the 9 morphotypes in April, 8 species among 21 morphotypes in July, and 9 species among 29 morphotypes in November. One species was isolated during both April and July, 4 species were isolated during both July and November, and 1 species was isolated during both April and November (Table 1). More than 600,000 species of endophytic fungi are theorized to exist worldwide [14,15]. These results also indicate that the number of morphotypes belonging to endophytic fungi increases across the seasons. Therefore, it is likely that a combination of these factors enhanced the species diversity of the Japanese larch. J. rigida is mainly distributed in lower-altitude forests, while L. kaempferi is primarily distributed in middle-altitude forests. However, P. densiflora is found at higher altitudes, where the forest conditions are cooler and drier in the Korean Peninsula. Thus, most endophytic fungi were obtained from lower to middle altitudes, with only one species of endophytic fungi being discovered at an altitude above 800 m. This observation indicates that endophyte distribution is influenced by the distribution of host plants. Surveys for endophytic fungi have been conducted at all altitudes, with specimens being found at all sites; however, endophytic fungi have an ability to adapt to variations in abiotic and biotic conditions along the altitudinal gradient [16]. Thus, further study of host specificity and adaptation ability is required, which is only possible through the collection and study of endophytes as ecological components and biological resources.
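The Shannon index reported above is computed directly from the counts of isolates per species; a minimal sketch, with illustrative counts rather than the study's data, is given below.

```python
# Shannon diversity index H' = -sum(p_i * ln p_i), computed from counts of
# isolates per endophyte species. The counts below are illustrative only.
import numpy as np

def shannon_index(counts):
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]                        # drop species with zero isolates
    return float(-(p * np.log(p)).sum())

isolates_per_species = [6, 4, 3, 2, 1]  # one host tree, illustrative counts
print(round(shannon_index(isolates_per_species), 2))
```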
Real-Time Fluorescence Measurements of ROS and [Ca2+] in Ischemic / Reperfused Rat Hearts: Detectable Increases Occur only after Mitochondrial Pore Opening and Are Attenuated by Ischemic Preconditioning Mitochondrial permeability transition pore (mPTP) opening is critical for ischemia / reperfusion (I/R) injury and is associated with increased [Ca2+] and reactive oxygen species (ROS). Here we employ surface fluorescence to establish the temporal sequence of these events in beating perfused hearts subject to global I/R. A bespoke fluorimeter was used to synchronously monitor surface fluorescence and reflectance of Langendorff-perfused rat hearts at multiple wavelengths, with simultaneous measurements of hemodynamic function. Potential interference by motion artefacts and internal filtering was assessed and minimised. Re-oxidation of NAD(P)H and flavoproteins on reperfusion (detected using autofluorescence) was rapid (t0.5 < 15 s) and significantly slower following ischemic preconditioning (IP). This argues against superoxide production from reduced Complex 1 being a critical mediator of initial mPTP opening during early reperfusion. Furthermore, MitoPY1 (a mitochondria-targeted H2O2-sensitive fluorescent probe) and aconitase activity measurements failed to detect matrix ROS increases during early reperfusion. However, two different fluorescent cytosolic ROS probes did detect ROS increases after 2–3 min of reperfusion, which was shown to be after initiation of mPTP opening. Cyclosporin A (CsA) and IP attenuated these responses and reduced infarct size. [Ca2+]i (monitored with Indo-1) increased progressively during ischemia, but dropped rapidly within 90 s of reperfusion when total mitochondrial [Ca2+] was shown to be increased. These early changes in [Ca2+] were not attenuated by IP, but substantial [Ca2+] increases were observed after 2–3 min reperfusion and these were prevented by both IP and CsA. Our data suggest that the major increases in ROS and [Ca2+] detected later in reperfusion are secondary to mPTP opening. If earlier IP-sensitive changes occur that might trigger initial mPTP opening they are below our limit of detection. Rather, we suggest that IP may inhibit initial mPTP opening by alternative mechanisms such as prevention of hexokinase 2 dissociation from mitochondria during ischemia. Introduction Reperfusion of the heart following prolonged ischemia causes irreversible damage through myocyte death and resulting infarct formation. Critical to this process is opening of the mitochondrial permeability transition pore (mPTP) that occurs after about 2 min of reperfusion, when the intracellular pH (pHi) returns to preischemic values from the low ischemic values (<6.5) that inhibit mPTP opening [1,2]. MPTP opening compromises cellular bioenergetics, impairing restoration of ionic homeostasis, including [Ca 2+ ], while also increasing reactive oxygen species (ROS) production. These effects together produce further mPTP opening and bioenergetic compromise leading to a spreading wave of necrotic cell death that constitutes the infarct [3,4]. Cardioprotection is afforded by pharmacological inhibition of mPTP opening by Cyclosporin A (CsA) [5], Sanglifehrin A [6] or cinnamic anilides [7], as well as by regimes such as ischemic preconditioning (IP) and post-conditioning which also act, at least in part, by preventing mPTP opening [3,4]. 
Although the exact molecular composition of the mPTP remains uncertain [8][9][10], it is well established that its opening is triggered by elevated matrix [Ca 2+ ], while the sensitivity to [Ca 2 + ] is greatly increased by oxidative stress, elevated phosphate and decreased matrix adenine nucleotides [3]. These are all conditions associated with ischemia / reperfusion (I/R), and we, like many others, have proposed that the main triggers for mPTP opening during early reperfusion are an increased matrix [Ca 2+ ] together with mitochondrial ROS production [3,8]. Furthermore, it has been proposed that IP reduces or prevents mPTP opening, and thus I/R injury, by attenuating these triggers [3,11]. While evidence in support of this view has come from studies on isolated cardiac myocytes subject to simulated I/R [12,13], we were concerned that the conditions experienced by isolated myocytes subject to simulated I/R do not adequately reproduce those occurring in the intact heart subject to I/R. In particular, the beating perfused heart exhibits much higher metabolic turnover and calcium cycling rates than isolated cardiac myoctes which also cannot mimic the complex cell / cell interactions occurring in the whole heart. Hence we wished to monitor ROS and [Ca 2+ ] dynamics in the perfused beating heart subject to I/R. Others have monitored dihydroethidium (DHE) surface fluorescence of the perfused heart to detect ROS production during I/R [14,15]. However, major concerns have been expressed over the use of DHE as a ROS probe, both in terms of whether the fluorescent species monitored really detects superoxide [16] and its sensitivity to changes in mitochondrial membrane potential [17]. The luminescent probe lucigenin has also been employed which did detect a large increase in ROS during reperfusion, but this only reached a peak after about 5 min of reperfusion [17] which is after mPTP opening [1,2]. Others have used multi-photon microscopy to measure changes of [Ca 2+ ] and ROS in the perfused heart [18], but in these studies pharmacological inhibition of contractile function was required. This would have greatly reduced ATP turnover and thus the energetic demand on mitochondria which may well have caused substantial modulation of the kinetics of mPTP opening during reperfusion when compared to the beating heart. Although sophisticated gating mechanisms can be employed in the beating heart to correct for motion artefacts [19], such an approach would only report changes in a few cardiomyoctes and thus might not reflect the behaviour of the majority. Others have used mass spectrometry probes or protein carbonylation to detect increases in mitochondrial superoxide and other ROS species during reperfusion [11,20]. However, these techniques required the hearts to be freeze-clamped at a specific time and, unlike continuous fluorescent measurements, do not have the time resolution to detect whether this ROS increase precedes or follows mPTP opening. Here, we report data from experiments in which we continuously monitored surface fluorescence of the beating heart at multiple excitation and emission wavelengths using a bespoke fluorimeter that also monitors surface reflectance. We have measured endogenous NAD(P)H and flavoprotein fluorescence as well as intracellular [Ca 2+ ] with Indo-1 and ROS with both cytosolic and mitochondria-targeted ROS probes. 
In parallel we measured aconitase activity and total [Ca 2+ ] in mitochondria isolated after 90 s reperfusion as additional indicators of mitochondrial ROS and [Ca 2+ ] changes. Contrary to our expectations, we were unable to detect a significant increases in ROS until after about 1.5-2 min of reperfusion, which was after mPTP opening had occurred. Furthermore, the progressive increase in ROS seen after this time point in reperfusion was attenuated by inhibiting mPTP opening with Cyclosporin A (CsA) or IP, both of which reduced infarct size. Measurement of [Ca 2+ ] i with Indo-1 showed it to increase progressively during ischemia, and drop rapidly within 90 s of reperfusion when total mitochondrial [Ca 2+ ] was increased. While these early changes in [Ca 2+ ] were not attenuated by IP, a substantial [Ca 2+ ] increase was observed after 2-3 min reperfusion that was prevented by both IP and CsA and thus likely to be secondary to mPTP opening. We conclude that any early IP-sensitive changes in [Ca 2+ ] and ROS that might trigger initial mPTP opening are too small to be detected. Rather, we propose that IP may inhibit initial mPTP opening by alternative mechanisms such as prevention of hexokinase 2 dissociation from mitochondria during ischemia [21,22]. Subsequent increases in ROS and [Ca 2+ ] follow this initial mPTP opening and so are also attenuated by IP. This may prevent a progressive cycle of mPTP opening, ROS production and calcium overload that would lead to infarct formation. Information. Measurement of heart succinate content was performed by freeze-clamping the heart followed by acidic extraction of metabolites as described previously [21] and enzymatic assay of succinate [24]. Whole heart surface fluorescence measurements. Epicardial fluorescence was monitored using a spinning-wheel fluorimeter designed and custom-built by the authors (PP and APH) in collaboration with Cairn Research Ltd, Faversham, Kent, ME13 8UP. For these experiments, a modified perfusion apparatus was employed that was contained within a light-proof box accommodating the optic fiber from the fluorimeter. This optic fibre was placed at 2-3 mm distance from the left ventricular wall of the heart which was maintained at 37˚C in a water jacketed and humidified Plexiglas perfusion chamber throughout the perfusion. The equipment is shown in Fig 2A and 2B and illustrated schematically in Fig 2C. Details of the filters used are given in S1 Table which also includes photomultiplier voltage settings. A more extensive description of the equipment and techniques used for surface fluorescence measurements is provided in Supporting Information. Autofluorescence of NAD(P)H and flavoproteins. These were measured using excitation at 340 and 460 nm and emission 485 and 535 nm, respectively. For measurements of the rapid changes of autofluorescence seen on reperfusion data were collected continuously at 10 Hz for the last 15 s of ischemia and the first 2 min of reperfusion. Fig 1A and 1B, 10 nmol/L insulin was present in the KHB perfusion medium during dye loading to maintain glycogen levels over the extended pre-ischemic perfusion period. In its absence glycogen became depleted and this gave significant cardioprotection [21,25]. For PO1 the dye was not pre-loaded, but present throughout the perfusion ( Fig 1C) and insulin was not required. 
In all experiments employing probes loaded in their acetoxymethyl ester (AM) form, the KHB was also supplemented with showing the water jacketed perfusion chamber containing a Langendorff-perfused rat heart. B, Zoom in of the perfusion chamber to show that the light is shining on the surface of the left ventricular free wall and that motion artefact was minimized by placing hearts' apex in a small retaining cup. C, Schematic of the optics. Insert shows alternative configuration for measuring surface fluorescence spectra by switching between the 440 DCLP dichroic mirror for filter based measurements and a 100% reflecting mirror for fluorescence spectrum measurements. 0.1 mmol/L probenecid to limit dye leakage while Pluronic F-127 was added to the stock solutions of the dyes to aid their solubilisation. Dye loading started about 7-8 min after the heart cannulation and was preceded by measurement of background fluorescence for 3-5 min. ROS and [Ca Cardiomyocyte isolation and confocal imaging of MitoPy1. Rat ventricular myocytes were isolated from male Wistar Rats (250g) with a modified Langendorff method as described previously [26]. Cells were immobilised on laminin-coated round glass coverslips, cultured in 6-well plates in medium M199 and used immediately following isolation or after an overnight incubation. Myocytes were washed into HEPES-bufffered saline solution containing (in mmol/L): NaCl 130, HEPES 25, KCl 5, glucose 10, MgCl 2 1, and CaCl 2 1.8 and loaded with MitoTracker Deep Red (200 nmol/L) in the presence or absence of 14 μmol/L MitoPY1 for 30 min at 37˚C. Confocal microscopy was carried out on a Leica SP-5 microscope (63.0 x1.40 Oil) at 37˚C in the Woolfson Bio-imaging Centre at the University of Bristol (http://www. bristol.ac.uk/wolfson-bioimaging/). After loading, 2.5 ml HEPES-bufffered saline solution was added to the cells. H 2 O 2 was added directly to the coverslips (final concentration 100 μmol/L), incubated for 10 minutes and the cells were imaged again. MitoPY1 was excited by Argon laser (514 nm line, 100%) and its fluorescence was collected at 520-570 nm. MitoTracker Deep Red was excited by HeNe laser (633 nm line, 4.1%) and its fluorescence was collected at 640-700 nm. Measurement of mitochondrial enzymes activities and total calcium content. Mitochondria were isolated from rat hearts as described above using a modified isolation buffer lacking EGTA but enriched with 0.2 μmol/L of Cyclosporin A, 0.5 μmol/L of ruthenium red and 2 mg/mL of free fatty acid BSA. At the end of the perfusion protocol (S1 Fig), the heart was flushed with 12 mL of modified isolation buffer at 4˚C prior to mitochondrial isolation. Mitochondria were divided into aliquots and immediately frozen in liquid nitrogen for further biochemical measurements. Citrate synthase activity was assessed as previously described [21] while aconitase activity was determined at 37˚C using a modification of that described by Chouchani et al. [20]. The assay buffer used was: KH 2 PO 4 50 mmol/L; Triton X-100 0.1% (w/v); NADP + 0.2 mmol/L; MnCl 2 0.6 mmol/L, pH 7.4 with KOH. Purified mitochondria were incubated in 1 mL of the assay buffer supplemented with 2 units/mL of NADP-dependent isocitrate dehydrogenase (Leebio products). The reaction was started by the addition of 5 mmol/L of sodium-citrate (pH 7.4) and the production of NADPH followed at 340 nm. In the absence of citrate no NADPH production was detected. 
To evaluate aconitase sensitivity to H 2 O 2 in the different groups mitochondria were incubated for 5 min on ice in the presence of 200 μmol/L of H 2 O 2 . The assay was then performed as described above (final H 2 O 2 concentration in the cuvette 20 μmol/L). For determination of total calcium content, 200 μg of the mitochondria were mixed with an equal volume of 0.6 mol/L HCl. The mixture was then sonicated two times for 10 s at full power using a Fisherbrand FB15062 sonicator interspersed with a 10 s rest interval. The final homogenate was used to evaluate mitochondrial total calcium content (expressed as ngatoms/ mg protein) using the α-cresolphthalein complexone assay (Cayman Chemical), according to the manufacturer's instructions. Preliminary experiments were performed to confirm that 15 μL of a mixture containing an equal volume of modified ISA and 0.6 mol/L HCl did not alter the pH of the assay buffer (210 μL final volume in the well). Measurement of mPTP opening in hearts using mitochondrial calcein retention. After the stabilization period, Langendorff-perfused hearts were loaded for 10 min with 0.4 μmol/L final of calcein-AM (see S1 Fig). Purified mitochondria (300 to 500 μg) were resuspended in 2 ml final of assay buffer (KH 2 PO 4 33 mmol/L, Triton X-100 0.1% (w/v), pH 7.2) in order to determine the calcein emission spectra using a Xenius fluorimeter (Safas, Monaco) with the following settings: PMT = 900 volts, bandwidth = 10 nm, Ex 493 nm, Em scanned from 505 to 540 nm with a 0.2 nm increment. The maximum emission obtained in the different populations of mitochondria were calibrated using a standard curve obtained using known quantities of calcein. Mitochondrial calcein content was expressed as nmol per mg protein. Background fluorescence was determined in parallel experiments with mitochondria isolated from hearts (n = 3) subjected to a mock loading (DMSO vehicle only). Statistical analysis Data are presented as means ± SEM for the number of separate heart perfusions or mitochondrial preparations indicated. Statistical analysis was performed using the relevant functions in Excel, GraphPad Prism 6 or SPSS 17.0. For real-time fluorescence traces, a 2-tailed t-test was employed to determine the significance of differences between control and IP or drug-treated hearts. An F-test was used first to check the variances and if unequal a heteroscedastic t-test was used. For analysis of differences in infarct size and hemodynamic function between groups a 1-way ANOVA followed by Tukey's multiple comparisons test was used. Validation of the surface fluorescence measurements We first validated the use of our bespoke surface fluorescence apparatus by monitoring the changes in NAD(P)H and flavoprotein autofluorescence during ischemia and reperfusion ( Fig 3A). Upon ischemia, we observed the expected rapid change in reduction state of both NAD (P)H (increase in 340 ex / 485 em fluorescence) and flavoproteins (decrease in 460 ex / 535 em fluorescence) and their re-oxidation during reperfusion. An important feature of our optical configuration is that we can measure the reflectance of light at the wavelengths used for both excitation and emission. These signals, included in Fig 3A, will change in response to both the geometry of the heart surface and the internal filtering caused by changes in the absorbance of endogenous chromophores such as myoglobin and cytochromes. 
However, changes in light scattering dominate because the optic fibre is held 2-3 mm from the surface of the heart to avoid contact artefacts (see Fig 2B). This is apparent in Fig 3A, where the reflectance signals at all wavelengths decrease during ischemic contracture (increase in ventricular pressure), which causes the surface geometry of the heart to change. Importantly, contracture caused no significant change in the fluorescence signals, implying that changes in surface geometry are not significantly affecting our fluorescence measurements. We also carried out experiments with rapid data collection to confirm that we could detect beat-by-beat changes in cytosolic [Ca2+], determined using Indo-1 fluorescence (Fig 3B). By using a ratiometric dual-emission dye, movement artefacts are minimised because detection of emitted light at both wavelengths is simultaneous, and both are subjected to the same changes in optical geometry caused by the heart beat. Transient changes in the 405/485 nm fluorescence emission ratio of Indo-1 were detected that exactly followed changes in developed pressure monitored using a latex balloon inserted in the left ventricle. As noted above, another potential artefact that might interfere with fluorescent measurements during ischemia and reperfusion is a change in the internal filtering exerted by myoglobin and cytochrome absorbance as they undergo oxygenation / de-oxygenation and oxidation / reduction [17,29]. The most intense absorbance by oxymyoglobin and deoxymyoglobin (absorbance maxima at 420 and 440 nm respectively [30]) occurs outside the range of all dyes used except for Indo-1, and in this case the 405/485 nm emission ratio was employed. This should be relatively insensitive to myoglobin oxygenation state since the 405/485 absorbance ratios of oxymyoglobin and deoxymyoglobin are similar. Furthermore, signals were corrected for autofluorescence by performing parallel experiments with unloaded hearts under identical conditions and instrument settings. In order to assess the extent to which changes in internal filtering might affect fluorescence signals of probes employed in our studies that are excited and emitting at higher wavelengths, we measured surface emission spectra under normoxic conditions and at different times of ischemia and reperfusion. These data are presented in Fig 4, which also includes the wavelength bandwidths of the emission filters we employed. Changes in fluorescence spectra were observed within the first 30 s of ischemia and were rapidly reversed on reperfusion. These changes are likely to be caused by modifications of the internal filtering by myoglobin as it moves between its oxy- and deoxy-states. Indeed, myoglobin is known to be rapidly de-oxygenated upon ischemia (t0.5 ≈ 6 s), with 90% equilibration reached at 12 s; re-oxygenation at reperfusion occurs at a similar rate [17]. Thus some caution must be employed when interpreting data obtained within the first 10 s or so of ischemia or reperfusion, when the major changes in myoglobin oxygenation state and thus internal filtering occur. However, this is unlikely to account for any differences observed between control and IP hearts, whose rates of myoglobin deoxygenation and oxygenation would not be expected to differ greatly. Furthermore, after about 10 s of reperfusion, comparison of fluorescent signals between control and IP hearts should be unaffected by changes in either internal filtering or surface geometry under our experimental conditions.
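A minimal sketch of the autofluorescence correction and ratiometric calculation discussed above is given below. Subtracting the time-averaged unloaded-heart signal at each emission wavelength before forming the 405/485 ratio is an assumption made for illustration; the exact processing pipeline is not specified here.

```python
# Minimal sketch of background (autofluorescence) correction and Indo-1 ratio calculation,
# assuming separate traces recorded from dye-loaded and unloaded (background) hearts.
import numpy as np

def corrected_ratio(f405_loaded, f485_loaded, f405_background, f485_background):
    """Return the autofluorescence-corrected 405/485 nm emission ratio.

    Each argument is a 1-D array of fluorescence intensities sampled over time.
    Background traces come from parallel experiments with unloaded hearts at
    identical instrument settings; here their time-averaged level is subtracted.
    """
    bg405 = np.mean(f405_background)
    bg485 = np.mean(f485_background)
    return (f405_loaded - bg405) / (f485_loaded - bg485)

# Example with synthetic data: a beat-to-beat oscillation sampled at 100 Hz for 10 s.
t = np.linspace(0, 10, 1000)
f405 = 120 + 5 * np.sin(2 * np.pi * 5 * t)   # arbitrary fluorescence units
f485 = 200 - 3 * np.sin(2 * np.pi * 5 * t)
ratio = corrected_ratio(f405, f485, np.full_like(t, 40.0), np.full_like(t, 60.0))
print(ratio.mean())
```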
ROS measurements. We explored the use of several fluorescent probes that detect different forms of ROS, including dihydroethidium and several dichlorodihydrofluorescein dyes, but found the majority to be unsatisfactory for use with the perfused rat heart for a variety of reasons, as outlined in Supporting Information. Nevertheless, we found that 5-carboxy-2',7'-dichlorodihydrofluorescein diacetate (5-cH2DCFDA) in its diAM form consistently loaded into the heart and was quite well retained, as illustrated in Fig 5A. However, there is some debate as to the mechanism of 2',7'-dichlorodihydrofluorescein oxidation in cells and even whether it is suitable to monitor ROS levels [31,32]. Thus we also explored the use of boronate cage reagents that have been developed whose fluorescence responds specifically to H2O2 [33].

IP attenuates 5-cH2DCFDA oxidation during reperfusion. Fig 5A presents typical 5-cH2DCFDA fluorescence traces for non-ischemic and ischemic hearts without (control) and with IP. In order to discriminate between fluorescence artefacts caused by changes in internal filtering and real ROS-related changes in fluorescence, parallel experiments were performed using hearts loaded with the ROS-insensitive dye, calcein (Fig 5B), whose fluorescence properties closely match those of 5-cDCFDA. (Fig 5 legend: ROS measurements using 5-cDCF surface fluorescence in control and IP hearts during ischemia / reperfusion. A, Typical traces for 5-cH2DCFDA, diAM loaded control and IP hearts. B, Parallel data for hearts loaded with the ROS-insensitive dye, calcein-AM. The signal from a non-ischemic 5-cH2DCFDA, diAM loaded heart (A, dark grey trace) demonstrates good dye retention during the time of the experiment. C and D, Mean data (± SEM, error bars) of 6 hearts loaded with 5-cH2DCFDA, diAM (C) or calcein-AM (D). Signals were normalized using the mean fluorescence value obtained after 1 min of reperfusion. Note that loss of calcein signal with time is due to dye leakage and occurs even in normoxia. Statistically significant effects of IP are shown by the horizontal lines (** p < 0.01; * p < 0.05). Corresponding infarct sizes are presented in Table 1. doi:10.1371/journal.pone.0167300.g005) Comparison of the 5-cDCFDA and calcein data (Fig 5A vs. 5B) reveals that the very rapid changes of 5-cDCFDA fluorescence at the onset of ischemia and reperfusion are unlikely to be related to ROS production. Rather, they are probably caused by changes in myoglobin absorbance, as seen in the fluorescence emission spectra (Fig 4B). However, the slower increase in 5-cDCFDA fluorescence on reperfusion is not seen with calcein and thus would appear to report a real increase in ROS which is largely prevented by IP. Fig 5C presents mean data (± SEM) of 6 separate experiments for control and IP hearts. To correct for slight differences in dye loading between hearts, data were normalized to the fluorescence signal at 1 min of reperfusion. In control hearts, a short transient increase in 5-cDCFDA signal is seen between 1.5 and 2 min, followed after 4.2 min by a significant and progressive rise in fluorescence that is not seen in IP hearts (p < 0.01). Infarct size data for these hearts after 2 hours of reperfusion are reported in Table 1 and confirm that IP was strongly cardioprotective under these conditions (15.3 ± 1.4% vs. 60.3 ± 5.4%).
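A minimal sketch of the trace normalisation applied to the group data (dividing each heart's trace by its own fluorescence value at 1 min of reperfusion and then averaging across hearts) follows; the array shapes and synthetic values are assumptions for illustration only.

```python
# Hypothetical sketch: normalising fluorescence traces to each heart's value at 1 min of
# reperfusion, then averaging across hearts to obtain mean +/- SEM as in the group figures.
import numpy as np

def normalise_traces(time_min, traces, reperfusion_start_min, reference_offset_min=1.0):
    """traces: (n_hearts, n_timepoints) array; returns normalised traces, mean and SEM."""
    t_ref = reperfusion_start_min + reference_offset_min
    ref_index = np.argmin(np.abs(time_min - t_ref))     # sample closest to 1 min of reperfusion
    reference = traces[:, ref_index][:, None]             # one reference value per heart
    normalised = traces / reference
    mean = normalised.mean(axis=0)
    sem = normalised.std(axis=0, ddof=1) / np.sqrt(traces.shape[0])
    return normalised, mean, sem

# Example with synthetic data for 6 hearts sampled every 0.1 min over 70 min.
rng = np.random.default_rng(0)
time_min = np.arange(0, 70, 0.1)
traces = 100 + rng.normal(0, 2, (6, time_min.size)) + 0.2 * time_min
_, mean, sem = normalise_traces(time_min, traces, reperfusion_start_min=40.0)
print(mean[:3], sem[:3])
```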
The observed increase in 5-cDCFDA fluorescence was not accompanied by a parallel increase in the calcein signal, which showed no significant difference between control and IP hearts (Fig 5D). Note that loss of calcein signal with time is due to dye leakage and occurs even in normoxia. The similar dye leakage between control and IP hearts suggests that, at the early stage of reperfusion, myocyte plasma membrane integrity is similar in control and IP hearts.

IP attenuates PO1-detectable hydrogen peroxide production during reperfusion. We first employed PO1 (535ex/615em) to measure global H2O2 in the heart during ischemia and reperfusion, and Fig 6A presents normalised mean data (± SEM) for control and IP hearts. The data show that PO1 was taken up by the perfused heart but not retained upon wash out and therefore must be present in the perfusion medium throughout the experiment. Very rapid decreases in PO1 fluorescent signals were observed upon ischemia with an equally rapid reversal on reperfusion. These changes are similar, but in reverse direction, to those seen for 5-cDCFDA and calcein and might be the result of changes in internal filtering by cytochromes a and a3, which have a reduced minus oxidised absorbance maximum of 606 nm in intact mitochondria [34]. This is consistent with the rapid change in fluorescence emission spectra of PO1 shown in Fig 4A. However, we observed an additional, slower increase in PO1 fluorescence that began after about 2-3 min of reperfusion and was greatly inhibited by IP, as shown in Fig 6B. This response is very similar to that seen with 5-cDCF (Fig 5C). In a separate set of experiments we investigated the effects of adding 0.2 μmol/L Cyclosporin A (CsA), to inhibit mPTP opening, on the increase in PO1 fluorescence during reperfusion. Fig 6C shows that the increase in signal was diminished, but the effect was much less profound than that induced by IP and only clearly visible after about 8 min, as shown in Fig 6D. This is not unexpected since, as reported in Table 1, CsA gave less reduction in infarct size (25.5 ± 1.7% from 38.6 ± 3.0%) than IP (10.4 ± 0.6% from 44.4 ± 3.2%), which is consistent with CsA being less effective than IP at preventing mPTP opening as measured by the mitochondrial deoxyglucose entrapment technique [23].

Neither MitoPY1 nor aconitase activity detect increased mitochondrial H2O2 early in reperfusion. In order to measure changes in mitochondrial matrix ROS during reperfusion we first employed MitoPY1, a boronate-cage H2O2-specific fluorescent probe that is targeted to mitochondria using the positively charged triphenylphosphonium group [35]. We established that MitoPY1 is correctly targeted to mitochondria in isolated cardiac myocytes and confirmed that it responded to H2O2 (Fig 7). We then loaded MitoPY1 into the Langendorff-perfused hearts and demonstrated that it responded to added H2O2 in this setting (Fig 8A). However, unlike for PO1 and 5-cDCFDA, we were unable to demonstrate any significant changes between control and IP hearts in the small fluorescence increase upon reperfusion (Fig 8B). Although there appeared to be a trend towards a lower signal in the IP hearts, the data of Fig 8C suggest this is probably caused by changes in autofluorescence rather than MitoPY1 itself. It should also be noted that the MitoPY1-loaded IP hearts showed a significantly greater infarct size than unloaded hearts (Table 1), suggesting that MitoPY1 loading might attenuate the effects of IP by some unidentified mechanism.
Overall, we conclude that the small changes in the MitoPY1 signal observed were more likely to represent changes in flavoprotein autofluorescence (485 nm excitation) and that any increase in matrix H2O2 upon reperfusion is below our limit of detection. Further evidence against any major increases in mitochondrial ROS occurring during early reperfusion was obtained by measuring aconitase activity in mitochondria isolated after 90 s of reperfusion. No significant differences in activity were detected between pre-ischemic and 90 s reperfused samples, whereas treatment of the mitochondrial extracts (at 0˚C) with 200 μmol/L H2O2 did reduce enzyme activity by about 20% (Fig 8D). Significant decreases (~50%) in aconitase activity on reperfusion have been detected by others [20], but the earliest time point measured was after 15 min of reperfusion, when our PO1 and 5-cDCF data also detect increases in ROS.

The effects of IP on tissue succinate and autofluorescence are inconsistent with Complex 1-mediated generation of matrix superoxide in early reperfusion. Chouchani et al. have proposed that superoxide production occurs early in reperfusion from the matrix face of Complex 1, which they suggest is maintained in a highly reduced state by reverse electron flow mediated by the oxidation of succinate that accumulates during ischemia [20,36]. In order to investigate this possibility we have monitored the redox state of NAD(P)H and flavoproteins over the first two minutes of reperfusion using continuous acquisition at 10 Hz. In Fig 9, mean data (± SEM), normalised to the end-ischemic value, are presented for control (n = 8) and IP hearts (n = 9). Since the fluorescence of NAD(P)H decreases on re-oxidation (Panel A), while that of flavoproteins increases (Panel B), we also calculated the ratio (Panel C), since this will reduce any errors caused by motion artefacts. In control hearts the redox state of flavoproteins and NAD(P)H returned to pre-ischemic levels with a characteristic and very reproducible multiphase pattern that included a very rapid initial oxidation (t0.5 < 10 s). In IP hearts re-oxidation showed only a single phase and was significantly slower than for control hearts (t0.5 ~15 s). We have also measured the levels of succinate in hearts and found that pre-ischemic values were below our limit of detection (~0.2 nmol/mg dry weight (dw)) but increased to 4.82 ± 0.47 nmol/mg dw (n = 6) after 30 min ischemia, as described by Chouchani et al. [20]. However, a similar increase in succinate, to 3.91 ± 0.38 nmol/mg dw (n = 6), was also observed in IP hearts subject to 30 min ischemia, arguing against reverse electron flow from succinate being the source of ROS at Complex 1 that is modulated by IP.

Measurement of intracellular [Ca2+] during ischemia / reperfusion using Indo-1. We monitored [Ca2+]i using Indo-1 and it is important to note that some of this dye loads into mitochondria [26] and thus its response may reflect changes in mitochondrial as well as cytosolic [Ca2+]. Fig 10 shows individual traces for changes in the Indo-1 ratio corrected for background autofluorescence in control (panel A) and IP (panel B) hearts. In both cases, during the first 2-3 min of ischemia the Indo-1 ratio rises and then decreases again before rising continuously until the end of 30 min of ischemia.
Since the binding of Ca2+ to Indo-1 is pH sensitive [37], the decrease in pH during ischemia could initially attenuate or reverse any increase caused by an increase in cytosolic [Ca2+]. Indeed, this may account for the initial rise and fall in signal. Nevertheless, the continuous rise between 5-30 min of ischemia does reflect a real and progressive increase in [Ca2+], but importantly there is no significant difference between control and IP hearts. However, a major difference in cytosolic [Ca2+] between control and IP hearts is seen upon reperfusion, as highlighted in Fig 10C where mean data (± SEM) for 15 control hearts and 10 IP hearts are presented. In both cases the signal initially drops rapidly, most likely reflecting a decrease in cytosolic [Ca2+] as it is removed from the cytosol by export from the cells and uptake into the SR and re-energized mitochondria. Indeed, in a separate set of experiments in which we isolated mitochondria at 90 s of reperfusion under conditions that maintain their calcium content, we found that the total matrix calcium (ng-atoms/mg protein; means ± SEM) increased from 0.380 ± 0.028 in pre-ischemic hearts to 0.501 ± 0.022 and 0.486 ± 0.054 in control and IP hearts respectively (Fig 11A). As reperfusion continued the Indo-1 signal stayed low in IP hearts (infarct size 13.9 ± 1.7%) with only a gradual increase upon reperfusion, whereas in control hearts (infarct size 61.9 ± 2.3%) the signal began to rise rapidly after about 2 min, although with substantial variation between hearts, as seen in Fig 10A. This rise probably occurs after mPTP opening since we have shown previously, using the deoxyglucose entrapment technique, that mPTP opening begins within 2 min of reperfusion and is largely prevented by IP [23]. However, in order to confirm this to be the case under our current experimental conditions, we isolated mitochondria from hearts loaded with calcein (S1 Fig). The calcein-AM that enters mitochondria in situ will be trapped within them following AM ester hydrolysis, but will be lost following mPTP opening and isolation of mitochondria. At 90 s of reperfusion, mitochondria isolated from control I/R hearts showed significantly less entrapped calcein than mitochondria from pre-ischemic hearts (Fig 11B). This confirms that some mPTP opening had occurred at this time, whereas for IP hearts this was not the case, consistent with inhibition of mPTP opening. These data support our conclusion that the onset of the rise in cytosolic [Ca2+] during reperfusion follows rather than precedes mPTP opening. Additional evidence for this is provided by the data of Fig 10D, which show that the presence of 0.2 μmol/L CsA prior to ischemia and during reperfusion, to inhibit mPTP opening, also largely prevented this [Ca2+] rise while reducing infarct size from 61.9 ± 2.3 to 44.8 ± 3.2%.

(Fig 8 legend, partially recovered: ... and IP (grey) in Langendorff-perfused hearts subjected to 30 min ischemia + 30 min reperfusion. Fluorescence was normalized to the 1 min pre-ischaemic value. MitoPY1 was successfully loaded into the hearts as shown by the increase in fluorescence upon addition of H2O2 to the perfusion medium. B, Mean 535 nm fluorescence (± SEM as error bars) was normalized to the average value between 20.5 and 21.5 min obtained on reperfusion after 30 min global ischemia in control (black, n = 10) and IP hearts (grey, n = 8). C, Corresponding autofluorescence data for hearts subject to a mock loading protocol (n = 6 in both control and IP hearts). Corresponding infarct sizes are presented in Table 1. D, Aconitase activity in mitochondria isolated from normoxic control hearts (Cont) and both control (CP Rep) and ischemic preconditioned (IP Rep) hearts subjected to 30 min ischemia and 90 s reperfusion as described in S1 Fig.)

(Table legend fragment: the table presents mean data for the corresponding values at 1 min before ischemia and at 2, 10 and 30 min of reperfusion, while a full representative trace for each parameter is given in S2 Fig.)

(Fig 10 legend: Measurement of [Ca2+] using Indo-1 in control and IP hearts during ischemia / reperfusion. Indo-1 fluorescence ratio corrected for autofluorescence was monitored in control (A) and IP (B) hearts undergoing 30 min global ischemia and reperfusion. C, Mean data (± SEM as error bars) for control (black, n = 15), IP (light grey, n = 10) and CsA-treated hearts (dark grey, n = 10) during the last 10 min of ischemia and 30 min of reperfusion. In CsA-treated hearts, 0.2 μmol/L CsA was present in the perfusion medium for 10 min before ischemia and for 30 min of reperfusion (n = 10). Corresponding infarct sizes are presented in Table 1. doi:10.1371/journal.pone.0167300.g010)

Opening of the mPTP in isolated heart mitochondria induces ROS formation. Our data suggest that the major rise in ROS and [Ca2+] observed after the first few minutes of reperfusion is a consequence of mPTP opening. In Fig 12 we present data to confirm that opening of the mPTP does greatly enhance ROS production and release accumulated Ca2+, consistent with this prediction. Isolated heart mitochondria were energised with glutamate + malate and succinate (GMS) to mimic the substrate availability in situ [28], and simultaneous measurements were made of light scattering (LS), mitochondrial membrane potential (ΔΨm) and extramitochondrial [Ca2+]. Parallel experiments monitored H2O2 emission with Amplex Red and LS. Fig 12A shows that sequential additions of 100 μmol/L Ca2+ were rapidly taken up by the mitochondria with a progressive reduction in ΔΨm and consequently ROS production rate until the mPTP opened (rapid and total loss of ΔΨm, release of accumulated Ca2+ and a large LS decrease). At this point, ROS production greatly increased again. These data are quantified in Fig 12B.

Discussion. Although it is widely accepted that increases in mitochondrial matrix [Ca2+] and ROS trigger mPTP opening early in reperfusion [3,4], this has not been adequately demonstrated in the beating perfused heart subject to ischemia and reperfusion. The bespoke surface fluorescence equipment we describe in this paper, together with a robust experimental approach, has allowed us to address many of the substantial challenges associated with such measurements. Our data lead us to the conclusion that reduction in ROS and [Ca2+] early in reperfusion may not be the primary mechanism by which IP inhibits mPTP opening.

Validation and limitations of surface fluorescent techniques to measure ROS and Ca2+ during ischemia / reperfusion of the beating heart. Although major changes in surface reflectance signals were detected in response to ischemic contracture, they were not accompanied by significant changes in NAD(P)H and flavoprotein autofluorescence (Fig 3A). This confirms the reliability of our fluorescence measurements even in the face of the significant changes in optical geometry occurring at the onset of both ischemia and reperfusion.
Concerns over fluorescence artefacts caused by changes in internal filtering by myoglobin and cytochromes during ischemia and reperfusion were addressed by measuring the fluorescence emission spectrum from the surface of the heart in pre-ischemic hearts and at different times of ischemia and reperfusion (Fig 4). The data show that although there are potential artefacts caused by such changes, they are likely to affect only the first few seconds of ischemia and reperfusion and, most importantly, to be similar in control and IP hearts. Furthermore, we employed two different ROS-fluorescent dyes with distinct spectral properties and ROS specificity: 5-cH2DCFDA (loaded as the diAM), whose oxidation occurs in response to oxidative stress but which may not directly monitor ROS [31,32], and the H2O2-specific probe, PO1 [33]. Reassuringly, we obtained similar data with both. In the case of 5-cH2DCFDA we also performed parallel experiments with the ROS-insensitive dye calcein, which has similar fluorescent properties to 5-cDCFDA, and confirmed that it did not show a similar increase in fluorescence on reperfusion. Thus we are confident that the time course of ROS change we observe with both dyes accurately reflects global changes in ROS. Measurements of [Ca2+] using the 405/485 nm fluorescence emission ratio of Indo-1 are relatively insensitive to changes in the internal filtering effects caused by changes in myoglobin oxygenation or cytochrome oxidation state (Fig 4). In addition, we performed parallel experiments in the absence of Indo-1 (at the same instrument settings) to correct for changes in autofluorescence. We have not attempted to correct for the influence of pH on the response of Indo-1 to [Ca2+] since we are interested in detecting major differences between control and IP hearts rather than absolute [Ca2+] values, and differences in pH are unlikely to explain the ability of IP to prevent the large increases in [Ca2+] observed later in reperfusion. We recognise that an additional limitation of our surface fluorescence measurements is their restriction to measuring ROS and [Ca2+] changes in the epicardium, but this is true of any optical technique that uses emission and excitation wavelengths close to 500 nm. It is also possible that the magnitude and time courses of fluorescence changes observed in the epicardium may not exactly mirror those in more internal layers of the ventricle wall. However, our measurements of mitochondrial aconitase activity and calcium content suggest that any such differences would not significantly affect our conclusions, since mitochondria were isolated from the whole ventricle and these show effects entirely consistent with those obtained by the fluorescence measurements. Thus at 90 s of reperfusion, no change in aconitase activity was detected (Fig 8D), in agreement with our inability to detect significant ROS increases with 5-cH2DCFDA (Fig 5C), PO1 (Fig 6B) or MitoPY1 (Fig 8B). Nor did IP attenuate the increase in mitochondrial Ca2+ content on reperfusion (Fig 11A), consistent with no change in the Indo-1 fluorescence data at this time point (Fig 10C).

The role of reverse electron flow in superoxide production at Complex 1. Chouchani et al.
[20] have proposed that superoxide production occurs early in reperfusion from the matrix face of Complex 1, which they suggest is maintained in a highly reduced state by reverse electron flow mediated by the oxidation of the succinate that accumulates during ischemia [20,36]. However, our measurements of the redox state of NAD(P)H and flavoproteins in the first two minutes of reperfusion (Fig 9) revealed that they were both very rapidly re-oxidised (t0.5 < 15 s) and that this re-oxidation was slower in IP than control hearts. This would appear to be inconsistent with IP reducing superoxide production during early reperfusion. However, the data are consistent with IP enhancing inhibition of the mitochondrial ATP synthase by its inhibitor IF1 during ischemia, as has been observed in some studies (see [38]), since this could lead to slower activation of oxidative phosphorylation and respiration on reperfusion. Furthermore, the accumulation of succinate during 30 min ischemia was not prevented by IP (3.91 ± 0.38 nmol/mg dw (n = 6) compared to 4.82 ± 0.47 nmol/mg dw (n = 6) in controls). Thus, if an increase in ROS is occurring early in reperfusion to stimulate mPTP opening, our data suggest it is not a result of reverse electron flow from succinate leading to increased superoxide production at Complex 1. We next address whether such an increase in matrix ROS early in reperfusion does occur prior to initial mPTP opening.

(Fig 12 legend: ROS production by isolated mitochondria before and after mPTP opening. A, Effects of sequential additions of Ca2+ (100 μmol/L) to isolated rat heart mitochondria incubated with 5 mmol/L L-glutamate + 2 mmol/L L-malate and 5 mmol/L succinate (GMS) on mPTP opening, measured as the loss of ΔΨ (Rhd-123 fluorescence increase) or accumulated Ca2+ (Fura-FF fluorescence increase) and a decrease in LS. ROS production was determined using Amplex Red in a parallel experiment on the same batch of mitochondria. B, Mean rates of ROS production (± SEM of 6 different mitochondrial preparations) before Ca2+ addition, after the penultimate Ca2+ addition and following pore opening. ** p < 0.01. doi:10.1371/journal.pone.0167300.g012)

Detectable ROS increases start at 2-3 min of reperfusion, after mPTP opening, and are inhibited by IP and CsA. A wealth of published data, including our own [3], led us to anticipate that we would observe a significant increase in ROS immediately on reperfusion that would act as a trigger for mPTP opening, and that this would be reduced by IP. However, neither 5-cDCFDA nor PO1 signals began to rise consistently until 2-3 min of reperfusion (Figs 5C and 6C), which is after mPTP opening occurs, as revealed in early experiments by increased entrapment of 2-deoxyglucose [2] and in the present experiments by release of entrapped calcein (Fig 11B). It is possible that 5-cDCFDA and PO1 are insufficiently sensitive to detect any small early increases in mitochondrial matrix ROS that might be responsible for triggering mPTP opening. However, our measurements of NAD(P)H and flavoprotein autofluorescence during the first phase of reperfusion do not suffer these limitations. As noted above, these data reveal that NAD(P)H and flavoproteins are oxidised very rapidly (t0.5 < 10 s) and so would be unlikely to generate significant ROS after the first few seconds of reperfusion.
Equally important is the observation that the rate of oxidation of NAD(P)H and flavoproteins is slower in IP hearts than in control hearts. If Complex 1 were producing a rapid burst of ROS responsible for triggering mPTP opening, our data would predict greater ROS production in the mitochondria of IP hearts, and this would be expected to enhance rather than impair mPTP opening. Furthermore, we were unable to detect any significant increase in mitochondrial H2O2 using the targeted probe MitoPY1 (Fig 8B) or aconitase activity (Fig 8D) after 90 s of reperfusion. Again, we recognize that these techniques suffer limitations. Thus loading with MitoPY1 appeared to have additional "off-target" effects that modulate the infarct size (Table 1), most notably increasing it in IP hearts from 13.2 ± 2.1% to 33.6 ± 1.9%. Nor can we completely rule out the possibility that the sensitivity and kinetics of the reaction of MitoPY1 and aconitase with H2O2 are insufficient to detect any small increases in matrix H2O2 early in reperfusion that might trigger mPTP opening. A compounding factor for MitoPY1 is that small changes in its signal during the first 10-20 seconds of reperfusion could be masked by major changes in internal filtering caused by myoglobin re-oxygenation that occurs over this time period [17], as well as by rapid changes in autofluorescence (Fig 8C). Nevertheless, even taking these limitations into account, our data clearly show that any changes in mitochondrial matrix ROS in early reperfusion must be very small compared with those observed later in reperfusion. It should be noted that although Chouchani et al. [20] did detect an increase in H2O2 using the mitochondrial-targeted boronate cage reagent MitoB and aconitase activity, this was measured at 15 min of reperfusion, again after mPTP opening has peaked [2] and when our own data confirm that a significant increase in ROS has occurred. However, it is also important to note that our data do not rule out the possibility that there is a rapid burst of superoxide production in the early stages of reperfusion which is immediately removed by endogenous defence mechanisms such as superoxide dismutase working in combination with glutathione peroxidase and glutathione reductase. The subsequent rises in ROS we observe later in reperfusion might reflect the exhaustion of these defence mechanisms. Overall, our data suggest that any increases in mitochondrial matrix H2O2 early in reperfusion are too small to be detected by the techniques we employed and thus are unlikely to mediate the initial IP-inhibitable opening of the mPTP. This may explain why our earlier studies with the mitochondria-targeted superoxide scavenger, MitoQ, failed to demonstrate cardioprotection in the Langendorff-perfused heart [39]. It should be noted that earlier data showing protection from I/R injury of the heart by MitoQ were obtained in a whole animal model in which rats were treated for 14 days with MitoQ in their drinking water before induction of myocardial ischemia [40]. Thus the observed cardioprotective effects of the drug in these experiments could be secondary to its longer term effects and not directly via mitochondrial ROS scavenging in the heart. Rather, our data suggest that the observed increase in ROS on reperfusion detected with both 5-cDCFDA (Fig 5C) and PO1 (Fig 6B) occurs after mPTP opening and is attenuated by IP and CsA as a secondary response to their inhibition of pore opening.
It should be noted that superoxide production occurring after mPTP opening, whether in the matrix or the intermembrane space, will be detected by extramitochondrial ROS probes such as the 5-cDCFDA and PO1 that we employed. The reduction in ROS caused by IP is considerably greater than that by CsA (Fig 6C), and this matches the relative ability of IP and CsA to prevent mPTP opening on reperfusion as determined with the 2-deoxyglucose entrapment protocol [23]. The data we present in Fig 12 confirm that increased ROS production does occur following mPTP opening in isolated heart mitochondria, as has previously been reported [41]. It is probably the result of cytochrome c and NADPH loss from the mitochondria, both of which are important for ROS scavenging [21,41]. Indeed it has been proposed that such mPTP-mediated ROS production drives progressive mPTP opening in adjacent mitochondria and cells, leading to an expanding wave of necrotic cell death as reperfusion continues [3,41]. However, the presence of ROS scavengers during reperfusion has proved to be relatively ineffective at preventing IR injury in either the clinical setting or with in vivo and ex vivo animal models of IR [42,43]. A better understanding of the mechanisms leading to ROS production later during the reperfusion phase may help the development of drugs that inhibit this process more effectively, although a role of ROS in cardioprotective signalling pathways [43,44] may work against this strategy. Furthermore, it is also possible that mPTP-induced ROS production is less important in causing a progressive cascade of mPTP opening than the dysregulation of [Ca2+] discussed below.

IP inhibits the large increases in [Ca2+] during reperfusion that occur after mPTP opening. Another major trigger for mPTP opening is matrix [Ca2+]. As predicted, we did detect an increase in Indo-1 ratio during ischemia, consistent with a rise in cytosolic and mitochondrial [Ca2+], and this was followed by a return to basal at the start of reperfusion (Fig 10A), most likely reflecting cytosolic calcium efflux from the cardiomyocytes and / or uptake into mitochondria and other intracellular stores. Similar data were obtained with 20 min ischemia by Hassinen's group using Fura-2, but this dual-wavelength excitation indicator (340/380ex, >515em) requires much greater corrections for autofluorescence than Indo-1 [30,45]. We were also able to demonstrate that mitochondria isolated after 90 s of reperfusion contained more calcium than those from pre-ischemic hearts, consistent with calcium being an important trigger for mPTP opening. However, our data revealed that the increase in [Ca2+] during ischemia in IP hearts was similar to that in control hearts (Fig 10A and 10B), as was the total calcium content of mitochondria isolated at 90 s of reperfusion (Fig 11A). Thus it is unlikely that IP modulates mPTP opening on reperfusion by attenuating [Ca2+] within the mitochondrial matrix. Nevertheless, after 2-3 min of reperfusion, control hearts did show a profound and progressive increase in Indo-1 ratio which was not observed in IP hearts (Fig 10C). This large rise in [Ca2+] later in reperfusion, like that of ROS, is likely to be secondary to mPTP opening and probably reflects energy compromise in cells following mPTP opening with subsequent impairment of ionic homeostasis. Indeed we show that treatment of hearts with CsA to inhibit mPTP opening also largely prevented the observed increase in [Ca2+] on reperfusion (Fig 10C).
Similar observations have been made by others using an isolated myocyte model of ischemia / reperfusion [46]. The dysregulation of intracellular [Ca2+] following initial mPTP opening may act in conjunction with the rise in ROS to cause additional mPTP opening in adjacent "unopened" mitochondria and lead to a progressive wave of cell death as discussed above. This may explain the waves of [Ca2+] and ROS preceding mPTP opening that have been observed by confocal microscopy in the intact contraction-inhibited heart subject to hypoxia and reoxygenation [18]. It is important to stress that our data do not rule out a role for mitochondrial Ca2+ accumulation at the end of ischemia and during early reperfusion in mPTP opening. Such increases in matrix [Ca2+] have been demonstrated previously in both cardiac myocytes [47] and perfused hearts [48] and are confirmed here (Fig 11A). Indeed, hearts of mice with an adult cardiomyocyte-specific deletion of the mitochondrial calcium uniporter (MCU) have recently been shown to exhibit reduced IR injury [49,50], confirming earlier data obtained with the MCU inhibitors ruthenium red and Ru360 [51,52]. However, what our data do imply is that inhibition of mPTP opening by IP is unlikely to be secondary to decreased matrix calcium overload, but rather to decreased sensitivity to matrix [Ca2+].

Implications for cardioprotection by therapeutic interventions. Prevention of mPTP opening at reperfusion is widely regarded as a promising target for cardioprotection [3,8]. In the context of cardiac surgery, preconditioning (ischemic and pharmacological) is feasible, but this is not the case for treating a coronary thrombosis by percutaneous coronary intervention (PCI, also called angioplasty). This may explain why clinical trials failed to show a significant cardioprotective effect of CsA given intravenously prior to the treatment of coronary thrombosis with PCI [53]. However, under these conditions, ischemic post-conditioning may be effective. Here the relevant coronary arteries are subject to short transient cycles of blood flow and occlusion for the first few minutes of reperfusion [54]. Although the mechanisms underlying ischemic post-conditioning are not fully elucidated, it has been shown to be associated with inhibition of mPTP opening. This may be partially explained by maintenance of an acid pH (inhibitory for mPTP opening) for longer during the early stages of reperfusion, but other signalling pathways are also thought to be involved [44,55]. There are no published data to show that reduced ROS production during the early stages of reperfusion is the mechanism by which post-conditioning inhibits mPTP opening. However, this is not unexpected in view of our inability to detect increases in ROS early in reperfusion. Although we have not confirmed it experimentally, we would suggest that no matter how post-conditioning inhibits the first phase of mPTP opening, it would prevent secondary ROS production and dysregulation of [Ca2+], with less subsequent mPTP opening and thus reduced infarct formation. Indeed, our data lead us to predict that any protocol employed during reperfusion that reduces disturbances in ROS and calcium during the later phase of reperfusion should be a good strategy to limit further mPTP opening and thus provide cardioprotection.
Conclusions. Taken together, our data suggest that any changes in [Ca2+] or ROS early in reperfusion that might trigger mPTP opening are small compared with those later in reperfusion, and not attenuated by IP. If this is the case, IP would have to work through alternative mechanisms to attenuate initial mPTP opening. One such established mechanism involves hexokinase 2 (HK2), whose binding to mitochondria is associated with resistance to mPTP opening [22,56,57] and also with stabilisation of contact sites between the inner and outer membrane, whose breakage may enhance cytochrome c release [56,58]. HK2 is known to dissociate from heart mitochondria during reperfusion and this is prevented by IP [21,22,56,57]. Indeed, there is a very strong inverse correlation between the amount of HK2 remaining bound to mitochondria at the end of ischemia and the infarct size after 120 min of reperfusion [21]. We propose that it may be these changes to the mitochondria, together with the elevated [Ca2+] present at the end of ischemia and during early reperfusion, that sensitise mitochondria to mPTP opening. As reperfusion continues, the intracellular pH returns to control values (>7) from end-ischemic values (<6.5) that inhibit mPTP opening. As a result, mPTP opening now occurs and leads to the observed secondary increase in ROS release. In addition, through impaired energy metabolism and consequent dysregulation of calcium homeostasis, a large rise in [Ca2+] occurs. These secondary changes in [Ca2+] and ROS will then trigger mPTP opening in adjacent mitochondria and neighbouring cells, leading to a spreading wave of mPTP opening and increasing infarct size as reperfusion continues. IP, by preventing the HK2 loss and contact site destabilization during ischemia, will prevent these secondary increases in [Ca2+] and ROS and thus attenuate the resulting cell death and infarct development. This is illustrated schematically in Fig 13. We have suggested previously how a wide range of cardioprotective strategies and signalling pathways may exert their effects through this common pathway [56].
2018-04-03T04:13:02.659Z
2016-12-01T00:00:00.000
{ "year": 2016, "sha1": "5f655baf87fc396a907bbe2052e6389994c2b663", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0167300&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b10dd9a6f88ff51a579d97926725071ea438d83c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
229243721
pes2o/s2orc
v3-fos-license
Risk Prediction for Diabetes Mellitus - A Population Based Approach

INTRODUCTION

In many scientific disciplines, studies that predict or forecast what is going to happen in the future have contributed to our understanding of the world. The value of scientific studies that provide models to inform strategies that may modify and possibly mitigate future events is of importance to society. Examples include estimating the impact of climate or environmental changes on the earth's ecosystems or the impact of policy changes on the economy. [1][2][3] These prediction models are accepted as valuable tools by scientists and have provided critical information for the development of strategies to change predicted trends. 4,5 Within the field of epidemiology, prediction models are underrepresented, and the concept of risk prediction is overshadowed by the estimation of relative risk measures to clarify etiological perspectives of disease. Etiological models use the same estimation procedures as most predictive modelling (i.e., regression) to quantify the relative risk associated with a specific exposure on an outcome. Though regression is often used for both purposes, the way in which the model is built will differ according to the goals of the model. The goal of a prediction model differs in several important ways. First, the outcome that must be optimized is an absolute measure of risk, often expressed as a percentage or probability (versus a risk or hazard ratio). Second, the goal of a prediction model is to maximise the ability to discriminate between at-risk groups and to properly classify true risk, referred to as discrimination and calibration. Typically these indices are not evaluated in etiological models. Thirdly, prediction models must be generalizable to other populations to which the model is applied. Typically, etiological models fit the data used to generate the relative risk estimate as tightly as possible and, as a result, may not be reproducible using data in other settings or may not be applicable to describe risk in another population. These goals change the criteria for model assessment and, concomitantly, the methodological framework that is employed.
In medicine, prediction models, in the form of risk algorithms, are used as tools for patient decision-making. A risk algorithm is a tool used to estimate the absolute risk of an outcome for a person as a function of their baseline characteristics. Typically, the risk is expressed as the probability of dying or developing a disease in a given time period. 6 One of the most widely used risk tools is the Framingham Risk Score. 7 This tool is used to calculate the probability that a patient will develop coronary heart disease in 5 or 10 years and has been widely integrated into cardiovascular disease prevention and management throughout the world. [8][9][10][11][12] Risk algorithms are widely recommended by medical societies for appropriate identification of patients who may benefit from specific interventions. This is exemplified in clinical guidelines for pharmacologic interventions such as cholesterol-lowering medications. 13 Several potential benefits may also be realized by extending the application of these tools to the population level. As at the individual level, in the population setting predictive risk tools have the potential to provide insight into the future burden of a disease in a whole region or nation and the influence of specific risk factors. These tools can support health care decision-making, including the effective and efficient allocation and distribution of health care resources, and planning of effective disease prevention interventions. To date, prediction tools specifically designed to be used at the population level have been neither created nor used for planning.

MATERIALS AND METHODS

Using a cohort design that links baseline risk factors to a validated population-based diabetes registry, a model (Diabetes Population Risk Tool, or DPoRT) to predict the risk of diabetes using commonly-collected national survey data was developed and validated. The development cohort was the National Population Health Survey (NPHS) linked to the validated Diabetes Database, a provincial component of the National Diabetes Surveillance System (NDSS). Variables were restricted to factors routinely measured in the population. The probability of developing diabetes was modelled using sex-specific survival functions for those aged > 20 years, without diabetes and not pregnant at baseline (N = 19,000). [Ethical clearance no. BRLSABV7/07/2019] The model was validated in two external validation cohorts, both linked to administrative data for NDSS-defined physician-diagnosed diabetes. Predictive accuracy was assessed by comparing observed physician-diagnosed diabetes rates with predicted risk estimates from DPoRT. Discrimination of the model was assessed using a C statistic, and calibration was assessed with the chi-square statistic.

RESULTS

In the development cohort, 700 males and 665 females developed physician-diagnosed diabetes within the 5-year follow-up period. The age-standardized 5-year incidence rates in the development cohort were 6.52% for males and 5.42% for females. The age-standardized 3-year incidence rates in the development cohort were 3.42% for males and 2.41% for females. The age-standardized 5-year incidence rates in the validation cohorts were 6.42% for males and 4.20% for females. The age-standardized 3-year incidence rates in the validation cohorts were 3.45% for males and 3.22% for females.
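The sketch below illustrates the general form of such a sex-specific, survival-function-based absolute-risk calculation (5-year risk = 1 − S0(5)^exp(centred linear predictor)). The coefficients, baseline survival values and variable list are invented for illustration and are not the published DPoRT parameters.

```python
# Illustrative sketch of a Framingham-style absolute-risk calculation of the kind used by
# population risk tools such as DPoRT. All numbers below are invented, not published values.
import math

ILLUSTRATIVE_MODELS = {
    # Covariates are centred at illustrative population means (age 50 years, BMI 27).
    "male":   {"s0_5yr": 0.95, "betas": {"age": 0.030, "bmi": 0.090, "hypertension": 0.40}},
    "female": {"s0_5yr": 0.96, "betas": {"age": 0.028, "bmi": 0.100, "hypertension": 0.45}},
}

def five_year_diabetes_risk(sex, age_years, bmi, hypertension):
    """Return an illustrative predicted 5-year probability of physician-diagnosed diabetes."""
    m = ILLUSTRATIVE_MODELS[sex]
    lp = (m["betas"]["age"] * (age_years - 50.0)
          + m["betas"]["bmi"] * (bmi - 27.0)
          + m["betas"]["hypertension"] * int(hypertension))
    # Baseline 5-year survival raised to exp(centred linear predictor), risk = 1 - survival.
    return 1.0 - m["s0_5yr"] ** math.exp(lp)

# Example: a 55-year-old man with a BMI of 31 and hypertension.
print(round(five_year_diabetes_risk("male", 55, 31, True), 3))
```

In an external validation cohort, probabilities produced this way would be compared against observed physician-diagnosed diabetes rates, for example with a C statistic for discrimination and a decile-wise chi-square for calibration, as described above.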
All baseline population characteristics in the derivation cohort and the two validation cohorts are shown in Table 1. Both validation cohorts differed from the derivation cohort. They were similar in age distribution; however, both had a higher proportion of obese individuals. Compared to the derivation cohort, one validation cohort had a higher baseline prevalence of hypertension and heart disease but a lower prevalence of smoking, while the other cohort had higher levels of hypertension and heart disease compared to the derivation cohort in women only.

DISCUSSION

This study demonstrated that diabetes risk can be accurately predicted at the population level using self-reported age, sex, Body Mass Index and other measures available in population health surveys. In addition to displaying good discrimination, DPoRT-predicted rates closely agreed with observed rates for both males and females in both external validation cohorts, and this agreement was generally maintained across deciles and quintiles of risk. To my knowledge, DPoRT is the first validated risk tool that is integrated into commonly-collected population health survey data. DPoRT offers advantages over existing methods used to estimate future diabetes risk in populations. Previous studies that estimate future diabetes burden have either extrapolated overall trends in diabetes prevalence or indirectly incorporated information on the influence of risk factors with various assumptions. 13 Other studies focus on overall diabetes burden, a useful approach, but one which does not enable users to directly assess the impact of risk factors, such as BMI, on future diabetes. Furthermore, these studies did not assess how future diabetes can be prevented by targeting risk factors, since they do not directly quantify the influence of risk factors on baseline risk or diabetes incidence. Complex modelling and simulation studies differ from the approach used in this study in that they use additional information on how populations and risk factors change over time. 14,15 Other simulation studies add more detailed clinical information such as fasting blood sugar level or information on diabetes family history, data not available at the population level. A strength of these simulation models is that they can combine different data sources and study findings. 16 However, these models are complex and often represent clinical or theoretical populations, making their estimates difficult to validate in external populations that are meaningful for population health planning. DPoRT could be incorporated into simulation models that consider future changes in population composition and risk factors. The nature of diabetes risk allowed us to discriminate and explain risk using a limited number of variables - most importantly BMI. Discrimination of DPoRT is as high as or higher than many clinical risk prediction tools used in clinical practice. The algorithm was further calibrated using population means, which may attenuate differences between populations since risk estimates are relative to baseline risk in the population. Given current data in most countries, DPoRT is a more balanced approach to estimating diabetes risk than methods used in previous research. Several important clinical values are excluded from DPoRT, such as hip to waist ratio, waist circumference, fasting blood glucose, and family history.
[17][18][19] Although these variables may be clinically important for assessing diabetes risk, adding these, or other detailed anthropometric measures, is not feasible because they are not routinely collected in most populations. These omitted variables are unlikely to have a major impact on the performance characteristics of the model due to the clustering of risk factors, particularly when dealing with abnormalities of the metabolic system. [20][21] Variables not included in DPoRT, such as family history of diabetes or poor diet, are also associated with the clustering of metabolic risk factors that are included in the algorithm, such as hypertension and BMI. Obesity is the most important factor in predicting diabetes risk. BMI is the most commonly used marker of obesity; however, measures of central obesity may capture the entire risk domain more comprehensively and be more meaningful across all age groups. 22 A recent meta-analysis has shown that there is no evidence of a difference in estimates associated with incident diabetes between BMI, waist circumference and waist/hip ratios. 23 Furthermore, algorithms to identify individuals for weight loss in populations did not differ if using BMI or waist circumference. 24 To ensure DPoRT can be applied in different populations, we gave preference to variables that were based on established evidence, remained stable over time, were unlikely to be subject to serious measurement error (such as alcohol and dietary habits), and were easily captured using survey data in different populations. For example, physical activity has been shown to have a protective effect on diabetes incidence 25 but was removed from the final algorithm due to the inability to capture this in a reliable and reproducible manner across studies, and because of its marginal improvement in the discrimination of diabetes risk in our derivation cohort. Despite placing considerable constraints on variable selection as a means of ensuring maximum feasibility, DPoRT maintained good discrimination. The effect of self-reported BMI may depend on the population where the algorithm is being applied, since these patterns have been shown to vary across gender and socioeconomic status. [26][27][28] The ability of DPoRT's predictive estimates to agree with observed diabetes risk in different populations will be reduced if systematic errors associated with responses vary across populations or time. If diabetes testing/screening increases over time, predicted estimates could be lower than the observed estimates (under the assumption that this would lead to increased case detection). DPoRT is accurate in different populations for different periods; however, DPoRT could be recalibrated to predict total diabetes cases using revised information on screening/testing or using estimates of the number of undiagnosed cases relative to diagnosed cases in the population. Finally, the potential for inaccuracy increases the longer into the future the predictions are made or when unforeseen changes occur; therefore, it is recommended that predictions from DPoRT are updated frequently by using the most recent data, limiting predictive calculations to 5 years or less, and validating the risk tool in the population where it is being applied.

CONCLUSION

In building the risk tool for diabetes it was demonstrated that BMI (a relative measure of weight for height) overwhelmingly influences the predictions for developing diabetes in the future.
For that reason, clarifying determinants of weight and weight change is essential when developing strategies to prevent or reduce the future diabetes burden. In monitoring trends over time, researchers are often faced with the dilemma of separating trends between individuals from trends within individuals. Multilevel growth models allow us to model both these aspects, which strengthens the ability to model trends that vary between and within individuals.
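Because the closing remark points to multilevel (mixed-effects) growth models for separating between-individual from within-individual trends, a minimal sketch of such a model is given below using statsmodels; the simulated data, column names and random-slope specification are assumptions for illustration, not the study's analysis.

```python
# Minimal sketch of a multilevel growth model: BMI measured repeatedly over survey waves, with a
# random intercept and random slope per person so between- and within-individual trends separate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_people, n_waves = 200, 4
person = np.repeat(np.arange(n_people), n_waves)
years = np.tile(np.arange(n_waves) * 2.0, n_people)      # survey waves 2 years apart
intercepts = rng.normal(27, 3, n_people)                  # person-specific baseline BMI
slopes = rng.normal(0.15, 0.10, n_people)                 # person-specific BMI change per year
bmi = intercepts[person] + slopes[person] * years + rng.normal(0, 0.5, person.size)
df = pd.DataFrame({"person": person, "years": years, "bmi": bmi})

# Random intercept and random slope for time, grouped by person.
model = smf.mixedlm("bmi ~ years", df, groups=df["person"], re_formula="~years")
result = model.fit()
print(result.summary())
```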
2020-11-19T09:15:37.165Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "209a0c42eaf1ff5038226051ac0a152e32279525", "oa_license": null, "oa_url": "https://doi.org/10.31782/ijcrr.2020.122125", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "729ec983a74d2076b1c4fed7dc4ffe5ee5a07fd5", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
2633555
pes2o/s2orc
v3-fos-license
The Association between Apolipoprotein E Gene Polymorphism and Mild Cognitive Impairment among Different Ethnic Minority Groups in China

The association, in different ethnic groups, of apolipoprotein E (apoE) gene polymorphism with mild cognitive impairment (MCI) has been unclear. Few studies have examined the association in Chinese minorities. The current study explores the association between apoE gene polymorphism and MCI in one of the biggest ethnic groups—the Hui—and compares it with the Han. The Minimental State Exam, Activities of Daily Living Scale, and Geriatric Depression Scale were administered to 306 ethnic Hui and 618 ethnic Han people aged ≥55 years. ApoE genotypes were determined using the high resolution melting curve method. The distribution of the apoE genotype and the frequency of alleles ε2, ε3, and ε4 were similar in the Hui and Han groups. In analyses adjusted for age, gender, and education level, the ε4 allele was a risk factor for MCI in both the Hui group (OR = 2.61, 95% CI: 1.02–6.66) and the Han group (OR = 2.36, 95% CI: 1.19–4.67), but the apoE ε2 allele was protective for MCI only in the Han group (OR = 0.48, 95% CI: 0.38–0.88). The association of some apoE genotypes with MCI may differ in different ethnic groups in China. Further studies are needed to explore this effect among different populations.

Introduction. Apolipoprotein E (apoE) is a plasma protein involved in regulating the body's metabolism of lipoproteins and cholesterol balance. It has an important role in the neurobiological system [1]. Human apoE has three isoforms, ε3, ε4, and ε2, due to the cysteine-arginine interchanges at codons 112 and 158 [2]. The most frequently occurring allele is ε3, followed by ε4 and ε2 [1]. The apoE ε4 allele has been widely studied as a risk factor for mild cognitive impairment (MCI) and Alzheimer's disease (AD), but its impact has been found to differ across ethnic groups [3][4][5][6][7]. Seet et al., reviewing the frequency of the apoE alleles in different ethnic groups, found that the relative frequency of ε4 was the lowest in the Chinese sample [8]. This raised a question as to whether the Chinese people did in fact benefit from the reduced frequency of the apoE ε4 allele. The prevalence of MCI was estimated to be over 18.5% among Chinese people aged 55 years or older [9], with about seven million Chinese people aged 55 years or older suffering from AD [10]. Within the Chinese population, studies have shown that apoE ε4 increases the risk of MCI in the elderly Han ethnic group (the majority group) [11,12], but few studies have examined the association between the ε4 allele and MCI among different ethnic minority groups in China, although the frequency of apoE genotypes has been found to differ across different ethnic groups [13][14][15]. China has fifty-five national minorities, among which the Hui ethnic group is the largest minority group. The Chinese Hui ethnic group is descended from Arab and Persian Muslim immigrants [16]. They share the same language, customs, and living environments with the local Han majority but are different in terms of genetic background [17]. Deng et al. investigated the diversity distributions of 15 short tandem repeat (STR) loci in a sample of the Hui ethnic group, comparing them with other Chinese ethnic groups. The results showed a significant difference between the Hui and other ethnic groups in some loci [18].
In addition, previous studies of community residents showed that the Hui ethnic group had a higher prevalence of abnormal lipid metabolism than the local Han population [19]. However, no study has focused on the frequency of the apoE ε4 allele and the association between the ε4 allele and MCI in the Hui ethnic minority. In the current study, we aim to compare the relative frequency of the apoE alleles ε2, ε3, and ε4 in a community-based sample of the Hui ethnic group with a comparable sample of the Han ethnic group in Mainland China, and to examine the association between apoE gene polymorphism and MCI in those two groups.

Subjects. Five communities were selected from two cities using a convenience cluster sampling method. Among them, three communities were from Yinchuan (the biggest city in the Ningxia province, where over 80% of the population is of Han ethnicity) and two communities were from Wu Zhong (the second biggest city in the Ningxia province, where over 80% of the population is of Hui ethnicity). Individuals aged 55 years or older who had permanent residency and agreed to participate were included in the study. Those who could not complete the survey due to vision or hearing disabilities, a long history of alcohol consumption, or serious diseases were excluded. Demographic information was collected using a questionnaire designed by the research team, and ethnic identity was verified by residency registration information (called hukou in Chinese) authorized by the local government. The residency registration system is the basic institution which documents population information and distributes public resources in China [20]. Of 1022 participants enrolled in the study, 924 (90.5%) completed the entire examination, including 306 Hui ethnic people (mean age 65.4 years; standard deviation (SD) 6.8) and 618 Han ethnic people (mean age 66.9 years; SD 6.5). This study was approved by the Institutional Review Board of the Ningxia Medical University. Participants provided written informed consent prior to the survey.

Neuropsychological Testing and Physical Examination. All the participants had a face-to-face interview performed by trained medical students using a structured questionnaire, which included questions on smoking, alcohol use, memory, and medical history. A series of tests including the Mini-Mental State Exam (MMSE) [21], Activities of Daily Living Scale (ADL) [22], and the Geriatric Depression Scale (GDS) [23] were administered by two clinicians from the Geriatrics Department of Ningxia Medical University. MCI was diagnosed according to Petersen's criteria [24]. One hundred and eighty-one people (19.6%) met the criteria for MCI, including 66 Hui and 115 Han ethnic people (Table 1). All participants underwent a careful physical examination in community health care centers to obtain blood pressure and disease history.

ApoE Gene Polymorphism Test. Genomic DNA was isolated from venous blood leukocytes using a genomic DNA extraction and purification kit following the manufacturer's protocol (Wizard Genomic DNA Purification Kit; Promega Company, USA). The primers were made by Invitrogen Company and were diluted and kept at −20°C. An 89 bp PCR product encompassing amino acid position 112 (rs429358) was produced using the forward primer 5′-CGGGCACGGCTGTCCAAG-3′ and the reverse primer 5′-CGGTACTGCACCAGGCGGC-3′.
A 70 bp PCR product encompassing amino acid position 158 (rs7412) was produced using the forward primer 5′-GCTGCGTAAGCGGCTCCTCC-3′ and the reverse primer 5′-GGCCCCGGCCTGGTACACT-3′. ApoE gene polymorphism was detected by the high resolution melting (HRM) curve method [25]. The assay was run on a Roche LightCycler 480 (Roche, USA). The reaction system was composed of 5 μL reaction mixture, 0.5 μL primer mix, 0.5 μL EVA fluorescent dye, and 3 μL pure water; the total volume was 9 μL. The system was placed in a 1.5 mL Eppendorf tube and vortexed briefly for mixing. The reaction system was added to Roche 96-well plates, and 1 μL DNA template was added to each well. The run was started after brief mixing. All detections were completed by professional technicians at the Biochip Ningxia Center following the manufacturer's instructions. To verify the accuracy of genotype calls, twelve DNA samples were retested using the dideoxy-mediated chain-termination (sequencing) method in an independent lab (Genex, China). All genotypes determined by sequencing were consistent with those detected by HRM.

Statistical Analyses. We identified the carriers of the ε2 allele, including genotypes ε2ε2 and ε2ε3, and the carriers of the ε4 allele, including genotypes ε3ε4 and ε4ε4, with ε3ε3 as control. A gene risk index model, widely used in other studies, was used to evaluate the variation in apoE polymorphisms [26]. The model assigned +1, 0, or −1 for the ε2, ε3, or ε4 allele, respectively. The genotypes ε2ε2, ε2ε3, ε2ε4, ε3ε3, ε3ε4, and ε4ε4 then had scores of +2, +1, 0, 0, −1, and −2, respectively. The distributions of apoE genotypes, the frequencies of the alleles (ε2, ε3, and ε4), ε2 allele carriage, ε4 allele carriage, and demographic variables between Han and Hui, or between MCI and no MCI, were compared using the R × C chi-square test. Age and the apoE risk index were analyzed using Student's t-test. A logistic regression model was used to obtain the odds ratios (OR) and 95% confidence intervals (CI) for ε2 or ε4 allele carriers after controlling for age, gender, and level of education. In addition, two separate logistic regression models were fitted, stratified by ethnicity. The level of statistical significance was set at P < 0.05. All analyses were performed using the Statistical Package for the Social Sciences version 16.0 (SPSS Inc., Chicago, IL, USA).

Frequency of ApoE Allele and Genotype Distributions in the Hui and Han Ethnic Groups. There was no statistically significant difference in the distribution of the six genotypes and three alleles between the Hui and Han ethnic groups (Table 1). Compared with the Han ethnic group, the Hui ethnic group had higher levels of serum total cholesterol and diastolic blood pressure, lower proportions of smoking, alcohol use, and living alone, and a lower education level. Table 2 shows the association between the apoE ε4 allele and MCI; we investigated this association in the Hui and Han ethnic groups combined and separately. Consistent associations between the apoE ε4 allele and MCI were found in the three subgroups. The proportion with the ε4 allele (7.5% versus 3.2%) and of ε4 carriers (14.4% versus 5.7%) was significantly higher in the participants with MCI than in those without MCI (P < 0.001). There was a significantly lower risk index score for those with MCI in the combined group (P = 0.002) and in the Han ethnic group (P = 0.007), but not in the Hui ethnic group (P = 0.118) (Table 3).
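To make the scoring and effect-size computations concrete, the following sketch (ours, not part of the original analysis; the carrier counts in the example are hypothetical) computes the apoE risk index described above and an unadjusted odds ratio with a Wald 95% confidence interval from a 2×2 table of carrier counts; the odds ratios reported in the study were instead obtained from logistic regression adjusting for age, gender, and education level.

```python
import math

# Risk index: +1 for each e2 allele, 0 for e3, -1 for e4; a genotype is two alleles.
ALLELE_SCORE = {"e2": +1, "e3": 0, "e4": -1}

def risk_index(genotype):
    """genotype is a pair of allele names, e.g. ('e3', 'e4') -> -1."""
    return sum(ALLELE_SCORE[a] for a in genotype)

def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Unadjusted OR and Wald 95% CI from a 2x2 table of counts."""
    or_ = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)
    se = math.sqrt(1 / exposed_cases + 1 / exposed_controls
                   + 1 / unexposed_cases + 1 / unexposed_controls)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

print(risk_index(("e2", "e2")), risk_index(("e3", "e4")), risk_index(("e4", "e4")))  # 2 -1 -2
print(odds_ratio(26, 55, 155, 688))   # hypothetical e4-carrier counts (cases/controls)
```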
After controlling for age, gender, and education level, being an ε4 carrier was a risk factor for MCI in both the Hui (OR = 2.61) and the Han (OR = 2.36) ethnic groups. Being an ε2 carrier was protective against MCI in the Han ethnic group (OR = 0.48), but not in the Hui ethnic group.

Discussion
The present study found no significant difference in apoE genotype and ε4 allele frequency between the older Hui and Han ethnic groups. This finding is similar to that for the Chinese Uyghur ethnic people [15]. Moreover, compared with the Saudi Arabian Muslims who are believed to be the ancestors of the Chinese Hui ethnic people, the frequency of the ε4 allele in the Hui ethnic group (3.9%) was close to that in the Arabs (3.9–6.3%) [27,28]. The frequency of the ε4 allele was 4.4% in the Hui and 3.9% in the Han in the current study; this agrees with previous studies which have reported that the frequency of apoE genotypes differs significantly across races and ethnicities. The frequency of the ε4 allele ranges from 12.4% to 19.8% in Caucasians, while the Chinese Han ethnic population has been found to have a lower frequency of the ε4 allele, ranging from 7.5% to 8.7% [8,13,26]. Regarding other Chinese minorities, the frequency of the ε4 allele ranged from 4.9% in the Zhuang ethnic group [29] to 15.4% in the Mongolian ethnic group [30]. Studies have shown that apoE ε4 carriage increases the risk of MCI in the elderly Chinese Han ethnic group [11,12]. After adjusting for the influence of gender, age, and level of education, the current study found a consistent association between the ε4 allele and MCI in both the Han and the Hui ethnic groups, and these findings are similar to those of other studies [31]. MCI is the most commonly accepted prodromal stage of AD; this suggests that the apoE ε4 allele has relevance for MCI screening [32,33]. In contrast, the apoE ε2 allele has been reported to be a protective factor for cognitive function in elderly people. Presence of the apoE ε2 allele was reported to be associated with improvement in episodic memory over time [34] and to reduce the risk of cognitive decline among older adults [35]. A recent study showed no significant association between apoE ε2 presence and MCI in Chinese Han ethnic people, although apoE ε2 was inversely associated with AD [11]. The current study found that ε2 carriage was a protective factor for MCI in the Han ethnic group, but not in the Hui ethnic group. This difference may be due to the relatively smaller sample size for Hui participants and the relatively lower frequency of ε2 in the Chinese population, which reduced statistical power [26]. For the present study, the power of the logistic model was 0.85 in the Han group and 0.73 in the Hui group, respectively.

Conclusions
In summary, this study has shown that apoE ε4 was a risk factor for MCI in both the Han and Hui ethnic groups in China, but the association of apoE ε2 with MCI may differ across ethnic groups in China. Further studies are needed to explore this effect among different populations.

Limitations
There are several limitations to the present study. First, the cross-sectional design could only detect an association between genes and MCI, not a causal relationship. Second, the samples were not randomly selected, so they may not represent the general population. In addition, the sample size of the Hui ethnic group was relatively smaller than that of the Han ethnic group, which might reduce the comparability of the two samples.
2018-04-03T02:10:49.876Z
2014-08-05T00:00:00.000
{ "year": 2014, "sha1": "095820078dbf5e969587db48bb71ea31615661d7", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/ijad/2014/150628.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "095820078dbf5e969587db48bb71ea31615661d7", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
259837543
pes2o/s2orc
v3-fos-license
Model-checking in the Foundations of Algorithmic Law and the Case of Regulation 561

We discuss model-checking problems as formal models of algorithmic law. Specifically, we ask for an algorithmically tractable general purpose model-checking problem that naturally models the European transport Regulation 561, and discuss the reaches and limits of a version of discrete time stopwatch automata.

Model-checking and algorithmic law
The European transport Regulation 561 [49] concerns activities of truck drivers as recorded by tachographs. A tachograph recording determines for each time unit the activity of the driver, which can be driving, resting or doing other work. Regulation 561 is a complex set of articles that limits driving and work time by prescribing various types of rest periods. The regulation prescribes that the time units are minutes, so a tachograph recording of 2 months determines a sequence of activities of length 87840. It is clear that the legality of such a recording can only be judged with the help of an algorithm.

By the application of a law to a case we mean the decision whether the case is legal according to that law or not. By an algorithmic law we mean a law whose application to a case is executed by an algorithm. Instead of designing one algorithm per law we are interested in general purpose algorithms: these take as input both a case from a set of cases of interest, and a law from a set of laws of interest, and decide whether the given case is legal according to the given law or not. In order to present cases and laws of interest as inputs to an algorithm, both have to be suitably formalized.

Computational problems in algorithmic law
For Regulation 561, a case is a sequence of activities and hence straightforwardly formalized as a word over the alphabet Σ := {d, r, w}: e.g., the word dddwrr ∈ Σ^6 is the activity sequence consisting of 3 minutes driving, followed by 1 minute other work, followed by 2 minutes resting. Generally, we formalize a set of cases by a class of finite structures K. In this setting, a generic formalization of a law is given by translating the law to a sentence ϕ of a formal language, i.e., a logic L. That a particular case K ∈ K is legal according to the law ϕ then formally means that K ⊧ ϕ, i.e., K satisfies ϕ. We arrive at what is the central computational problem of algorithmic law:

MC(K, L)
Input: a structure K ∈ K and a sentence ϕ ∈ L.
Problem: does K ⊧ ϕ hold?

Model-checking. The model-checking problem (for L over K) is a formal model for a family of algorithmic laws where laws are formalized by sentences of L and cases are formalized by structures in K. A model-checker (for L over K) is an algorithm deciding MC(K, L). This is a general purpose algorithm as asked for above. We consider two more computational problems associated with algorithmic law.

Consistency-checking. A minimal requirement for law design is that it should be possible to comply with the law (cf. [31] for a problematic case). For laws governing activity sequences consistency means that there should be at least one such sequence that is legal according to the law. A related question of interest is whether a certain type of behaviour can be legal. This is tantamount to asking whether the artificial law augmented by demanding the type of behaviour is consistent. This is formally modeled by the consistency problem (for L over K):

Con(K, L)
Input: ϕ ∈ L.
Problem: does there exist some K ∈ K such that K ⊧ ϕ?

Scheduling. Assume a truck driver has to schedule next week's driving, working and resting and is interested to drive as long as possible.
A week has 10080 minutes, so the driver faces the computational optimization problem to compute a length 10080 extension of the word given by the current tachograph recording that is legal according to Regulation 561 and that maximizes driving time.

Consider laws governing activity sequences, that is, K is the (set of structures corresponding to the) set of finite words Σ* over some alphabet Σ. For a word w = a_0⋯a_{n−1} ∈ Σ^n (the a_i are letters that represent the corresponding activities) and a letter a ∈ Σ, let #_a(w) denote the number of times the letter a appears in w, i.e., #_a(w) := |{i < n : a_i = a}|.

Model-checking as a formal model
There is a vast amount of research concerning model-checking problems MC(K, L). The two main interpretational perspectives stem from database theory and from system verification. In database theory [46], K is viewed as a set of databases, and L as a set of Boolean queries. In system verification [7], K is viewed as a set of transition systems or certain automata that formalize concurrent systems or parallel programs, and L formalizes correctness specifications of the system, that is, properties all executions of the system should have. We add a third interpretational perspective on model-checking problems as formal models for families of algorithmic laws. We highlight three conflicting requirements on such a formal model.

Tractability requirement
The first and foremost constraint for a model MC(K, L) of a family of algorithmic laws is its computational complexity. For the existence of a practically useful general purpose model-checker the problem MC(K, L) should be tractable. We argue that the notion of tractability here cannot just mean PTIME; a more fine-grained complexity analysis of MC(K, L) is required. Classical computational complexity theory tells us that already extremely simple pairs (K, L) have intractable model-checking problems. An important example from database theory is that MC(K, L) is NP-complete for L the set of conjunctive queries and K the set of graphs (or the single binary word 01) [18]. An important example [51] from system verification is that MC(K, L) is PSPACE-complete for L equal to linear time temporal logic LTL and K the class of finite automata [53]. However, this PSPACE-completeness result is largely irrelevant because the model-checking problem is fixed-parameter tractable (fpt), that is, it is decidable in time f(k) · n^{O(1)} for some computable function f : N → N, where n is the total input size and k := |ϕ| is the size of (a reasonable binary encoding of) the input LTL formula ϕ. In fact, we have parameter dependence f(k) ≤ 2^{O(k)}. Informally speaking, we are mainly interested in inputs with k ≪ n, so this can be considered tractable. In other words, the computational hardness relies on uninteresting inputs with relatively large k. In contrast, model-checking conjunctive queries over graphs is likely not fixed-parameter tractable: this is equivalent to FPT ≠ W[1], the central hardness hypothesis of parameterized complexity theory. The focus on inputs with k ≪ n is common in model-checking and it is an often repeated point that a reasonable complexity analysis must take this asymmetry of the input into account; [48] is an early reference addressing both perspectives from database theory and system verification. The theoretical framework for such a fine-grained complexity analysis is parameterized complexity theory [35,26,27], whose central tractability notion is fixed-parameter tractability.
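As a minimal illustration of this encoding of cases (a sketch in Python, not part of the formal development; the example word and the fixed 56-hour bound are only illustrative), activity sequences can be handled directly as strings over Σ = {d, r, w}, and the counts #_a(w) as well as a single-purpose check of one fixed law are immediate — in contrast with the general purpose model-checkers asked for above.

```python
# A tachograph case as a word over Sigma = {d, r, w}: one letter per minute.
case = "d" * 270 + "r" * 45 + "w" * 30   # 4.5h driving, 45min rest, 30min other work

def count(word, letter):
    """#_letter(word): number of occurrences of a letter in the word."""
    return sum(1 for a in word if a == letter)

def single_purpose_check(word, max_driving=56 * 60):
    """A single-purpose 'algorithmic law': total driving time of at most 56 hours."""
    return count(word, "d") <= max_driving

print(count(case, "d"), count(case, "r"), count(case, "w"))   # 270 45 30
print(single_purpose_check(case))                              # True
```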
To sum up, judging the tractability of MC(K, L) should be based on a fine-grained complexity analysis that measures the computational complexity with respect to various input aspects n, k, …. The quality of the model MC(K, L) depends on the "right" identification of relevant aspects in its complexity analysis.

Expressivity requirement
Recall that we ask for a general purpose model-checker that solves a model-checking problem MC(K, L) modeling a family of algorithmic laws instead of single-purpose model-checkers deciding MC(K, {ϕ}), one per algorithmic law ϕ. From a theoretical perspective we expect insight on which laws can possibly be algorithmic. From a practical perspective, this avoids the costly production of many algorithms, their updates following law reforms and their validation for legal use. It is thus desirable to find tractable MC(K, L) for classes K and L that are as rich as possible. In particular, for laws governing sequences of activities (i.e., K = Σ*) we ask for a logic L that is as expressive as possible. Of course, this is in tension with the tractability requirement.

Naturality requirement
From an algorithmic perspective it is not only the expressivity of L that matters, but also its succinctness. Typically, model-checking complexity grows fast with the size of the sentence ϕ formalizing the law, so logics allowing for shorter formalizations are preferable. E.g., it is well-known that the expressive power of LTL is not increased when adding past modalities, but their use can lead to exponentially shorter sentences; crucially, the complexity of model-checking (over finite automata) is not substantially increased. Moving to a more succinct logic is not necessarily an improvement: e.g., further adding a now-modality again increases succinctness exponentially but apparently also the model-checking complexity [43]. Furthermore, it is one thing to model a law application by a model-checking instance (K, ϕ) any old how and another to do so by somehow typical members of K and L. E.g., in case the formalization of actual laws uses only special artificial members of K (semantic overkill) or L (syntactic overkill), one would want to trade the richness of K and L for a faster model-checker. Very long or contrived formalizations of laws are also prohibitive for legal practice, which requires the law to be readable and understandable by humans. This is vital also for the validation of their formalization, i.e., their translation from the typically ambiguous natural language into a formal language able to be algorithmically processed. Without attempting a definition of this vague term, we thus informally require that the (formalization given by the) model MC(K, L) must be natural.

Other requirements
We focus on the above three requirements but, of course, there are more whose discussion is beyond the scope of this paper. An important one is trust in the output of model-checkers. This is a threefold issue. First, the formalization process requires trust: laws are written in natural language and thereby formally not precise and ambiguous; formalization typically leads to choices to disambiguate or even repair the written law; this calls for a collaboration of different experts. Second, the implementation process requires trust: this could call for formally verified implementations; we refer to [1] for an example.
Third, one needs trust that the data given to the algorithm are correct and in the right format (we refer to [31] for a discussion); for example, Regulation 561 prescribes working in UTC and it is known that no tachograph actually records in UTC; theoretically, the change from non-UTC to UTC data can have drastic effects [24]. Furthermore, algorithmic outputs should be transparent and explainable to be used in legal practice and it is unclear what this exactly means. Further requirements on the model might come from ethical or political considerations -e.g., the required transparency can be in conflict with intellectual property rights and there can be more general issues concerning the involvement of the private sector in law execution. Contribution and outline We focus on laws governing temporal sequences of activities, that is, laws concerning cases that can readily be formalized by words over some finite alphabet Σ, i.e., K = Σ * . This paper is about the quest for a logic L such that MC(K, L) is a good model for such laws. To judge expressivity and naturality we use European Regulation 561 [49] as a test case, that is, we want L to naturally formalize Regulation 561. Given the complexity of this regulation, this is an ambitious goal and we expect success to result in a model that encompasses a broad family of laws concerning sequences of activities. The imperative constraint is the tractability of MC(K, L). The next section surveys the relevant literature on model-checking and discusses shortcomings of known model-checkers. Thereby we build up some intuition about what the right input aspects are, i.e., those relevant to calibrate the computational complexity of MC(K, L) and to judge its tractability. We suggest (a version of) discrete time stopwatch automata SWA as an answer to our central question, that is, we propose MC(Σ * , SWA) as a model for algorithmic laws concerning sequences of activities. Stopwatch automata are defined in Section 3. Our main technical contribution is the construction of a stopwatch automaton expressing Regulation 561 in Section 4. Sections 5 and 6.3 gauge the expressivity of stopwatch automata and the computational complexity of the problems mentioned in Section 1.2: model-checking problem, consistency-checking and scheduling. It turns out that while stopwatch automata have high expressive power, their model-checking complexity is relatively tame, and scales well with the aspects identified in Section 2: we summarize our technical results in Section 2.4. Regulation 561 and various logics Model-checking complexity has been investigated mainly from two interpretational perspectives: database theory and system verification. We give a brief survey guided by our central question to model Regulation 561. Regulation 561 and Büchi's theorem We recall Büchi's theorem and, to fix some notation, the definitions of regular languages and finite automata. An alphabet Σ is a non-empty finite set of letters, Σ * = ⋃ n∈N Σ n denotes the set of (finite) words. A word w = a 0 ⋯a n−1 ∈ Σ n (the a i are letters) has length w ∶= n. A (non-deterministic) finite automaton B is given by a finite set of states Q, an alphabet Σ, sets of initial and final states I, F ⊆ Q, and a set ∆ ⊆ Q × Σ × Q of transitions. A computation of B on w = a 0 ⋯a n−1 ∈ Σ n is a sequence q 0 ⋯q n of states such that (q i , a i , q i+1 ) ∈ ∆ for every i < n. The computation is initial if q 0 ∈ I and accepting if q n ∈ F . 
The language L(B) of B is the set of words w ∈ Σ* such that B accepts w, i.e., there exists an initial accepting computation of B on w. A language (i.e., a subset of words over Σ) is regular if it equals L(B) for some finite automaton B. We refer to [54] for a definition of MSO-definable languages and a proof of:

Theorem 1 (Büchi). A language is regular if and only if it is MSO-definable.

This can be extended to infinite words and trees using various types of automata — we refer to [29] for a monograph on the subject. The proof of Büchi's theorem is effective in that there is a computable function that computes for every MSO-sentence ϕ an automaton B_ϕ that accepts a word w if and only if w ⊧ ϕ. It follows that MC(Σ*, MSO) is fixed-parameter tractable: given an input (w, ϕ), compute B_ϕ and check whether B_ϕ accepts w. This takes time f(|ϕ|) · |w| for some computable function f : N → N. It also follows that Con(Σ*, MSO) is decidable because finite automata have decidable emptiness: there is an (even linear time) algorithm that, given a finite automaton A, decides whether L(A) = ∅.

MSO is a very expressive logic. In [24] it is argued that Regulation 561 can be formalized in MSO, and naturally so. Thus, in a sense MC(Σ*, MSO) is tractable, expressive and natural, so a good answer to our central question. The starting point of this work was the question for a better model, namely improving its tractability. The problem with the runtime f(|ϕ|) · |w| of Büchi's model-checker is that the parameter dependence f(k) grows extremely fast: it is non-elementary in the sense that it cannot be bounded by a tower of 2's of any fixed height with k on top. This is due to the fact that in general the size |B_ϕ| of (a reasonable binary encoding of) B_ϕ is non-elementary in |ϕ|. Under suitable hardness hypotheses this non-elementary parameter dependence cannot be avoided, not even when restricting to first-order logic FO [37]. This motivates the quest for fragments of MSO, or less succinct variants thereof, that allow a tamer parameter dependence.

In system verification, LTL has been proposed: an LTL formula of size k can be translated to a Büchi automaton of size 2^{O(k)} [57] or an alternating automaton of size O(k) [55]. The model-checking problem asks, given a system modeled by a finite automaton A, whether all (finite or infinite) words accepted by the automaton satisfy the given LTL-sentence ϕ. The model-checker decides emptiness of a suitable product automaton accepting L(A) ∩ L(B_¬ϕ) and takes time 2^{O(|ϕ|)} · |A|. This is the dominant approach to model-checking in system verification. [24] formalizes part of Regulation 561 in LTL. In part, these formalizations rely on Kamp's theorem (cf. [52]) stating that LTL and FO have the same expressive power over Σ*. But the translation of an FO-sentence to an LTL-sentence can involve a non-elementary blow-up in size. Indeed, [24] proves lower bounds on the length of LTL-sentences expressing parts of Regulation 561. Very large sentences are not natural and lead to prohibitive model-checking times. Restrict attention to words representing one week of activities, i.e., words of length 7·24·60 over the alphabet Σ = {d, w, r}. A straightforward formalization of Article 6.2 in LTL (using d ∈ Σ as a propositional variable) is a huge disjunction with one disjunct per admissible placement of the at most 56·60 driving minutes within the week, that is, more than 10^2784 many disjuncts. To conclude, MSO gives the wrong model because it does not allow sufficiently fast model-checkers, and LTL is the wrong model because it is not sufficiently (expressive nor) succinct, hence not natural.
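The automata-based recipe — compile the specification into an automaton once, then scan the word — can be illustrated directly. The following sketch (ours; the property and its state encoding are illustrative and are not the construction behind Theorem 1) hard-codes a deterministic automaton for the single property "no run of more than 270 consecutive driving minutes" and checks acceptance of a word by one left-to-right pass, i.e., in time O(|w|).

```python
# A hand-rolled DFA for one fixed property over Sigma = {d, r, w}:
# "the word contains no run of more than MAX_CONSECUTIVE_D consecutive d's".
MAX_CONSECUTIVE_D = 270   # 4.5 hours in minutes

def accepts(word):
    state = 0                          # number of consecutive d's read so far
    for letter in word:
        if letter == "d":
            state += 1
            if state > MAX_CONSECUTIVE_D:
                return False           # rejecting sink state
        else:
            state = 0                  # any non-driving minute resets the run
    return True

print(accepts("d" * 270 + "r" * 45 + "d" * 100))   # True
print(accepts("d" * 271))                           # False
```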
It can be expected that, like Regulation 561, many algorithmic laws concerning sequences of activities state lower and upper bounds on the duration of certain activities or types of activities. The constants used to state these bounds are not necessarily small, and are an important aspect to take into account when analyzing the model-checking complexity.

Regulation 561 and timed modal logics
The above motivates looking at models with built-in timing constraints: "In practice one would want to use 'sugared' versions of LTL, such as metric temporal logic (MTL; [47]) which allow for expressions such as n+1 to be represented succinctly" [24]. MTL has modalities like ◊_[5,8] ϕ expressing that ϕ holds within 5 and 8 time units from now. For Regulation 561, cases are tachograph recordings which, formally, are timed words (a_0, t_0)(a_1, t_1)⋯ where the a_i are letters and the t_i form an increasing sequence of time-points; intuitively, activity a_0 is observed until time point t_0, then a_1 until t_1, and so on. Alur and Dill [4] extended finite automata to timed automata that accept sets of timed words — see [12] for a survey. Roughly speaking, computations of such automata happen in time and are governed by finitely many clocks: transitions from one state to another are enabled or blocked depending on the clock values, and transitions can reset some clocks (to value 0). Alur and Dill [4] proved that timed automata have decidable emptiness, thus enabling the dominant model-checking paradigm. Consequently, a wealth of timed temporal logics have been investigated — [40,13] are surveys. The following are some of the most important choices when defining such a logic: finite versus infinite words; signal-based versus event-based semantics; continuous time R_{≥0} versus discrete time N; branching versus linear time; internal versus external clocks. A subtle choice is between signal- or event-based semantics. It means, roughly and respectively, that the modalities quantify over all time-points or only over the t_i appearing in the timed word; MTL is known to be less expressive in the latter semantics over finite timed words [28]. A crucial choice is between time N or R_{≥0}. Internal clocks appear only on the side of the automata; external clocks appear in sentences which reason about their values.

We briefly survey the most important results. An early success [2] concerns the infinite word, signal-based, branching, continuous time logic TCTL (timed computation tree logic): over (systems modeled by) timed automata it admits a model-checker with runtime t^{O(c)} · k · n, where n is the automaton size, k the size of the input sentence, c the number of clocks, and t the largest time constant appearing in the input. [42] extends this allowing external clocks. However, continuous branching time is semantical and syntactical overkill for Regulation 561. For linear continuous time we find MTL and TPTL (timed propositional temporal logic), a more expressive [15] extension with external clocks. Since model-checking is undecidable for these logics [6,5], fragments have been investigated. Surprisingly, [47] found an fpt model-checker for MTL over event-based finite words via a translation to alternating automata with one clock, albeit with intolerable parameter dependence (non-primitive recursive). MITL (metric interval temporal logic) [5] is the fragment of MTL disallowing singular time constraints as, e.g., in ◊_[1,1] ϕ. [33,44] gives an elegant translation of MITL to timed automata and thereby a model-checker with runtime 2^{O(t·k)} · n.
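For intuition about how such timing constraints read over discrete time, the following sketch (ours; it uses an event-based, discrete-time semantics over finite words, with one position per minute) evaluates a bounded 'eventually' modality ◊_[a,b] p at a position i of a word: the formula holds at i if some position j with a ≤ j − i ≤ b satisfies the atomic predicate p.

```python
# Discrete-time, event-based evaluation of the bounded modality F[a,b] p
# ("within a..b time units from now, p holds") at position i of a finite word.
def eventually_within(word, i, a, b, p):
    return any(p(word[j]) for j in range(i + a, min(i + b + 1, len(word))))

word = "d" * 100 + "r" * 45 + "d" * 20
is_rest = lambda c: c == "r"
print(eventually_within(word, 0, 5, 8, is_rest))     # False: positions 5..8 are driving
print(eventually_within(word, 99, 1, 45, is_rest))   # True: a rest starts at position 100
```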
Over discrete time, [6] adapts the mentioned translation of LTL to Büchi automata and gives a model-checker for TPTL with runtime 2^{O(t^c · k)} · n. As said, from the perspective of algorithmic law, t is not typically small, and runtimes exponential in t = 56 h = 3360 min are prohibitive. Tamer runtimes with t moved out of the exponent have been found for a certain natural MITL-fragment MITL_{0,∞}, both over discrete and continuous time — see [40,5]. However, "standard real-time temporal logics [...] do not allow us to constrain the accumulated satisfaction time of state predicates" [3, p. 414]. It seems that this is just what is required to formalize the mentioned Article 6 (2), and we expect similar difficulties to be encountered with other laws concerning activity sequences. There are various attempts to empower the logics with some reasoning about durations. Stopwatch automata [25] are timed automata that can not only reset clocks but also stop and activate them. However, emptiness is undecidable already for a single stopwatch [41]. Positive results are obtained in [3] for observer stopwatches, i.e., roughly, stopwatches not used to govern the automaton's transitions. On the logical side, [11] and [17] study fragments and restrictions for TCTL with (observer) stopwatches. On another strand, [20] puts forward the calculus of durations, but already tiny fragments turn out undecidable [19]. For discrete time, [39] gives an fpt model-checker via a translation to finite automata. For continuous time, [36] obtains fpt results under certain reasonable restrictions of the semantics. A drawback is that these fpt results have non-elementary parameter dependence. To conclude, the extensive research on "'sugared' versions" of LTL in system verification does not reveal a good answer to our central question for a model-checking problem modeling algorithmic laws concerning activity sequences. In particular, many known model-checkers are too slow in that they do not scale well with the time constants mentioned in the law.

The perspective from algorithmic law
The new perspective on model-checking from algorithmic law seems orthogonal to the dominant perspectives from database theory and system verification in the sense that it seems to guide incomparable research directions. In database theory there is special interest in model-checking problems for a rich class K, formalizing a large class of databases, and possibly weak logics L formalizing simple basic queries. In algorithmic law (concerning activity sequences) it is the other way around, focussing on K = Σ*. System verification gives special interest to infinite words and continuous time (cf. e.g. [2]) while algorithmic law focusses on finite words and discrete time. Most importantly, system verification focusses on structures specifying sets of words: its model-checking problem corresponds to (a generalization of) the consistency problem in algorithmic law. In algorithmic law the consistency problem is secondary; the main interest is in evaluating sentences over single words. Finally, the canonical parameterization of a model-checking problem takes the size |ϕ| of the input sentence ϕ as the parameter. Intuitively, parameterized complexity analysis then focusses attention on inputs of the problem where |ϕ| is relatively small. Due to large constants in time constraints appearing in the law to be formalized, this parameterization does not seem to result in a faithful model of algorithmic law. We shall come back to this point in Section 6.2.
Compared to system verification, this shift of attention in algorithmic law opens the possibility to use more expressive logics while retaining tractability of the resulting model. In particular, complexity can significantly drop via the shift from continuous time, infinite words and consistency-checking, to discrete time, finite words and model-checking. While discrete time is well investigated in system verification, it has been noted that both finite words and model-checking have been neglected — see [34] and [45], respectively. To make the point: over finite words, consistency-checking LTL is PSPACE-complete but model-checking is PTIME, even for the more succinct extensions of LTL with past- and now-modalities [45], or even finite variable FO [56].

Model-checking stopwatch automata: summary
We take advantage of this possibility to use more expressive logics and suggest (a version of) discrete time stopwatch automata SWA as an answer to our central question, that is, we propose MC(Σ*, SWA) as a model for algorithmic laws concerning sequences of activities. Our stopwatches are bounded, and their bounds correspond to time constants mentioned in laws. Mimicking notation from Section 2.2, we let c_A denote the number of stopwatches and t_A the largest stopwatch bound of a stopwatch automaton A. We give the following upper bound on the complexity of MC(Σ*, SWA): there is an algorithm that, given a stopwatch automaton A and a word w, decides whether A accepts w in time polynomial in t_A^{c_A}, the size of A and the length of w. We prove a slightly stronger result in Theorem 20. Notably, the aspect t_A does not appear in the exponent, so this overcomes a bottleneck of various model-checkers designed in system verification (see Section 2.2). We obtain similar algorithms for consistency-checking and scheduling (Corollary 19 and Theorem 21). This is despite the fact that stopwatch automata are highly expressive, namely they have the same expressive power as MSO over finite words (Theorems 15 and 1). The final Section 6 discusses our model MC(Σ*, SWA) following the criteria of Section 1.2, and gives a critical examination of the factor t_A^{c_A} in the runtime of our model-checker. Intuitively, typical inputs have small c_A and large t_A, and it would be desirable to replace this factor by, e.g., one of the form f(c_A) · t_A^{O(1)}. We show this is unlikely to be possible. Theorem 28 implies:

Theorem 4. Assume FPT does not contain the W-hierarchy. Let f : N → N be a computable function. Then there does not exist an algorithm that, given a stopwatch automaton A and a word w, decides whether A accepts w in time f(c_A) · (t_A · |A| · |w|)^{O(1)}.

The complexity-theoretic assumption here is weaker than FPT ≠ W[1] considered earlier.

Stopwatch automata
Before giving our definition we informally describe the working of a stopwatch automaton. A stopwatch automaton is an extension of a finite automaton whose computations happen in discrete time: the automaton can stay for some amount of time in some state and then take an instantaneous transition to another state. There are constraints on which transitions can be taken at a given point of time, as follows. Time is recorded by a set of stopwatches X; every stopwatch x ∈ X has a bound β(x), a maximal time it can record. Every stopwatch is active or not in a given state. During a run that stays in a given state for a certain amount of time, the value of the active stopwatches increases by this amount of time (up to their bounds) while the inactive stopwatches do not change their value. Transitions between states are labeled with a guard and an action.
The guard is a condition on the values of the stopwatches that has to be satisfied for the transition to be taken, usually requiring upper or lower bounds on certain stopwatch values. The action modifies stopwatch values, for example, resets some of the stopwatches to value 0. Instead of transitions, states are labeled by letters of the alphabet. A stopwatch automaton accepts a given word if there exists a computation leading from a special state start to a special state accept and that reads the word: staying in a state for 5 time units means reading 5 copies of the letter labelling the state.

Abstract stopwatch automata
We now give the definitions that have been anticipated by the informal description above.

Definition 5. An abstract stopwatch automaton is a tuple A = (Q, Σ, X, λ, β, ζ, ∆) where
- Q is a finite set of states containing the states start and accept;
- Σ is a finite alphabet;
- X is a finite set of stopwatches;
- λ : Q → Σ labels every state with a letter;
- β : X → N assigns to every stopwatch x its bound β(x);
- ζ ⊆ X × Q determines the active stopwatches: x is active in q if and only if (x, q) ∈ ζ;
- ∆ ⊆ Q × G × A × Q is a finite set of transitions.
Here, G is the set of abstract guards (for A), namely sets of assignments, and A is the set of abstract actions (for A), namely functions from assignments to assignments. An assignment is a function ξ : X → N with ξ(x) ≤ β(x) for every x ∈ X. To be precise, we should speak of a β-assignment, but the β will always be clear from the context. We define the bound of A to be B_A := ∏_{x∈X} (β(x) + 1), understanding that the empty product is 1, so that ∏_{x∈∅} (β(x) + 1) := 1. This is the cardinality of the set of assignments (for A). We say that a transition (q, g, α, q′) ∈ ∆ is from q and to q′, and has abstract guard g and abstract action α.

Computations of stopwatch automata are defined in terms of their corresponding transition systems: a computation of A is a sequence (q_0, ξ_0), t_0, (q_1, ξ_1), t_1, …, t_{ℓ−1}, (q_ℓ, ξ_ℓ) of configurations (q_i, ξ_i), each a state together with an assignment, and durations t_i ∈ N such that for every i < ℓ we have q_i ≠ accept and there is a transition (q_i, g, α, q_{i+1}) ∈ ∆ such that the assignment obtained from ξ_i by increasing the value of every stopwatch active in q_i by t_i (up to its bound) belongs to g and is mapped by α to ξ_{i+1}. In this case, we say that the computation is from (q_0, ξ_0) and to (q_ℓ, ξ_ℓ); it is initial if ξ_0 is constantly 0 and q_0 = start; it is accepting if q_ℓ = accept. The computation reads the word λ(q_0)^{t_0} λ(q_1)^{t_1} ⋯ λ(q_{ℓ−1})^{t_{ℓ−1}}. We understand that σ^0 denotes the empty string for every letter σ in the alphabet Σ and juxtaposition of strings corresponds to concatenation. Through computations, we define strings and languages accepted by a stopwatch automaton.

Definition 8. The automaton A accepts w ∈ Σ* if there is an initial accepting computation of A that reads w. The set of these words is the language L(A) of A.

Remark 9. The requirement q_i ≠ accept for all i < ℓ in the definition of computations means that we interpret accept as a halting state; it implies that the label λ(accept) as well as transitions from accept are irrelevant. Without this condition, w ∈ L(A) would imply wa^n ∈ L(A) for a := λ(accept) and all w ∈ Σ* and n ∈ N.

Remark 10. Stopwatch automata are straightforwardly explained for continuous time R_{≥0}, where they read timed words, and bounds β(x) = ∞. Stopwatch automata according to [25,41] are such automata where guards are Boolean combinations of x ⩾ c (for x ∈ X and c ∈ N), and actions are resets (to 0) of some stopwatches. The emptiness problem for those automata is undecidable [41]. So-called timed automata additionally require stopwatches to be active in all states, and have decidable emptiness [4]. The model allowing x ⩾ c + y (for x, y ∈ X and c ∈ N) in guards still has decidable emptiness and is exponentially more succinct than guards with just Boolean combinations of x ⩾ c ([14]). Allowing more actions is subtle, e.g., emptiness becomes undecidable when x := x ∸ 1 or when x := 2x is allowed; see [16] for a detailed study.

Specific stopwatch automata
To consider an abstract stopwatch automaton as an input to an algorithm, we must agree on how to specify the guards and actions, i.e., properties of and functions on assignments.
This is a somewhat annoying issue because, on the one hand, our upper bounds on the model-checking complexity turn out to be robust with respect to the choice of this specification, in the sense that they scale well with the complexity of computing guards and actions, so a very general definition is affordable. On the other hand, for natural stopwatch automata, including the one we are going to present for the European Traffic Regulation 561, we expect guards and actions to be simple properties and functions. As mentioned, typically guards mainly compare certain stopwatch values with constants or other values, and actions do simple re-assignments of values like setting some values to 0. Hence our choice on how to specify guards and actions is somewhat arbitrary. To stress the robustness part, we use a general model of computation: Boolean circuits. In natural automata, we expect these circuits to be small.

An assignment determines for each stopwatch x ∈ X its bounded value and as such can be specified by b_A := Σ_{x∈X} ⌈log(β(x) + 1)⌉ many bits. We think of the collection of b_A bits as being composed of blocks, with a block of ⌈log(β(x) + 1)⌉ bits corresponding to the binary representation of the value of stopwatch x ∈ X under the assignment. A specific guard is a Boolean circuit with one output gate and b_A many input gates. A specific guard determines an abstract one in the obvious way. A specific action is a Boolean circuit with b_A many output gates and b_A many input gates. On input an assignment, for each clock x ∈ X, it computes the binary representation of a value v_x ∈ N in the block of ⌈log(β(x) + 1)⌉ output gates corresponding to x. Furthermore, we agree that the assignment computed by the circuit maps x to min{v_x, β(x)}, thereby mapping assignments to assignments. A specific action determines an abstract one in the obvious way. A specific stopwatch automaton is defined like an abstract one but with specific guards and actions replacing abstract ones. A specific stopwatch automaton determines an abstract one by taking the abstract guards and actions as those determined by the specific ones. Computations of specific stopwatch automata and the language they accept are defined as those of the corresponding abstract one. The size |A| of a specific stopwatch automaton A is the length of a reasonable binary encoding of it. We shall only be concerned with specific stopwatch automata and shall mostly omit the qualification 'specific'.

A definitorial variation
To showcase the robustness of our definition and for later use, we mention a natural variation of our definition and show it is inessential. A P(Σ)-labeled stopwatch automaton is defined like a stopwatch automaton except that λ now assigns to every state a non-empty set of letters; a computation as above is said to read any word a_0^{t_0} ⋯ a_{ℓ−1}^{t_{ℓ−1}} with a_i ∈ λ(q_i) for every i < ℓ. The language L(A) of A is defined as before. A stopwatch automaton can be seen as a P(Σ)-labeled stopwatch automaton whose state labels are singletons. Conversely, given a P(Σ)-labeled stopwatch automaton A = (Q, Σ, X, λ, β, ζ, ∆) we define a stopwatch automaton A′ = (Q′, Σ, X, λ′, β, ζ′, ∆′) as follows: its states Q′ are pairs (q, a) ∈ Q × Σ such that a ∈ λ(q); for the start and accept states choose any (q, a) for q the start, resp., accept state of A. The λ′-label of (q, a) ∈ Q′ is a, and a stopwatch x ∈ X is active (according to ζ′) in (q, a) if and only if it is active in q (according to ζ). Further, we add transitions with trivial guards and actions from the start state of A′ to the states (q, a) with q the start state of A, and from the states (q, a) with q the accept state of A to the accept state of A′. Given a computation of A as above, choose any a_i ∈ λ(q_i) for every i < ℓ; then the corresponding sequence, with each (q_i, ξ_i) replaced by ((q_i, a_i), ξ_i), is a computation of A′.
The choice of the a i can be made so that this computation reads the same word as the computation (1). If (1) is initial (accepting), make (2) initial (accepting) by adding a 0 →-transition from (to) the start (accept) state of A ′ . Conversely, if (2) is a computation of A ′ , then (1) is a computation of A that reads the same word. To sum up: Proposition 11. There is a polynomial time computable function that maps every P (Σ)labeled stopwatch automaton A to a stopwatch automaton A ′ with B A ′ = B A and L(A) = L(A ′ ). A stopwatch automaton for Regulation 561 Aside expressivity and tractability, we stressed naturality as a criterion of models for algorithmic law. In this section and the next section we make the point for stopwatch automata by implementing Regulation 561. As already mentioned, Regulation 561 is a complex set of articles concerning sequences of activities of truck drivers. Possible activities are driving, resting or other work. The activities over time are recorded by tachographs and formally understood as words over the alphabet Σ ∶= {d, r, w}. In the real world time units are minutes. Regulation 561 limits driving and work times by demanding breaks, daily rest periods and weekly rest periods, both of which can be regular or reduced under various conditions. We construct a stopwatch automaton that accepts precisely the words over Σ that represent activity sequences that are legal according to Regulation 561. The states Q of the automaton are: drive, break, other work, reduced daily, regular daily, reduced weekly, regular weekly, compensate1, compensate2, week, start, accept. The states in the first row have the obvious meaning. The states in the second row represent different kinds of rest periods. The function λ labels other work by w, drive by d and all other states by r. The states compensate1 and compensate2 are used for the most complicated part of Regulation 561 that demands certain compensating rest periods whenever a weekly rest period is reduced. The state week is auxiliary, and accepting computations spend 0 time in it. The same is true for start. So, the λ-labels of start and week do not matter. We construct the automaton stepwise implementing one article after the next, introducing stopwatches along the way. For each stopwatch x we state its bound β(x) and the states q in which it is active, i.e., specifying the pairs (x, q) ∈ ζ. We shall refer to stopwatches that are nowhere active as counters or registers, depending on their informal usage; a bit is a counter with bound 1. We describe a transition (q, g, α, q ′ ) saying that there is a transition from q to q ′ with guard g and action α. We specify guards by a list of expressions of the form z ⩽ r or z +z ′ > r or the like for r ∈ N; this is shorthand for a circuit that checks the conjunction of these conditions. We specify actions by lists of expressions of the form z ∶= r or z ∶= z ′ + r or the like for z, z ′ ∈ X and r ∈ N; this is shorthand for the action that carries out the stated re-assignments of values in the order given by the list. These lists are also described stepwise treating one article after the next. As a mode of speech, when treating a particular law, we shall say that a given transition has this or that action or guard: what we mean is that the actions or guards of the transition of the final automaton is given by the lists of these statements in order of appearance (mostly the order won't matter). 
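Before the article-by-article construction, an executable reference for the semantics of Section 3 may be helpful. The following sketch (ours, and only a sketch: guards and actions are plain Python callables standing in for the Boolean circuits of specific automata, and all names are illustrative) decides acceptance of a word by a breadth-first search over configurations consisting of a position in the word, a state and an assignment.

```python
from collections import deque

class StopwatchAutomaton:
    """Discrete-time stopwatch automaton with bounded stopwatches (a sketch).

    Guards and actions are Python callables on assignments (dicts mapping
    stopwatch names to values); they stand in for Boolean circuits.
    """

    def __init__(self, states, alphabet, stopwatches, label, bound, active,
                 transitions, start="start", accept="accept"):
        self.states = list(states)
        self.alphabet = set(alphabet)
        self.stopwatches = list(stopwatches)
        self.label = label              # state -> letter read while staying there
        self.bound = bound              # stopwatch -> maximal value beta(x)
        self.active = active            # state -> set of stopwatches active there
        self.transitions = transitions  # list of (state, guard, action, state)
        self.start, self.accept = start, accept

    def _advance(self, xi, q, t):
        """Let t time units pass in state q: active stopwatches grow, capped."""
        return {x: min(v + t, self.bound[x]) if x in self.active[q] else v
                for x, v in xi.items()}

    def accepts(self, word):
        n = len(word)
        init = (0, self.start, tuple(sorted((x, 0) for x in self.stopwatches)))
        queue, seen = deque([init]), {init}
        while queue:
            i, q, frozen = queue.popleft()
            if q == self.accept:
                if i == n:
                    return True
                continue                              # accept is a halting state
            xi = dict(frozen)
            t = 0
            # stay t time units in q, reading label(q)^t, then take a transition
            while i + t <= n and (t == 0 or word[i + t - 1] == self.label[q]):
                xi_after = self._advance(xi, q, t)
                for (q1, guard, action, q2) in self.transitions:
                    if q1 == q and guard(xi_after):
                        nxt = action(dict(xi_after))
                        nxt = {x: min(v, self.bound[x]) for x, v in nxt.items()}
                        cfg = (i + t, q2, tuple(sorted(nxt.items())))
                        if cfg not in seen:
                            seen.add(cfg)
                            queue.append(cfg)
                t += 1
        return False
```

In this brute-force sketch at most (|w| + 1) · |Q| · B_A configurations are reachable, with B_A = ∏_x (β(x) + 1) ≤ (t_A + 1)^{c_A}; this is where a factor of roughly t_A^{c_A} enters, in line with the discussion in Section 2.4.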
We illustrate this mode of speech by describing the automaton around start: let x start be a stopwatch with bound 1 and active at start; there are no transitions to start and transitions from start to all other states except week; these transitions have guard x start = 0. Later these transitions shall get more guards and also some actions. These stipulations mean more precisely the following: the bound β satisfies β(x start ) = 1; the set ∆ contains for any state q ∉ {week, start} the transition (start, g, α, q) where the guard g checks the conjunction of x start = 0 and the other guards introduced later, and the action α carries out the assignments and re-assignments as specified later; further, (x start , q) ∈ ζ if and only if q = start. We loosely divide Regulation 561 into daily and weekly demands. We first describe how to implement the daily demands using the first 5 states and daily driving and accept. The other states will be used to implement the weekly demands. During the construction we shall explicitly collect the constants appearing in the articles and denote them by t 0 , . . . , t 16 . Our construction is such that these constants determine all guards, actions and bounds in an obvious way. Knowing this will be useful for the discussion in later sections. Daily demands We use the first 3 states to implement the the law about continuous driving: Article 7 (1st part): After a driving period of four and a half hours a driver shall take an uninterrupted break of not less than 45 minutes, unless he takes a rest period. We use a stopwatch x cd with bound 4.5h + 1 = 271 that is active in drive. Further, we use a stopwatch x break with bound 9h that is active in break. For the law under consideration we could use the bound of 4.5h + 1, the reason we use 9h will become clear later when implementing Article 8.7. There are transitions back and forth between any two of the states break, drive and other work. We give the transitions to break action x break ∶= 0, and the transitions from drive the guard x cd ⩽ 4.5h. This ensures that a computation staying in drive for more than 4.5h will not be able to leave this state, so cannot be accepting. We add two transitions from break to both drive and other work with guard x break ⩾ 45 and action x break ∶= 0; x cd ∶= 0. Transitions to regular daily and reduced daily have action x cd ∶= 0: this ensures the "unless. . . " statement in Article 7 (transitions to weekly rest periods described below will also have this action). The first part of this Article 7 uses constants t 0 ∶= 4.5h = 270; t 1 ∶= 45 (the constant 9h is denominated later by t 16 ). Article 7 allows to divide the demanded break into two shorter ones: Article 7 (2nd part): This break may be replaced by a break of at least 15 minutes followed by a break of at least 30 minutes each distributed over the period in such a way as to comply with the provisions of the first paragraph. To implement this possibility, we use a bit b rb that, intuitively, indicates a reduced break. We add transitions from break to other work and drive with guard 15 ⩽ x break < 45 and action b rb ∶= 1; x break ∶= 0. We note that these transitions do not have action x cd ∶= 0. We add transitions from break to other work and drive with guards b rb = 1 and 30 ⩽ x break and action b rb ∶= 0; x cd ∶= 0; x break ∶= 0. Transitions to states representing daily or weekly rests introduced below all get action b rb ∶= 0. 
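Using the sketch class above, the first part of Article 7 can be written down directly. The state and stopwatch names follow the construction, but the split-break option, the daily and weekly rest states and all later guards are omitted, so this is an illustration of the encoding rather than the automaton defined in this section.

```python
# Article 7 fragment: at most 270 accumulated driving minutes between
# completed 45-minute breaks (daily/weekly rests and split breaks omitted).
H = 60
states = ["start", "drive", "other_work", "break", "accept"]
label = {"start": "r", "drive": "d", "other_work": "w", "break": "r", "accept": "r"}
bound = {"x_start": 1, "x_cd": int(4.5 * H) + 1, "x_break": 9 * H}
active = {"start": {"x_start"}, "drive": {"x_cd"}, "break": {"x_break"},
          "other_work": set(), "accept": set()}

true = lambda xi: True
ident = lambda xi: xi
reset = lambda *names: (lambda xi: {**xi, **{n: 0 for n in names}})

transitions = [("start", lambda xi: xi["x_start"] == 0, reset("x_break"), q)
               for q in ["drive", "other_work", "break"]]
# moving around without completing a break
transitions += [("drive", lambda xi: xi["x_cd"] <= 270, reset("x_break"), "break"),
                ("drive", lambda xi: xi["x_cd"] <= 270, ident, "other_work"),
                ("other_work", true, reset("x_break"), "break"),
                ("other_work", true, ident, "drive"),
                ("break", true, ident, "drive"),
                ("break", true, ident, "other_work")]
# a completed 45-minute break resets the continuous-driving stopwatch
transitions += [("break", lambda xi: xi["x_break"] >= 45, reset("x_break", "x_cd"), q)
                for q in ["drive", "other_work"]]
transitions += [("drive", lambda xi: xi["x_cd"] <= 270, ident, "accept"),
                ("other_work", true, ident, "accept"),
                ("break", true, ident, "accept")]

A7 = StopwatchAutomaton(states, {"d", "r", "w"}, ["x_start", "x_cd", "x_break"],
                        label, bound, active, transitions)
print(A7.accepts("d" * 270 + "r" * 45 + "d" * 60))   # True
print(A7.accepts("d" * 271))                          # False
```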
The second part of Article 7 uses the constant t 2 ∶= 15; we do not introduce a name for 30 but view this constant as equal to t 1 − t 2 = 45 − 15. Article 4.(k) defines 'daily driving time' as the accumulated driving time between two daily rest periods. According to Article 4.(g) daily rest periods can be regular or reduced, the former meaning at least 11h of rest, the latter means less than 11h but at least 9h of rest. These are represented by the states regular daily and reduced daily. Article 8.1: A driver shall take daily and weekly rest periods. Article 8.2: Within each period of 24 hours after the end of the previous daily rest period or weekly rest period a driver shall have taken a new daily rest period. If the portion of the daily rest period which falls within that 24 hour period is at least nine hours but less than 11 hours, then the daily rest period in question shall be regarded as a reduced daily rest period. Weekly rest periods are treated in the next subsection. We use a stopwatch x day with bound 24h + 1 which is active in all states except accept and start, and a stopwatch x dr with bound 11h active in reduced daily and regular daily. We have transitions back and forth between the states break, drive, other work and the states regular daily, reduced daily. The transitions to regular daily are guarded by x day ⩽ 24h − 11h = 780; b rb = 0; transitions to reduced daily are guarded by x day ⩽ 24h − 9h = 900; b rb = 0. The transitions from regular daily are guarded by x dr ⩾ 11h, and the transitions from reduced daily are guarded by 11h > x dr ⩾ 9h -later we shall refer to these guards as definitorial for the states regular daily and reduced daily. Transitions from regular daily, reduced daily have action x dr ∶= 0, x day ∶= 0. All transitions to accept get guard x day ⩽ 24h. Note that an accepting computation cannot involve an assignment satisfying x day > 24h, so eventually has to visit and leave regular daily or reduced daily (or their weekly counterparts, see below). This ensures Article 8.1 for daily rest periods. These laws use constants t 3 ∶= 24h = 1440, t 4 ∶= 11h = 660, t 5 ∶= 9h = 540. Actually, the definition of regular daily rest periods in Article 4.(g) is more complicated: 'regular daily rest period' means any period of rest of at least 11 hours. Alternatively, this regular daily rest period may be taken in two periods, the first of which must be an uninterrupted period of at least 3 hours and the second an uninterrupted period of at least nine hours, To implement this we use a bit b dr indicating that a 3h part of a regular daily rest period has been taken. We duplicate the transitions from regular daily but replace the guard x dr ⩾ 11h by x dr ⩾ 9h, b dr = 1. To add the possibility of taking a partial regular daily rest period of at least 3h we add transitions from regular daily to drive and other work with guards b dr = 0, 3h ⩽ x dr < 11h and action b dr ∶= 1; note these transitions do not have action x day ∶= 0. All transitions with action x day ∶= 0 also get action b dr ∶= 0, including those modeling weekly rest periods described below. This uses the constants t 6 = 3h = 180, t 7 ∶= 9h = 540. The final daily demand constrains daily driving times: Article 6.1: The daily driving time shall not exceed nine hours. However, the daily driving time may be extended to at most 10 hours not more than twice during the week. To implement Article 6.1 we use a stopwatch x dd active at drive with bound 10h + 1 to measure the daily driving time. 
Additionally, we use a counter c dd with bound 3. As described later, this counter will be reset to 0 when the week changes. Duplicate the transitions to regular daily and reduced daily: one gets guard x dd ⩽ 9h, the other guard 9h < x dd ⩽ 10h and action c dd ∶= c dd + 1. Transitions from regular daily and reduced daily get guard c dd ⩽ 2. This used constants t 8 ∶= 10h = 600 and t 9 ∶= 9h = 540. Weekly demands Article 4(i) defines a week as a calendar week, i.e., as the time between Monday 00:00 and Sunday 24:00. Our formalization of real tachograph recordings by timed words replaces the time-points of tachograph recordings by numbers starting from 0. Hence time is shifted and the information of the beginning of weeks is lost. A possibility to remedy this is to use timed words where the beginnings of weeks are marked, or at least the first of them. For simplicity, we restrict attention to tachograph recordings starting at the beginning of a week, that is, we pretend that time-point 0 starts a week. We then leave it to the automaton to determine the time-points when weeks change. To this end, we use the auxiliary state week and a stopwatch x week with bound 7 ⋅ 24h+1 = 168h + 1 that is active at all states except accept and start. All transitions to accept are guarded by x week ⩽ 168h. The state week has incoming transitions from all states except accept and transitions to all states except start. All these transitions are guarded by x week = 168h and the outgoing transitions have actions x week ∶= 0 and c dd ∶= 0 (see the implementation of Article 6.1 above). This ensures that every accepting computation of A enters week for 0 time units exactly every week, i.e., every 168h. Additionally, we want the automaton to switch from week back to the state it came from. To this end we introduce a bit b q for each state q ≠ accept. We give the transition from q to week the action b q ∶= 1, and the transition from week to q the guard b q = 1 and the action b q ∶= 0. The transition from week to accept has no guard involving the bits b q . This uses the constant t 10 ∶= 168h = 10080. Much of the following implementation work is done by adding guards and actions to the transitions from and to week. For example, we can readily implement The time laid down by Directive 2002/15/EC is 60h. Use a stopwatch x ww with bound 60h + 1 that is active at drive and other work. Use a stopwatch x dw with bound 56h + 1 active at drive. To implement Article 6.2, the transitions to week and accept have guard x dw ⩽ 56h, x ww ⩽ 60h, and the transitions from week have action x dw ∶= 0, x ww ∶= 0. Note that accepting computations contain only nodes with assignments satisfying x dw ⩽ 56h and x ww ⩽ 60h. This implements Article 6.2. To implement Article 6.3 we have to remember the value x dw of the previous week. We use a register x ′ dw with the same bound as x dw and give the transitions from week the action x ′ dw ∶= x dw . Note x ′ dw functions like a register in that it just stores a value. We then guard all transitions to accept by x ′ dw + x dw ⩽ 90h. These articles use constants t 11 ∶= 56h = 3360, t 12 ∶= 60h = 3600 and t 13 ∶= 90h = 5400. We now treat the articles concerning weekly rest periods. According to Article 4.(h), weekly rest periods can be regular or reduced, the former meaning at least 45h of rest, the latter means less than 45h but at least 24h of rest. These rest periods are represented by the states regular weekly and reduced weekly. 
To implement their definition we use a stopwatch x wr with bound 45h active in these two states. For the two states we add transitions from and to drive and other work and transitions to accept: those from regular weekly have guard x wr ⩾ 45h and action x wr ∶= 0, and those from reduced weekly have guards 45h > x wr ⩾ 24h and action x wr ∶= 0. Later we shall refer to these guards as definitorial guards for regular weekly and reduced weekly, respectively. This uses the constants t 14 ∶= 45h = 2700, t 15 ∶= 24h = 1440. We start with some easy implementations: Article 8.6 (3rd part): A weekly rest period shall start no later than at the end of six 24-hour periods from the end of the previous weekly rest period. Article 8.3: A daily rest period may be extended to make a regular weekly rest period or a reduced weekly rest period. Article 8.4: A driver may have at most three reduced daily rest periods between any two weekly rest periods. Article 8.6 (3rd part) is implemented with the help of a stopwatch x pw that measures the time since the previous weekly rest period. It has bound 6 ⋅ 24h + 1 and is active in all states excepot start and accept. We give the transitions to regular weekly and reduced weekly the guard x pw ⩽ 6 ⋅ 24h, and the transitions from these two states the action x pw ∶= 0. This law uses constant t 16 ∶= 6 ⋅ 24h = 8640. For Article 8.3 we simply copy the guards and actions of the transitions from drive and other work to regular daily to the corresponding transitions to both regular weekly and reduced weekly. Below we shall add more guards and actions. For Article 8.4 we use a counter c rd with bound 4. We add guard c rd ⩽ 2 and action c rd ∶= c rd + 1 to the transitions to reduced daily and the action c rd ∶= 0 to the transitions leaving reduced weekly and regular weekly. We still have to implement Article 8.1 for weekly rest periods, and additionally Article 8.9: A weekly rest period that falls in two weeks may be counted in either week, but not in both. We use two bits b wr , b used meant to indicate whether a weekly rest period has been taken in the current week, and whether the current weekly rest period is used for this. The transitions from drive or other work to reduced weekly or regular weekly are duplicated: one gets guard b wr = 0 and action b used ∶= 1; b wr ∶= 1, the other gets no further guards and actions. Transitions from reduced weekly or regular weekly get action b used ∶= 0. The transitions to week get guard b wr = 1. Each transition from week to reduced weekly or regular weekly is triplicated: the first gets additional guard b used = 1 and action b used ∶= 0; b wr ∶= 0, the second gets guard b used = 0 and action b wr ∶= 0, and the third gets guard b used = 0 and action b used ∶= 1; b wr ∶= 1. This means that when the week changes during a weekly rest period and this rest period is not used, it can be used for the next week. The most complicated part of Regulation 561 are the rules governing reductions of weekly rest periods. The regulation starts as follows: Article 8.6 (1st part): In any two consecutive weeks a driver shall take at least two regular weekly rest periods, or one regular weekly rest period and one reduced weekly rest period of at least 24 hours. We use a bit b rw indicating whether the previous weekly rest period was reduced: transitions to reduced weekly have guard b rw = 0 and action b rw ∶= 1. Transitions to regular weekly have action b rw ∶= 0. 
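The constructions above repeatedly attach guards and actions to transitions. One minimal way to make this concrete (our own ad-hoc representation, not the formal definition used in the paper) is to model a guard as a predicate and an action as an update on the assignment of stopwatches, registers, counters and bits. For example, a transition leaving regular weekly, with the definitorial guard and the resets stated so far for Articles 8.6 (3rd part), 8.4 and 8.9 (further guards and actions are added to these transitions below):

```python
from dataclasses import dataclass
from typing import Callable, Dict

Assignment = Dict[str, int]        # stopwatches, registers, counters and bits by name
HOUR = 60

@dataclass
class Transition:
    src: str
    dst: str
    guard: Callable[[Assignment], bool]
    action: Callable[[Assignment], Assignment]

def leave_regular_weekly(v: Assignment) -> Assignment:
    v = dict(v)
    v["x_wr"] = 0      # definitorial reset of the weekly-rest stopwatch
    v["x_pw"] = 0      # Article 8.6 (3rd part): time since the previous weekly rest restarts
    v["c_rd"] = 0      # Article 8.4: counter of reduced daily rests restarts
    v["b_used"] = 0    # Article 8.9 bookkeeping: the current weekly rest is over
    return v

regular_weekly_to_drive = Transition(
    src="regular weekly",
    dst="drive",
    guard=lambda v: v["x_wr"] >= 45 * HOUR,   # definitorial guard: at least 45h of rest
    action=leave_regular_weekly,
)

v = {"x_wr": 46 * HOUR, "x_pw": 100 * HOUR, "c_rd": 2, "b_used": 1}
assert regular_weekly_to_drive.guard(v)
assert regular_weekly_to_drive.action(v)["x_pw"] == 0
```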
The regulation continues as follows: Article 8.6 (2nd part): However, the reduction shall be compensated by an equivalent period of rest taken en bloc before the end of the third week following the week in question. Article 8.7: Any rest taken as compensation for a reduced weekly rest period shall be attached to another rest period of at least nine hours. We introduce two registers x c1 , x c2 with bounds 45h − 24h. We shall use the following informal mode of speech for the discussion: a reduced weekly rest period creates a 'compensation obligation', namely an additional resting time x c1 > 0 or x c2 > 1. The obligations are 'fulfilled' by setting these registers back to 0. Note that compensation obligations are created by reduced weekly rest periods and, by Article 8.6 (1st part), this can happen at most every other week. As obligations have to be fulfilled within 3 weeks, at any given time a legal driver can have at most two obligations. We now give the implementation. Obligations are produced by transitions from reduced weekly (recall x wr records the resting time in reduced weekly): duplicate each such transition, give one guard x c1 = 0 and action x c1 ∶= 45h − x wr , and the other guard x c1 > 0; x c2 = 0 and action x c2 ∶= 45h − x wr . The 3 week deadline to fulfill the obligations is implemented by two counters c c1 , c c2 with bound 4. These counters are increased by transitions from week but only if some obligation is actually recorded: transitions from week get action c c1 ∶= c c1 + sgn(x c1 ); c c2 ∶= c c2 + sgn(x c2 ). To ensure the deadline, transitions to week get guard c c1 ⩽ 3; c c2 ⩽ 3. We now implement a way to fullfill obligations, i.e., to set x c1 and x c2 back to 0. This is done with the states compensate1 and compensate2 whose λ-label is r. We use a stopwatch x cr with bound 45h − 24h active at these states. We describe the transitions involving compensate1. It receives transitions from the states with λ-label r, that is, regular daily, reduced daily, regular weekly, reduced weekly and break. The transition from break has guard x break ⩾ 9h, the others have their respective definitorial guards (e.g., the one from regular weekly has guard x wr ⩾ 45h). Transitions from compensate1 go to drive, other work and accept. These have guard x cr ⩾ x c1 and action x c1 ∶= 0 ∶ c c1 ∶= 0. Additionally, we already introduced transitions from and to week: the transition to week is duplicated, one gets guard x cr < x c1 , the other gets guard x cr ⩾ x c1 and action x c1 ∶= 0. Thus, when the week changes during compensation and at a time-point when the obligation is fulfilled, the counter is not increased. This finishes the definition of our automaton. We close this section with some remarks on the formalization: (a) Regulation 561 contains a few laws concerning multi-manning that gives rise to an additional activity available and a distinction between breaks and rests. This is omitted in our treatment. (b) Article 7.2 is formally unclean: the second paragraph allows an exception to the first that obviously cannot "comply with the provisions of the first paragraph". A reasonable formalization requires an interpretational change to the law as written. The following two points or [24,31] give more such examples. (c) The definition in Article 4.(k) forgets the boundary case of a new driver: without any (daily) rest period there cannot be any daily driving time. A similar problem appears with Article 8.6 (3rd part) when there is no previous weekly rest period. 
(d) Concerning Article 6.1, recall that daily driving times are periods delimited by daily rest periods and a week is defined as calendar week starting at Monday 00:00. Consider a 10h extended daily driving time starting on a Sunday and ending on a Monday. To which one of the two weeks should it be counted? The law seems underspecified here. Our formalization assigns it to the week that starts on Monday. Various tachograph readers make different choices. For example, the software Police Controller has an option to fix the choices or to choose the distribution as to minimize the fine [30]. (e) The nomenclature in Regulation 561 is confusing. A day is determined by daily rest periods, a week by the calendar, while weekly (e.g., in Article 8.9) does not refer to calendar weeks. Additionally, the regulation does not state what should be done when a leap second is added on a Sunday so that the time 24:00:01 exists. (f) For example, (dr) 270 is legal according to Article 7 but likely not in line with the spirit of the law. Another regulation ((EU) 2016/799) stipulates that any minute of rest between two minutes of driving will be considered as driving -outruling the above example. Then (ddrr) 135 is still legal. We expect that it is generally easy to construct artificial counterintuitive cases. Theory of stopwatch automata In this section we observe that stopwatch automata have the same expressive power as MSO over finite words but a relatively tame model-checking complexity. We also give efficient algorithms for consistency-checking and scheduling (see Section 1.2). Finally, we mention a version of stopwatch automata going beyond MSO. Expressivity Lemma 13. Every regular language is the language of some stopwatch automaton. The states Q of A are start and accept together with the states S ×Σ labeled λ (s, a) ∶= a (the labels of start and accept are irrelevant). We use a stopwatch x intended to force the automaton to spend 1 time unit in every state (s, a): it has bound β(x) ∶= 2 and is active everywhere, i.e., ζ ∶= {x} × Q. Each transition from some (s, a) to (s ′ , a ′ ) has condition x = 1 and action x ∶= 0. We only allow those transitions from (s, a) to (s ′ , a ′ ) when (s, a, s ′ ) ∈ Γ. This defines ∆ when the states are not start or accept. Transitions from start lead to I × Σ and have guard x = 0. Transitions to accept come from F × Σ and have guard x = 0. The converse of this lemma is based on the following definition. The proof of Lemma 13 gives a polynomial time computable function mapping every finite automaton to an equivalent stopwatch automaton. There is no such function for the converse translation, in fact, stopwatch automata are exponentially more succinct than finite automata. Proposition 16. For every k there is a stopwatch automaton A k of size O(log k) such that every finite automaton accepting L(A k ) has size at least k. We defer the proof to the end of Section 5.5. Consistency-checking By Theorem 15 we know that the languages accepted by stopwatch automata are exactly the regular languages. In particular, these languages are closed under intersections. We give an explicit construction of such an automaton computing an intersection because we shall need explicit bounds. Proof. Let A = (Q, Σ, X, λ, β, ζ, ∆) and A ′ = (Q ′ , Σ, X ′ , λ ′ , β ′ , ζ ′ , ∆ ′ ) be stopwatch automata. 
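As an aside, before the product construction is carried out in detail, the translation behind Lemma 13 is purely mechanical and can be written out directly. The sketch below uses our own encoding (transitions listed as (source, guard, action, target), with guards and actions referring to the single stopwatch x) and is only meant to mirror the construction in the proof:

```python
def regular_to_stopwatch(S, Sigma, I, F, Gamma):
    """Lemma 13: turn a finite automaton (S, Sigma, I, F, Gamma) into a stopwatch automaton.

    Gamma is a set of triples (s, a, s').  States are 'start', 'accept' and the
    pairs (s, a) labeled a; the single stopwatch x has bound 2 and is active
    everywhere, forcing exactly one time unit per visited state (s, a).
    """
    states = ["start", "accept"] + [(s, a) for s in S for a in Sigma]
    label = {(s, a): a for s in S for a in Sigma}
    bounds = {"x": 2}
    transitions = []
    for (s, a, s1) in Gamma:                    # spend one time unit reading a, then move on
        for a1 in Sigma:
            transitions.append(((s, a), "x == 1", "x := 0", (s1, a1)))
    for s in I:                                 # entering the first labeled state takes no time
        for a in Sigma:
            transitions.append(("start", "x == 0", "", (s, a)))
    for s in F:                                 # leaving for accept also takes no time
        for a in Sigma:
            transitions.append(((s, a), "x == 0", "", "accept"))
    return states, label, bounds, transitions

# Tiny usage example: a two-state finite automaton over {a, b}.
states, label, bounds, trans = regular_to_stopwatch(
    {0, 1}, {"a", "b"}, {0}, {1}, {(0, "a", 1), (1, "b", 1)})
assert bounds == {"x": 2}
```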
Without loss of generality, we can assume that (a) X and X ′ are disjoint; (b) neither A nor A ′ contains transitions from its accept state; (c) both ∆ and ∆ ′ contain for every state except the accept state a transition from the state to itself with trivial guard and action; We first define an automaton A × A ′ with alphabet Σ × Σ. Its states are Q, ×Q ′ with start and accept state the pair of corresponding states of A, A ′ . The stopwatches are X ∪ X ′ with the same bounds as in A, ). The transitions are ((q 0 , q ′ 0 ), g * , α * , (q 1 , q ′ 1 )) such that there are (q 0 , g, α, q 1 ) ∈ ∆ and (q ′ 0 , g ′ , α ′ , q ′ 1 ) ∈ ∆ ′ such that g * computes the conjunction of g and g ′ and α * executes α and α ′ in parallel. This is well-defined by (a). Also by (a) we can write assignments for A×A ′ as ξ ∪ξ ′ where ξ, ξ ′ are assignments for A, A ′ . We claim that A × A ′ accepts a word (a 0 , a ′ 0 )⋯(a n−1 , a ′ n−1 ) ∈ (Σ × Σ) n if and only if a 0 ⋯a n−1 ∈ L(A) and is an initial accepting run of A that reads a 0 ⋯a n−1 . Analogously, a ′ 0 ⋯a ′ n−1 ∈ L(A ′ ). Conversely, given a 0 ⋯a n−1 ∈ L(A) and a ′ 0 ⋯a ′ n−1 ∈ L(A ′ ) we can choose initial accepting runs of A and A ′ reading these words, respectively, and have the form: Here, 0 * → denotes the transitive closure of 0 → in TS (A) and TS (A ′ ). Then ℓ = ℓ ′ = n. By (c), we can assume that the 0 * →-paths have the same length. Then the runs have the same length. Then the runs can be combined in the obvious way to an initial accepting run of A × A ′ reading (a 0 , a ′ 0 )⋯(a n−1 , a ′ n−1 ). This proves the claim. The automaton A ⊗ A ′ is easily obtained from a modification of A × A ′ whose initial accepting runs are precisely those initial accepting runs of A × A ′ that read words over {(a, a) a ∈ Σ}. Such a modification is easy to obtain: add a new stopwatch y with bound 1 to A × A ′ that is active in all states; every transition gets action y ∶= 0 and every transition from a state (q, q ′ ) with λ(q) ≠ λ(q ′ ) gets guard y = 0. The claims about the bound of A ⊗ A ′ and the time needed to compute it are clear. The following algorithm can be used to check if the intersection of two languages is empty or not. Informally, we can perceive this as an algorithm that checks whether a certain type of behaviour is illegal according to a law when both the type of behaviour and the law are specified by stopwatch automata. Theorem 18. There is an algorithm that given stopwatch automata A, A ′ with bounds Proof. The algorithm first computes the product automaton A⊗A ′ from the previous lemma. Next, the algorithm computes the finite automaton B(A ⊗ A ′ ) = (S, Σ, I, F, Γ) as given in To compute Γ we first compute the graph on S with edges The algorithm solves the consistency problem for stopwatch automata by fixing input A ′ to some stopwatch automaton with L(A ′ ) = Σ * . Model-checking The Theorem 20. There is an algorithm that given a word w and a stopwatch automaton A with bound B A decides whether w ∈ L(A) in time Proof. Let A = (Q, Σ, X, λ, β, ζ, ∆) have bound B A . Let G = (V, E) be the directed graph whose vertices V are the nodes of TS (A) and whose directed edges E are given by We define a directed graph with vertices {0, . . . , t} × V and the following edges. Edges within each copy {i} × V are copies of E. So these account for at most (t + 1) ⋅ B A ⋅ ∆ many edges, each determined by evaluating guards and actions in time O( A ). 
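Before the model-checking algorithm is continued, the product construction just described can also be made concrete. The sketch below (guards as predicates and actions as updates on a joint assignment, as in the earlier sketches; all names are ours) assembles the transitions of A ⊗ A′ directly, including the synchronisation stopwatch y:

```python
def tensor_transitions(delta1, delta2, label1, label2):
    """Transitions of A (x) A': pairs of states, conjunction of guards, actions in
    parallel (the stopwatch sets are disjoint), plus the extra stopwatch y of
    bound 1 that forces both components to read the same letter.

    delta1, delta2: lists of (q, guard, action, q') with guard: assignment -> bool
    and action: assignment -> assignment.
    """
    result = []
    for (q0, g, act1, q1) in delta1:
        for (p0, h, act2, p1) in delta2:
            def guard(v, g=g, h=h, q0=q0, p0=p0):
                same_letter = label1[q0] == label2[p0]
                # a transition from a state pair with different labels may only
                # fire after 0 time units (guard y = 0)
                return g(v) and h(v) and (same_letter or v["y"] == 0)
            def action(v, act1=act1, act2=act2):
                v = act2(act1(dict(v)))   # disjoint stopwatch sets: order is irrelevant
                v["y"] = 0                # every transition resets y
                return v
            result.append(((q0, p0), guard, action, (q1, p1)))
    return result
```

Emptiness of L(A) ∩ L(A′) can then be decided on the associated finite transition system, exactly as in the algorithm of Theorem 18.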
Further edges lead from vertices in the i-th copy {i} × V to vertices in the (i + 1)th copy {i + 1} × V , namely from (i, (q, ξ)) to (i + 1, (q, ξ ′ )) if (q, ξ) 1 → (q, ξ ′ ) and q ≠ accept and λ(q) = w i . There are at most t ⋅ Q ⋅ B A such edges between copies. This graph has size O(t ⋅ A ⋅ B A ) and can be computed in time It is clear that w ∈ L(A) if and only if (t, (accept, ξ ′ )) for some assignment ξ ′ is reachable in the sense that there is a path from (0, (start , ξ 0 )) with ξ 0 constantly 0 to it. Checking this takes time linear in the size of the graph. Scheduling We strengthen the model-checker of Theorem 20 to solve the scheduling problem: the modelchecker treats the special case for inputs with n = 0. Theorem 21. There is an algorithm that given a stopwatch automaton A with bound B A and alphabet Σ, a word w ∈ Σ * , a letter a ∈ Σ and n ∈ N, rejects if there does not exist a word v over Σ of length n such that wv ∈ L(A) and otherwise computes such a word v with maximal # a (v). It runs in time Proof. Consider the graph constructed in the poof of Theorem 20 but with t + n instead of t and the following modification: in (3) for t ⩽ i < t + n drop the condition λ(q) = w i for edges between the i-th and the (i + 1)-th copy. In the resulting graph there is a reachable vertex (t + n, (accept, ξ ′ )) for some assignment ξ ′ if and only if there exists a length n word v such that wv ∈ L(A). We now show how to compute the maximum value # a (v) for such v. Successively for i = 0, . . . , n compute a label V i (q, ξ) for each vertex (t + i, (q, ξ)) in the (t + i)-th copy. For i = 0 all these labels are # a (w). For i > 0 label (t + i, (q, ξ)) with the maximum value taken over ξ ′ such that there is an edge from (t+i−1, (q, ξ ′ )) to (t+i, (q, ξ)). Then the desired maximum value # a (v) is the maximum label V n (q, ξ) such that q = accept and (t + n, (q, ξ)) is reachable. Additionally we are asked to compute a word v witnessing this value. To do so the labeling algorithm computes a set of directed edges, namely for each (t + i, (q, ξ)) with i > 0 to a vertex (t + i − 1, (q, ξ ′ )) witnessing the maximum value above. This set of edges defines a partial function that, for each i > 0, maps vertices in the (t + i)-th copy to vertices in the (t + i − 1)-th copy. To compute v as desired start at a vertex (t + n, (q n , ξ n )) witnessing the maximal value # a (v) and iterate this partial function to get a sequence of vertices (t + i, (q i , ξ i )). Then v ∶= λ(q 1 )⋯λ(q n ) is as desired. It is clear that all this can be done in time linear in the size of the graph. Beyond regularity A straightforward generalization of stopwatch automata allows β to take value ∞. An unbounded stopwatch automaton is a stopwatch automaton where β is the function constantly ∞. We note that model-checking is undecidable already for simple such automata (see [16,Proposition 1] for a similar proof). These simple automata use two stopwatches x, y that are nowhere active (i.e., ζ = ∅), all guards check z = 0 or z ≠ 0, and all actions are either z ∶= z + 1 or z ∶= z−1 = max{z − 1, 0} for some z ∈ {x, y}. Proposition 22. There is no algorithm that given a simple unbounded stopwatch automaton decides whether it accepts the empty word. Proof. Recall, a two counter machine operates two variables x, y called counters and is given by a finite non-empty sequence (π 0 , . . . 
, π ℓ ) of instructions π i , namely, either z ∶= z + 1, z ∶= z−1, "Halt" or "if z = 0, then goto j, else goto k" where z ∈ {x, y} and j, k ⩽ ℓ; exactly π ℓ is "Halt". The computation (without input) of the machine is straightforwardly explained. It is long known that it is undecidable whether a given two counter machine halts or not. Given such a machine (π 0 , . . . , π ℓ ) it is easy to construct a simple automaton that accepts the empty word if and only if the two counter machine halts. It has states Q = {0, 1, . . . , ℓ} understanding start = 0 and ℓ = accept; Σ and λ are unimportant, and ∆ is defined as follows. If π i is the instruction z ∶= z + 1, then add the edge (i, g, α, i + 1) where g is trivial and α changes z to z + 1. If π i is the instruction z ∶= z−1, proceed similarly. If π i is "if z = 0, then goto j, else goto k" add edges (i, g, α, j), (i, g ′ , α, k) where g checks z = 0 and g ′ checks z ≠ 0 and α computes the identity. What seems to be a middle ground between unbounded stopwatches and stopwatches with a constant bound is to let the bound grow with the length of the input word. The definition of a stopwatch automaton A = (Q, Σ, X, λ, β, ζ, ∆) can be generalized letting β ∶ X × N → N be monotone in the sense that β(x, n) ⩽ β(x, n ′ ) for all x ∈ X, n, n ′ ∈ N with n ⩽ n ′ . We call this a β-bounded stopwatch automaton and call B A ∶ N → N defined by the bound of A. For each n ∈ N we have a stopwatch automaton A(n) ∶= (Q, Σ, X, β n , ζ, λ, ∆) where β n ∶ X → N maps x ∈ X to β(x, n); note B A(n) = B A (n). The language L(A) accepted by a β-bounded stopwatch automaton A contains a word w over Σ if and only if w ∈ L(A( w )). Proposition 23. A language is accepted by some stopwatch automaton if and only if it is accepted by some β-bounded stopwatch automaton with bounded β. Proof. Let A be a β-bounded stopwatch automaton for bounded β. There is n 0 ∈ N such that β(x, n) = β(x, n 0 ) for all x ∈ X and n ⩾ n 0 . Hence L(A(n 0 )) and L(A) contain the same words of length at least n 0 . Since there are only finitely many shorter words, and L(A(n 0 )) is regular by Theorem 15, also L(A) is regular. Theorem 20 on feasible model checking generalizes: Corollary 24. Let X be a finite set and assume β ∶ X × N → N is such that β(x, n) is computable from (x, n) ∈ X × N in time O(n). Then there is an algorithm that given a word w and a β-bounded stopwatch automaton A with bound If β(x, n) grows slowly in n this can be considered tractable. Any growth, no matter how slow, leads to non-regularity: Proposition 25. Let f ∶ N → N be unbounded and non-decreasing. Then there is a β-bounded stopwatch automaton A = (Q, Σ, X, λ, β, ζ, ∆) with β(x, n) = f (n) for all x ∈ X and all n ∈ N such that L(A) is not regular. Proof. Let Σ be the three letter alphabet {a, b, c}, and let L contain a length t word over Σ if it has the form a s b s c * for some s < f (t). Since f is unbounded, L contains such words for arbitrarily large s. It thus follows from the Pumping Lemma, that L is not regular. It suffices to define a β-bounded stopwatch automaton A such that that accepts a word of sufficiently large length t if and only if it belong to L. The states are start, accept, q a , q b , q c with λ-labels a, a, a, b, c, respectively. We use stopwatches x a , y a , x b all with bound f (t) and declare x a , y a active in q a and start, and x b active in q b . There are transitions from start to q a , from q a to q b , from q b to q c , and from q c to accept -described next. 
The transition from start to q a has guard x a = 0 and action y a ∶= 1. For sufficiently large t, the bound f (t) of x a is positive. Then any initial accepting computation (of A on a word of length t) spends 0 time in start, and thus starts (start , [0, 0, 0]) 0 → (q a , [0, 1, 0]); we use a notation like [1,2,3] to denote the assignment that maps x a to 1, y a to 2, and x b to 3. The transition from q a to q b has guard x < y and trivial action. An initial accepting computation on a word of length t can stay in q a for some time r reaching (q a , [r, r + 1, 0]) for r < f (t), or reaching [f (t), f (t), 0] for r ⩾ f (t) due to the bound of x, y. In the latter case the transition to q b is disabled and accept cannot be reached. Staying in q a for any time s < f (t) allows the transition to q b . The transition from q b to q c has guard x a = x b and trivial action. The transition from q c to accept has trivial guard and action. We can now prove that stopwatch automata are exponentially more succinct than finite automata as was expressed in Proposition 16. Proof of Proposition 16. Consider the previous proof for the function f constantly k. Clearly, then L is regular. By the Pumping Lemma, a finite automaton accepting L has at least k states. The stopwatch automaton A accepts L and has size O(log k). Indeed, the size of a binary encoding of A is dominated by the bits required to write down the bound k of the stopwatches. Discussion and a lower bound We suggest the model-checking problem for stopwatch automata and finite words (over some finite alphabet) as an answer to our central question in Section 1.2, the quest for a model for algorithmic laws concerning activity sequences. This section discusses to what extent this model meets the three desiderata listed in Section 1.2, and mentions some open ends for future work. Summary Expressivity Stopwatch automata are highly expressive, namely, by Theorems 15 and 1, equally expressive as MSO (over finite words). In particular, [24] argued that Regulation 561 is expressible in MSO, so it is also expressible by stopwatch automata. In Section 5.5 we showed that a straightforward generalization of stopwatch automata can go even beyond MSO. Future research might show whether this is useful for modeling actual laws. Example 26. Imagine an employee who can freely schedule his work and choose among various activities Σ to execute at any given time point. The employer favors an activity a ∈ Σ and checks at random time-points that the employee used at least a third of his worktime on activity a since the previous check. The set of w ∈ Σ * with # a (w) ⩾ w 3 is not regular but is accepted by a simple β-bounded stopwatch automaton with one stopwatch x and bound β(x, t) = ⌈t 3⌉. Naturality We stressed that expressivity alone is not sufficient, natural expressivity is required. This is an informal requirement, roughly, it means that the specification of a law should be readable, and in particular, not too large. In particular, as emphasized in Section 2.1, constants appearing in laws bounding durations of certain activities should not blow up the size of the formalization (like it is the case for LTL). We suggest that our expression of Regulation 561 by a stopwatch automaton is natural. There is a possibility to use stopwatch automata as a law maker: an interface that allows to specify laws in a formally rigorous way without assuming much mathematical education. 
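Returning briefly to Example 26: stripped of the automaton, the acceptance condition is of course trivial to state; a two-line check (ours) makes the language concrete. The point of the example is that a single stopwatch with the length-dependent bound ⌈t/3⌉ suffices to decide it on-line, although the language is not regular.

```python
def favoured_share_ok(w: str, a: str = "a") -> bool:
    """Example 26: at least one third of the activities in w is the favoured activity a."""
    return 3 * w.count(a) >= len(w)

assert favoured_share_ok("abcabca")       # 3 occurrences of 'a' in 7 letters
assert not favoured_share_ok("abbbbb")    # 1 occurrence of 'a' in 6 letters
```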
It is envisionable to use graphical interfaces akin to the one provided by UPPAAL 10 to draw stopwatch automata. A discussion of this possibility as well as the concept of "readability" is outside the scope of this paper. Tractability The main constraint of a model-checking problem as a formal model for algorithmic law is its computational tractability. In particular, the complexity of this problem should scale well with the constants appearing in the law. This asks for a fine-grained complexity analysis taking into account various aspects of a typical input, and, technically, calls for a complexity analysis in the framework of parameterized complexity theory. Theorem 20 gives a model-checker for stopwatch automata. Its worst case time complexity upper bound scales transparently with the involved constants, and, most importantly, the runtime is not exponential in these constants. This overcomes a bottleneck of many model-checkers designed in the context of system verification (see Section 2.2). Theorems 18 and 21 give similar algorithms for consistency-checking and scheduling. Parameterized model-checking We have an upper bound O( A 2 ⋅ B A ⋅ w ) to the worst case runtime of our model-checker. The troubling factor is B A : the runtime grows fast with the stopwatch bounds of the automaton. Intuitively, these bounds stem from the constants mentioned by the law as duration constraints on activities. At least, this is the case for our formalization of Regulation 561: we explicitly mentioned 17 constantst = (t 0 , . . . , t 16 ) which determine our automaton, specifically its bounds, guards and actions. To wit,t determines bounds on stopwatches as follows: The other stopwatches have bounds independent oft ∈ N 17 . For any choice oft we get an automaton A(t) that accepts exactly the words that represent activity sequences that are legal according to the variant of Regulations 561 obtained by changing these constants tot. It is a matter of no concern to us that not all choices fort lead to meaningful laws. We are interested in how the runtime of our model-checker for Regulation 561 depends on these constants. By Theorem 20 we obtain: Corollary 27. There is an algorithm that givent ∈ N 17 and a word w decides whether w ∈ L(A(t)) in time This casts doubts whether the factor B A in our worst-case runtime O( A 2 ⋅B A ⋅ w ) should be regarded tractable. Can we somehow improve the runtime dependence from the constants? For the sake of discussion, note that B A is trivially bounded by t c A A where c A is the number of stopwatches of A and t A is the largest bound of some stopwatch of A (as in Section 2.4). Intuitively, c A is "small" but t A is not. In the spirit of parameterized complexity theory it is natural to ask whether the factor ( for some computable function f ∶ N → N. We now formulate this question precisely in the framework of parameterized complexity theory. The canonical parameterized version of our model-checking problem is Input: a stopwatch automaton A = (Q, Σ, X, λ, β, ζ, ∆) and w ∈ Σ * . Parameter: A . Problem: w ∈ L(A) ? Our model-checker of Theorem 20 witnesses that this problem is fixed-parameter tractable. for some computable f ∶ N → N because the circuits in A have size ⩾ log B A . Intuitively, that B A is bounded in terms of the parameter A means that the parameterized problem above models instances where B A is "small", in particular β takes "small" values. But there are cases of interest where this is not true: the constant t 10 ∶= 10080 in Regulation 561 is not "small". 
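Before turning to a more appropriate parameterization, it is instructive to see concretely where the factor B_A enters. The model checker behind Theorem 20 is, in essence, a breadth-first search over the layered graph; the sketch below uses an ad-hoc automaton interface of our own (initial, instantaneous, tick, letter, is_accepting), with the instantaneous successors standing in for the edge set E of TS(A):

```python
from collections import deque

def accepts(word, A):
    """Layered-graph reachability sketch of the Theorem 20 model checker.

    A node of TS(A) is a hashable pair (state, assignment).  `A.instantaneous(node)`
    yields the 0-duration successors (a transition whose guard holds fires and its
    action is applied); `A.tick(node)` is the node after one time unit (active
    stopwatches grow, capped at their bounds); `A.letter(node)` is the label of the
    node's state; `A.is_accepting(node)` tests for the accept state.
    """
    t = len(word)
    start = (0, A.initial())
    seen, queue = {start}, deque([start])
    while queue:
        i, node = queue.popleft()
        if i == t and A.is_accepting(node):
            return True
        successors = [(i, m) for m in A.instantaneous(node)]        # edges inside copy i
        if i < t and not A.is_accepting(node) and A.letter(node) == word[i]:
            successors.append((i + 1, A.tick(node)))                 # read word[i]
        for n in successors:
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return False
```

Each of the t + 1 copies contains at most |Q| ⋅ B_A distinct nodes, which is exactly where the troubling factor B_A in the running time comes from.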
In the situation of such an algorithmic law, the above parameterized problem is the wrong model. A better model parameterizes a model-checking instance (A, w) by the size of A but discounts the stopwatch bounds. More precisely, consider the following parameterized problem: a stopwatch automaton A = (Q, Σ, X, λ, β, ζ, ∆) and w ∈ Σ * . Parameter: Q + Σ + X + ∆ . Problem: w ∈ L(A) ? Note that the algorithm of Theorem 20 does not witness that this problem would be fixed-parameter tractable. We arrive at the precise question: Is p-SWA fixed-parameter tractable? A lower bound In this section we prove that the answer to the above question is likely negative: Theorem 28. p-SWA is not fixed-parameter tractable unless every problem in the Whierarchy is fixed-parameter tractable. We refer to any of the monographs [26,35,27] for a definition of the W-hierarchy W[1] ⊆ W[2] ⊆ ⋯. As mentioned in Section 1.2, the central hardness hypothesis of parameterized complexity theory is that already the first level W [1] contains problems that are not fixedparameter tractable. We thus consider Theorem 28 as strong evidence that the answer to our question is negative. We prove Theorem 28 by a reduction from a parameterized version of the Longest Common Subsequence Problem (LCS). This classical problem takes as inputs an alphabet Σ, finitely many words w 0 , . . . , w k−1 over Σ and a natural number m. The problem is to decide whether the given words have a common subsequence of length m: such a subsequence is a length m word a 0 ⋯a m−1 over Σ (the a i are letters from Σ) that can be obtained from every w i , i < k, by deleting some letters. In other words, for every i < k there are j i 0 < ⋯ < j i m−1 < w i such that for all ℓ < m the word w i has letter a ℓ at position j i ℓ . For example, both bbaccb and bbaacb are common subsequences of abbaaccb and bbacccacbb. The statement that p-LCS is fixed-parameter tractable means that it can be decided by an algorithm that on an instance (Σ, w 0 , . . . , w k−1 , m) runs in time for some computable function f ∶ N → N. The existence of such an algorithm is unlikely due to the following result: Theorem 29 ([8]). p-LCS is not fixed-parameter tractable unless every problem in the Whierarchy is fixed-parameter tractable. Proof of Theorem 28: Let (Σ, w 0 , . . . , w k−1 , m) be an instance of p-LCS, so Σ is an alphabet, w 0 , . . . , w k−1 ∈ Σ * and m ∈ N. Let w ∶= w 0 ⋯w k−1 be the concatenation of the given words, and consider w m , the concatenation of m copies of w. We construct a P (Σ)-labeled stopwatch automaton A = (Q, Σ, X, λ, β, ζ, ∆) that accepts w m if and only if w 0 , . . . , w k−1 have a common subsequence of length m. An initial accepting computation of A on w m proceeds in m rounds, each round reads a copy of w. In round ℓ < m the computation guesses a position within each of the words w 0 , . . . , w k−1 copied within w, and ensures they all carry the same letter. These positions are stored in registers (i.e., nowhere active stopwatches) x 0 , . . . , x k−1 with bounds w 0 + 1, . . . , w k−1 + 1, respectively. Our intention is that the value of x i after round ℓ < m equals the position j i ℓ in the definition of a common subsequence. Our intention is that an initial accepting computation in round ℓ < m cycles though k many guess parts of the automaton. Within guess part 0, the computation reads w 0 (within copy ℓ of w in w m ), within guess part 1 the computation reads w 1 and so on. 
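Before the states of A are described in detail, it may help to keep in mind what the automaton has to verify in each round. Stripped of the automaton machinery, the certificate notion of p-LCS is just the following test (a sketch with our own function names; greedy leftmost matching is sound and complete for subsequence tests):

```python
def is_common_subsequence(v, words):
    """Is v a common subsequence of every word in `words`?"""
    def is_subsequence(v, w):
        pos = 0
        for letter in v:
            pos = w.find(letter, pos) + 1
            if pos == 0:              # letter not found right of the previous match
                return False
        return True
    return all(is_subsequence(v, w) for w in words)

# The example from the text: both words below are common subsequences
# of abbaaccb and bbacccacbb.
assert is_common_subsequence("bbaccb", ["abbaaccb", "bbacccacbb"])
assert is_common_subsequence("bbaacb", ["abbaaccb", "bbacccacbb"])
```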
The states of A are the states of the guess parts plus an additional state accept. Each guess part consists of a copy of the states start, end, and guess(a) for a ∈ Σ. The λ-labels of start and end are Σ, the λ-label of guess(a) is {a}. The start state of A is start in guess part 0. We intend that the computation in guess part i < k spends some time t < |w_i| in start, then spends exactly one time unit in some state guess(a), and then spends time |w_i| − t in end before switching to the next guess part. The position guessed is t and stored as the value of x_i. Writing momentarily w_i = a_0 a_1 ⋯ a_{|w_i|−1}, the computation reads the (possibly empty) word a_0 ⋯ a_{t−1} in state start, then reads a_t in state guess(a_t), and then reads the (possibly empty) word a_{t+1} ⋯ a_{|w_i|−1} in state end. We enforce this behavior as follows. There are transitions from start (in guess part i) to guess(a) for every a ∈ Σ, and for every a ∈ Σ from guess(a) to end. We use a stopwatch y_i with bound |w_i| + 1 active in all states of guess part i and a stopwatch z with bound 2 active in the states guess(a), a ∈ Σ, of any guess part. It will be clear that initial accepting computations enter guess part i with both y_i and z having value 0. The transitions from start to guess(a), a ∈ Σ, have guard checking x_i < y_i < |w_i| and action setting x_i ∶= y_i. The transitions from guess(a), a ∈ Σ, to end have guard checking z = 1 and action setting z ∶= 0. The state end in guess part i < k − 1 has a transition to start in guess part i + 1; for i = k − 1 this transition is to start in guess part 0. These transitions have guard checking y_i = |w_i| and action setting y_i ∶= 0. Observe that the computation spends time |w_i| in guess part i < k and increases the value of x_i. Hence the values of x_i after each round form an increasing sequence of positions < |w_i|. We have to ensure that the values of x_0, . . . , x_{k−1} after a round are positions in the words w_0, . . . , w_{k−1}, respectively, that carry the same letter. Write Σ = {a_0, . . . , a_{|Σ|−1}}. We use a register x̃ with bound |Σ| − 1. In guess part 0, the action of the transition from guess(a_j) to end also sets x̃ ∶= j. In the guess parts i < k for i ≠ 0, the guards of the transitions from start to guess(a_j) check that x̃ = j. We count rounds using a register ỹ with bound m. We let the action of the transition from end in guess part k − 1 to start in guess part 0 set ỹ ∶= ỹ + 1. From copy 0 of start there is a transition to accept with guard ỹ = m. This completes the construction of A. To prove the theorem, assume p-SWA is fixed-parameter tractable, i.e., there is an algorithm deciding p-SWA that on an instance (A, w) runs in time f(k′) ⋅ |w|^O(1), where k′ is the parameter of the instance and f ∶ N → N is a nondecreasing computable function. By Theorem 29 it suffices to show that p-LCS is fixed-parameter tractable. Given an instance (Σ, w_0, . . . , w_{k−1}, m) of p-LCS, answer "no" if m > |w_0|. Otherwise compute the automaton A as above and then compute an equivalent stopwatch automaton A′ as in the construction behind Proposition 11. It is clear that (A′, w^m) is computable from (Σ, w_0, . . . , w_{k−1}, m) in polynomial time (since m ⩽ |w_0|). Then (A′, w^m) is a "yes"-instance of p-SWA if and only if (Σ, w_0, . . . , w_{k−1}, m) is a "yes"-instance of p-LCS. Hence to decide p-LCS it suffices to run the algorithm for p-SWA on (A′, w^m). This takes time f(k′) ⋅ |w^m|^O(1), where k′ is the parameter of (A′, w^m).
By construction, it is clear that k′ ⩽ g(k + |Σ|) for some computable g ∶ N → N (in fact, k′ ⩽ (k + |Σ|)^O(1)). Since m ⩽ |w_0|, the time f(k′) ⋅ |w^m|^O(1) is bounded by f(g(k + |Σ|)) ⋅ (|w_0| + ⋯ + |w_{k−1}|)^O(1). Thus, p-LCS is fixed-parameter tractable. Recall, p-SWA is meant to formalize the computational problem to be solved by general purpose model-checkers in algorithmic law. Being general purpose, the set of activities Σ should be part of the input, since it varies with the laws to be modeled. Nevertheless one might ask whether the hardness result in Theorem 28 might be side-stepped by restricting attention to some fixed alphabet Σ. This is unlikely to be the case: the restriction p-SWA({0, 1}) of p-SWA to instances over the fixed alphabet {0, 1} is not fixed-parameter tractable unless every problem in W[1] is fixed-parameter tractable. Proof. Note that the reduction (Σ, w_0, . . . , w_{k−1}, m) ↦ (A′, w^m) (for m ⩽ |w_0|) in the proof above constructs an automaton A′ over the same alphabet Σ. It is thus a reduction from the restriction of p-LCS to instances with Σ = {0, 1} to p-SWA({0, 1}). Now, [50] showed that this restriction is W[1]-hard.
2023-07-13T07:36:16.340Z
2023-07-11T00:00:00.000
{ "year": 2023, "sha1": "76cfe8ee3bdfd1aec75a8fad6435c2abf931e01e", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "76cfe8ee3bdfd1aec75a8fad6435c2abf931e01e", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
55596436
pes2o/s2orc
v3-fos-license
Meningiomas of the Craniovertebral Junction: A Review Meningiomas are tumors originating from the meninges of the brain and spine that are typically benign. Mass-effect and traction of nearby structures resulting in neurological sequelae often predicates surgical resection. Vital structures are unavoidably encountered en route to craniocervical junction meningiomas, posing a significant surgical challenge. The clinical presentation varies based on neoplasm size and location. Resection techniques also vary based on individual anatomy and the required exposure. *Corresponding author: Eric M Deshaies, Department of Neurosurgery, SUNY Upstate Neurovascular Institutel 750 East Adams Street Syracuse, NY 13210, USA, Tel: 315-464-5502; Fax: 315-464-6373; E-mail: deshaiee@upstate.edu Received November 21, 2013; Accepted December 26, 2013; Published December 31, 2013 Citation: Galgano MA, Beutler T, Brooking A, Deshaies EM (2013) Meningiomas of the Craniovertebral Junction: A Review. J Spine 3: 150. doi:10.4172/21657939.1000150 Copyright: © 2013 Galgano MA, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Introduction Meningiomas are tumors originating from the meninges of the brain and spine that are typically benign; however, there are welldescribed subtypes that have a more aggressive nature, clinically and pathologically. Meningiomas have the potential to be located anywhere from the convexity of the brain to the most distal aspect of the spine. Although meningiomas often are pathologically benign, their ability to situate themselves in locations inhabited by vital central nervous system (CNS) structures often makes them a surgical challenge. In this paper, we will focus on meningiomas that predominate at the craniocervical junction. Pathology of Meningiomas The term meningioma represents a broad histological group of tumors with variable behavior, derived from meningothelial cells that are typically attached to the inner surface of the dura mater, and are classified by the World Health Organization (WHO) grades I, II and III [1]. Meningiomas can arise from any location where meninges or ectopic meninges may exist, and represent nearly one third of all primary intracranial neoplasms and one quarter of all primary spinal cord neoplasms [2]. In children (ages 0-19), the incidence of meningioma is 3.7% with a male to female ratio of 1:1. While 90% of meningiomas are benign (grade I), the incidence of atypical (grade II) and anaplastic (grade III) are 6-8% and 2-3% respectively [3,4]. Cushing described these tumors as arising from meningial origin. This served as a unifying description for a group of histologically and genetically diverse tumors, thought to be derived from arachnoid cap cells. Arachnoid cap cells function as a structural barrier, act as a conduit for cerebrospinal fluid (CSF) resorption, aid in monocytelike functions, participate in reactive gliosis, and aid in detoxification within the CNS [1,5]. The meninges comprise the dura mater and the leptomeninges (arachnoid and pia mater). Dura forms an outer endosteal layer related to the bones of the skull and spine and an inner layer closely applied to the arachnoid mater. Leptomeninges have multiple functions and anatomical relationships. 
The outer parietal layer of arachnoid is impermeable to CSF due to tight intercellular junctions; elsewhere leptomeningeal cells form demosomes and gap junctions. Trabeculae of leptomeninges compartmentalize the subarachnoid space and join the pia to arachnoid mater. In bacterial meningitis leptomeningeal cells secrete cytokines. Pia mater is reflected from the surface of the brain and spinal cord onto arteries and veins, thus separating the subarachnoid space from the brain and cord. A sheath of leptomeninges accompanies arteries into the brain and is related to the pathways for the drainage of interstitial fluid that play a role in inflammatory responses in the brain and appear to be blocked by amyloid-beta in Alzheimer's disease. Specialised leptomeningeal cells in the stroma of the choroid plexus form collagen whorls that become calcified with age. Leptomeningeal cells also form channels in the core and apical cap of arachnoid granulations for the drainage of CSF into venous sinuses. In the spine, leptomeninges form highly perforated intermediate sheets of arachnoid and delicate ligaments that compartmentalize the subarachnoid space; dentate ligaments anchor subpial collagen to the dura mater and stabilize the spinal cord. Despite the multiple anatomical arrangements and physiological functions, leptomeningeal cells retain many histological features that are similar from site to site [1,5] . Arachnoid cap cells demonstrate mixed epithelial and mesenchymal origins, appearing normally as a single layer resembling fibroblasts, or epitheliod nests composed of multiple layers. Clustering of arachnoidal cap cells and the concomitant formation of whorles and psammoma bodies become increasingly prominent with age, and are identical with the characteristics of meningiomas. These cytological similarities, and the behavior of the cells, serve as the basis for arachnoid cap cells being favored as the likely cell origin of meningiomas [1]. Loss of heterozygosity of the neurofibromin 2 gene (NF2), located at the 12.2 region on chromosome 22, is the most common genetic abnormality found in meningiomas, and is responsible for 60% of sporadic as well as the majority of NF2 associated meningiomas. The NF2 gene codes for the protein Merlin and although its function is poorly understood, is thought to play a role in the 2 hit model of tumor inactivation. Consistent with tumor suppressor gene inactivation theory, up to 60% of meningiomas have an associated NF2 gene mutation, which potentially serves as the initiating event of tumorigenesis, particularly those demonstrating a mesenchymal phenotype [1]. Tumor suppressor gene DAL-1/4.1B, located on chromosome 18, is a member of the 4.1 protein superfamily and is another gene that has commonly been associated with meningiomas. The role of this gene may contribute less to the initiation of tumorigenesis as initially thought, and may serve as a progression locus for sporadic tumors [6]. Alterations of the 1p, 14q, chromosome 10 and 18q have been associated with atypical and anaplastic grades to a higher degree. Multiple meningiomas are found in about 1-2% of patients with an already known meningioma [7]. This statistic rises in individuals with neurofibromatosis type 2 [8]. There are some reports in the literature that make reference to the fact that some meningiomas may actually spread through the CSF pathways. The majority of these cases described the meningiomas pathologically as malignant or angioblastic [8][9][10][11][12]. 
Clinical presentation Patients typically become symptomatic from craniovertebral junction meningiomas between 35 and 60 years of age. Pediatric cases of craniovertebral junction meningiomas are well-described, although not as common as in the adult population. As with meningiomas elsewhere in the brain and spine, females tend to be more often afflicted. In fact, around 66-73% of craniovertebral junction meningiomas are represented by females [21][22][23]. Clinically, craniovertebral junction meningiomas present with neurological sequelae resulting from mass effect and traction of nearby structures. They may also cause vascular compromise, hydrocephalus, and syringohydromyelia [15,24]. These lesions are often attached to the anterior ring of the foramen magnum. Frequently they invade the region of the entrance around the vertebral artery and the exit of the cervical roots [17,20,25]. They may cause location-specific symptoms based on whether they are predominantly intracranial, extracranial, or a combination. Craniocervical junction meningiomas most frequently extend above and below the foramen magnum equally, although they can be predominantly intracranial or intraspinal. Intracranial components of the meningioma may cause brainstem dysfunction and lower cranial nerve neuropathies, as well as occasional cerebellar signs. Cranial nerve palsies may be the result of traction, or brainstem nuclear involvement via mass compression [26]. The vagus, glossopharyngeal, and hypoglossal are the most commonly affected cranial nerves. Some patients may describe slurred speech, difficulty swallowing, and recurrent pneumonia, from repeated episodes of aspiration. The dysphagia may lead to unwanted weight loss. High cervical meningiomas produce myelopathic features, and occasionally cause spinal accessory nerve dysfunction with the potential to cause torticollis. They can also cause sensory aberrations to the face from descending trigeminal nerve tracks being affected. In addition, lower decussations of motor and sensory tracks may be affected [26]. So-called "straddle lesions" have a tendency to produce minimal dysfunction to the lower cranial nerves, but predominantly cause a high cervical myelopathy. At the craniocervical junction, there tends to be a rather sizeable area of subarachnoid space that is accommodating to a growing mass. This generally allows these meningiomas to grow to significant sizes before they become symptomatic and discovered. Pain is often the primary complaint. It is often localized to the second cervical dermatome, involving the suboccipital region. The pain often is aggravated by head and neck motion. The patient may actually present with posterior headaches, and a condition resembling torticollis, with the head held in flexion [27]. Some patients with meningiomas, as well as other tumors in this region involving the high cervical spinal cord, may describe an abnormal cold sensation in their lower extremities. This may in fact be pathognomonic of high cervical cord lesions [24,28]. Pain and temperature sensation is often affected, preceding loss of proprioception. Some patients describe a "dissociated" sensory loss, with areas of sensory preservation in the upper extremities. If the sensory decussation of the medial lemniscus is affected, a varied pattern of sensory deficits may be encountered. Painful dysesthesias and paresthesias of the hands, limbs, and face may be encountered. 
Meningiomas at the foramen magnum may actually produce a combination of long track signs in both the upper and lower extremities. Spastic weakness is common. A distinct pattern of motor involvement is also evident. Typically, motor deficits progress in a "circular" fashion, first affecting the ipsilateral upper extremity, followed by the ipsilateral lower extremity, contralateral lower extremity, and finally the contralateral upper extremity [29]. Localized ipsilateral intrinsic hand muscle weakness and atrophy may develop. One of the proposed mechanisms is decreased venous drainage and subsequent venous stasis of the anterior horn cells. Other theories include CSF obstruction with resultant hydromyelia, spinal cord edema from venous obstruction, anterior spinal artery compression, and rotation of the spinal cord from contralateral traction [30]. On occasion, symptoms such as drop attacks, paralysis, and vertebrobasilar syndromes may arise. These can be from vascular changes or biomechanical craniocervical junction instability [31]. Surgical resection The primary goal of treating foramen magnum meningiomas is to preserve and improve neurological function via surgical resection [18]. Magnetic resonance imaging (MRI) provides high-resolution pictures of the tumor and gives important information for surgical planning, such as its relationship to the brainstem and surrounding structures including vital vasculature. Typically, meningiomas are isointense on T1-weighted imaging and isointense to hypointense on T2-weighted imaging. With gadolinium administration, they avidly enhance. Magnetic resonance angiography (MRA) may be used pre-operatively for surgical planning as an alternative to standard angiography. MRA can allow clinicians to assess the patency of the vertebral artery. MRV (magnetic resonance venogram) may also be used to evaluate vital dural venous sinuses that may be in the proposed operative trajectory. Plain x-rays of the craniocervical junction may reveal subtle changes such as calcification or erosion around the foramen magnum. They may also show widening of the interpedicular spaces of upper cervical vertebrae [32]. Craniovertebral junction meningiomas may be situated in the ventral foramen magnum, as well as the posterior foramen magnum. Anteriorly located lesions are certainly a greater surgical challenge. Critical neural structures such as the brainstem, lower cranial nerves, vertebral artery, in addition to the occipital condyle, are a hindrance to exposure of the ventral foramen magnum. Surgical exposure of the ventral foramen magnum typically requires a large and complex posterolateral approach, which also raises the likelihood of subsequent morbidity and mortality given the abovementioned structures that are necessary to navigate around [32]. One of the most frequent complications of surgery on foramen magnum meningiomas are lower cranial nerve palsies. These palsies are quite significant to the patient, because they sometimes necessitate further procedures such as feeding tubes and tracheostomies. This contributes to a longer length of hospital stay, and may contribute to postoperative mortality by increased risk of aspiration pneumonia [23,33]. Other post-operative complications include hydrocephalus requiring CSF diversion, meningitis, and CSF leak. Occipital-cervical instability can also occur in the post-op period, however, this complication may be avoided by resecting no more than one third to one half of the occipital condyle [32]. 
Several surgical approaches have been proposed for the removal of craniovertebral junction meningiomas including the stardard midline suboccipital craniotomy and high cervical laminectomy [34][35][36][37][38][39]. Modified approaches have been developed to forge better outcomes in situations where a standard approach would be difficult. Such approaches include an anterior transoral [40], endonasal transclival [41], lateral transcervical [42], and posterolateral suboccipital approaches [38]. In order to accomplish a complete, but safe resection of meningiomas situated in the ventral foramen magnum, techniques involving partial removal of the occipital condyle in combination with transposition of the vertebral artery, have been established [10,15,17,18,20,29,[43][44][45][46]. Avoidance of occipital condyle resection and manipulation of vital structures such as the vertebral artery may be accomplished through anterior approaches for ventrally located meningiomas. Some authors have proposed an expanded endonasal transclival approach for anterior craniocervical junction lesions, such as meningiomas [47,48]. A transoral transclival approach has also been described for resection of anterior craniocervical junction meningiomas [40]. Spinal stability Attempting to determine specific factors leading to instability, as well as the actual stabilizing structures in the occipital-cervical region, can be challenging and complex. "Clinical stability" is dependent on the spine being able to maintain its pattern of displacement, such that there is no additional neurological deficit, major deformity, or significant incapacitating pain and discomfort [49]. The articular stability between the occiput and C1 depends on various factors including the cupshaped occipital-atlantal joint configuration, the fibrous capsule of this joint, and the anterior and posterior atlanto-occipital membranes [49]. The tectorial membrane, as well as the alar, cruciate, and apical ligaments, all play additional roles in occipital-atlantal articular stability [49]. It has been postulated by some that less than 70% removal of the condyle allows adequate occipital-atlantal stability, while greater than 70% removal often necessitates a stabilizing fusion procedure [50]. This "70% limit" proposed by Sekhar corresponds to resection beyond the hypoglossal canal and the tubercle, where the alar ligament inserts over the medial aspect of the condyle [50]. A biomechanical study by Vishteh et al., however, showed that removal of greater than 50% of the condyle produced statistically significant hypermobility at the occipital-C1 junction. They also showed that after a 75% resection, the biomechanics of the Occiput-C1 and C1-2 motion segments changed considerably [51]. Based on clinical experience and other biomechanical studies, other authors also feel that occipital-cervical fusion be performed when greater than 50% of the condyle is resected [31]. The extreme lateral transcondylar approach entails removing the condyle either partially or completely. This technique may lead to post-operative instability. One of the factors leading to this instability includes loss of the insertion of the transverse ligament over the lateral mass of C1. Loss of the insertion of the alar ligament over the occipital condyle also contributes, as does loss of part, or the entire cup-shaped articular surface of the occipital condyle. 
Lastly, destruction of the fibrous capsule around the occipito-atlantal articulation may also contribute, especially if it is violated posteriorly and laterally, where it is the thickest, and provides the most reinforcement [50]. Stabilization procedures of the occipital-cervical region typically involve a plate/rod system for the occipital bone, with occipital bone screws in the midline keel, C2 pedicle screws, and lateral mass screw fixation of the sub-axial spine. Conclusion Meningiomas of the craniocervical junction are undoubtedly a fascinating surgical challenge faced by neurosurgeons. Although they are typically pathologically benign, their locations, and the approaches used to access them, pose significant risks to the patient. Diligent pre-operative planning and meticulous surgical technique must be employed to yield the best surgical outcome for the patient.
2018-12-07T00:08:51.631Z
2013-12-31T00:00:00.000
{ "year": 2013, "sha1": "85d93169f912f5e89568800dbe6c8b7dc82299a2", "oa_license": "CCBY", "oa_url": "https://doi.org/10.4172/2165-7939.1000150", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "0ece9e29666fad8665037d6562a9c8fd0b0aa9cd", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55582596
pes2o/s2orc
v3-fos-license
Non-conserved dynamics of steps on vicinal surfaces during electromigration-induced step bunching We report new results on the non-conserved dynamics of parallel steps on vicinal surfaces in the case of sublimation with electromigration and step-step interactions. The derived equations are valid in the quasistatic approximation and in the limit $f^{-1}\gg l_D\gg l_{\pm} \gg l_i$, where $f$ is the inverse electromigration length, $l_D$ the diffusion length, $l_{\pm}$ the kinetic lengths and $l_i$ the terrace widths. The coupling between crystal sublimation and step-step interactions induces non-linear, non-conservative terms in the equations of motion. Depending on the initial conditions, this leads to interrupted coarsening, anticoarsening of step bunches or periodic switching between step trains of different numbers of bunches. Introduction For the theoretical study of homoepitaxial growth and sublimation of a crystal in contact with the gas phase it is important to have a model, which includes the kinetic processes and the different effects existing on the crystal surface. The classical model for the evolution of vicinal surfaces was introduced by Burton, Cabrera and Frank (BCF) [1]. It is based on the observation that the kink sites are those positions at the surface steps where the exchange between the adatom layer on the terraces and the solid phase takes place. On the mesoscopic scale the change of the crystal volume is a result of the movement of the steps. On this scale we can reduce a surface with straight steps to a one-dimensional step train. Such a surface may undergo step bunching, an instability where the steps move close to each other and form groups, called step bunches [2,3,4,5]. The theoretical description of step bunching instabilities within the framework of the BCF-model and its extensions has been the subject of much recent interest [6,7,8,9,10,11,12,13,14]. Here we focus specifically on the effect of non-conservative processes on the non-linear evolution of a step train. As we reported in [14] for the problem of sublimation in the presence of Ehrlich-Schwoebel (ES) barriers [15,16], non-conservative terms violating volume conservation in the co-moving frame arise generically from the interplay of sublimation and step-step interactions, and cause the interruption of the coarsening of the growing bunches or splitting of a large bunch into several smaller bunches. In the present paper we expand this analysis to include the experimentally relevant effect of surface electromigration [2,5,17,18,19,20,21,22,23,24,25,26,27]. In 1989, Latyshev and collaborators discovered that by changing the direction of the direct heating current, a vicinal Si (111) surface switches between bunching and debunching [17]. Additionally, they observed several distinct temperature regimes. In the so called regimes I and III [5] the bunching instability occurs only if the heating current is applied in the down-step direction. On the other hand, for the same direction in regime II debunching occurs, and bunching requires an up-step current. Here, we consider the first temperature regime, where the temperature is low enough in order to neglect step transparency (the motion of adatoms across steps) [5,8,25]. Interrupted coarsening of electromigration-induced step bunches in the presence of sublimation was previously observed numerically by Sato and Uwaha [6], however a detailed analysis of the phenomenon was not carried out due to the complexity of their model. 
Other studies have approached the problem within the framework of weakly nonlinear amplitude equations, which can be systematically derived by an expansion around the instability threshold [5]. In this setting the non-conserved dynamics is described on large scales by the Benney equation, which displays either spatio-temporal chaos or an ordered array of bunches, but no coarsening [28,29]. This macroscopic behavior is consistent with the complex mesoscopic step dynamics revealed in the present work.

Model
We consider an ascending one-dimensional step train with step edges located at positions $x_i$ (Fig. 1). The starting point for the derivation of the equations of motion for the steps is the balance equation (1) for the concentration of adatoms $n_i(x, t)$ on the $i$-th terrace. The adatom concentration is assumed to adjust instantaneously to the slowly moving steps, an assumption that is known as the quasistatic approximation and amounts to setting $\partial_t n_i(x, t) = 0$ in (1). The general solution $n_i(x)$ of the ordinary differential equation (1) can be specified using mass conservation at the steps as boundary conditions. A terrace of width $l$ is bounded by two steps with positions $x = \pm l/2$, at which the flux continuity conditions (2) must hold, where $\Omega$ is the cross section of an atomic site at the step. The labels $+/-$ refer to quantities corresponding to the lower/upper terrace of a step. The fluxes $f_\pm$ depend both on the difference of the adatom concentration $n(x)$ from its equilibrium value $n_{eq}$ and on the attachment/detachment to the steps with kinetic coefficients $k_\pm$. If the condition $k_+ > k_-$ is fulfilled we speak of a standard ES effect [15]. It induces an asymmetry in the concentration profiles $n_i(x)$, quantified by the asymmetry parameter defined in (3), where $l_\pm = D_s/k_\pm$ are called kinetic lengths. Apart from the attachment kinetics, a second effect incorporated into the boundary conditions (2) is the step-step repulsion. The equilibrium concentration $n_{eq}$ is determined by the chemical potential $\Delta\mu_i$ at the $i$-th step through the relation $n_{eq} \approx n^0_{eq}(1 + \Delta\mu_i/k_B T)$, and $\Delta\mu_i$ depends on the widths of the two neighboring terraces $l_i$ and $l_{i-1}$ according to (4) [2,31], where $l$ is the mean terrace spacing and $g$ is a dimensionless measure of the strength of the repulsion between the steps [2,32].

Step equations of motion
Using Eqs. (1,2,4) we find the concentrations $n_i(x)$ for all terraces. The velocity of the $i$-th step is then given by the superposition of the fluxes coming from the two neighboring terraces as $dx_i/dt = f_- + f_+$. Since the non-conservative terms of primary interest here arise from sublimation, we discuss separately the limiting cases of pure sublimation ($F = 0$, $1/\tau > 0$) and pure growth ($F > 0$, $1/\tau = 0$); of course, in a typical experimental setup both processes may proceed simultaneously. For the case of pure sublimation and in the limit $f^{-1} \gg l_D$, we obtain the non-linear system (5), where $R = n^0_{eq}\Omega D_s$, $s_i = \sinh(l_i/l_D)$ and $c_i = \cosh(l_i/l_D)$. Equation (5) contains all four length scales and illustrates the complicated functional dependence even for a simple one-dimensional step train. To simplify these expressions we use the approximation of attachment-detachment limited kinetics, $l_D \gg l_\pm \gg l$ [32,33].
After some calculations along the lines of [14] we arrive at the step equations of motion (6). Here a second asymmetry parameter incorporating the strength of electromigration, $b_{el}$, has been introduced; $R_e = n^0_{eq}\Omega D_s/l_D^2 = n^0_{eq}\Omega/\tau$ is the constant rate at which the surface changes volume per unit time (in the absence of the non-linear, non-conservative terms, see below), and $U = g l_D^2/(l_- + l_+)$.

Linear stability
Equations similar to (6) can be derived when the surface is subject to a growth flux but sublimation is absent. In that case the factor $\gamma_i$ in front of the square bracket on the right-hand side of (6), which depends nonlinearly on the step coordinates, is replaced by the constant (8). Analogous to the problem considered in [14], this implies qualitatively different instability conditions for growth and sublimation. Performing a standard linear stability analysis, in the limit of large-wavelength perturbations we find the instability conditions (9,10). In the case of growth, step bunching merely requires the compound asymmetry parameter $b_{gr}$ to be positive, whereas for sublimation the corresponding quantity $b_{sub}$ needs to exceed a positive threshold value $6g$. This is an important consequence of the qualitatively different contributions to the balance eq. (1) that arise from desorption and deposition, respectively. Note that in a general situation the instability conditions (9,10) can be combined into a single condition, which was already obtained in [12].

Conservative and nonconservative dynamics
Beyond the linear stability properties, a fundamental difference between the scenarios of pure growth and sublimation is that the surface dynamics is conservative during growth but not during sublimation [8,14]. Here conservative dynamics implies that the rate of volume change of the crystal, obtained by summing the equations of motion over all steps $x_i$, is independent of the surface configuration [34]. Indeed, replacing the $\gamma_i$ in front of the square brackets on the right-hand side of (6) by the constant (8) and summing over $i$, one readily obtains $\sum_i \dot{x}_i = -F\Omega L$, where $L$ is the total length of the crystal. It is instructive to compare the structure of the non-conservative contributions induced during sublimation by the configuration-dependent factors $\gamma_i$ in (6) for the two step bunching instabilities driven by electromigration and by an ES-effect, respectively. First we neglect the ES-effect, setting $b_{ES} = 0$, which simplifies (6) into the form (11). For comparison, setting $b_{el} = 0$, eq. (6) reduces to the equations (12) derived in [14]. The difference between the two cases is that the terms proportional to $g b_{el}$ on the RHS of eq. (11) do not cancel under summation with respect to $i$, and thus are non-conservative. As will be shown in the next section, this gives rise to distinct contributions in the continuum limit.

Continuum equations
In previous work a systematic method for deriving continuum equations of motion from the discrete step dynamics was developed [9,10], which was applied to the model (12) in [14]. Briefly, the method can be seen as a kind of Lagrange transformation [35] which replaces the 'Lagrangian' dynamics of particle-like steps by the 'Eulerian' evolution of the step density $m(x, t)$. The latter in turn defines a continuous height profile $h(x, t)$ through $m(x, t) = \partial h/\partial x$. Here we wish to compare the two instability mechanisms described by eqs. (11) and (12), respectively, on the continuum level.
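As a concrete illustration of this Lagrangian-to-Eulerian mapping, the following minimal Python sketch (not part of the original paper; the bunched step configuration, the sign convention and the function names are illustrative assumptions) converts a set of discrete step positions $x_i$ into the corresponding staircase height profile $h(x)$ and the discrete step density $m \approx 1/l_i$ sampled at terrace midpoints, i.e. the discrete counterpart of $m(x,t) = \partial h/\partial x$.

```python
import numpy as np

def height_profile(step_positions, x_grid):
    """Staircase height profile h(x) generated by a step train.

    Each step at x_i changes the height by one monoatomic unit; the
    descending-to-the-right sign convention used here is an arbitrary
    illustrative choice, not taken from the paper.
    """
    steps = np.sort(np.asarray(step_positions, dtype=float))
    # number of steps located at or to the left of each grid point
    return -np.searchsorted(steps, x_grid, side="right").astype(float)

def step_density(step_positions):
    """Discrete step density m_i ~ 1/l_i evaluated at terrace midpoints."""
    steps = np.sort(np.asarray(step_positions, dtype=float))
    widths = np.diff(steps)                      # terrace widths l_i
    midpoints = 0.5 * (steps[:-1] + steps[1:])   # sampling points
    return midpoints, 1.0 / widths               # |dh/dx| ~ 1/l_i

if __name__ == "__main__":
    # a bunched configuration: two tight groups of steps separated by wide terraces
    x = np.concatenate([np.linspace(0.0, 2.0, 10), np.linspace(20.0, 22.0, 10)])
    grid = np.linspace(-5.0, 30.0, 400)
    h = height_profile(x, grid)
    mid, m = step_density(x)
    print("max local step density (inside a bunch):  %.2f" % m.max())
    print("min local step density (between bunches): %.4f" % m.min())
```

In this picture a step bunch simply corresponds to a region of large local slope, which is why the continuum description below is naturally formulated in terms of the height profile and its derivatives.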
Following the procedure outlined in [14] for both models, we find that the continuum evolution equation takes the general form (13), where primes denote spatial derivatives. Here time $t$ is rescaled by $R_e$, length $x$ by the average step distance $l$, and height $h$ is measured in units of the monoatomic step height. The terms inside the square brackets on the LHS are conservative, and the non-conservative contributions are collected on the RHS of eq. (13). For the two models (11) and (12), with the contributions due to the ES-effect and to electromigration labelled by the subscripts ES and el, respectively, the conservative terms and the non-conserved contributions take different forms. As was discussed above, the terms in eq. (11) proportional to $g b_{el}$ give rise to a conservative contribution, whereas the terms in (12) proportional to $g b_{ES}$ contribute to the non-conservative part of the continuum equation. In earlier work based on the continuum approach [9,10] the non-conservative contributions were generally neglected because of the smallness of $g$ [14], and it was therefore concluded that step bunching phenomena induced by electromigration and by the ES-effect belong to the same universality class [9,36]. However, it has subsequently become clear that small non-conservative terms may qualitatively change the nonlinear dynamics of surface steps [14], and the fact that these terms are of different form for the two instability mechanisms implies that their equivalence needs to be reexamined. In the following we therefore explore the nonlinear behavior of the electromigration model (11) using numerical simulations.

Nonlinear step dynamics
Numerical simulations of eq. (11) were carried out using an odeint-type procedure [37] for systems of M steps with periodic boundary conditions. We consider a range of values for each of the four independent parameters of the model. Depending on the initial conditions, the simulations display interrupted coarsening as well as anticoarsening, where an initial large step bunch splits into smaller bunches [14]. Representative simulation results are shown in fig. 2. Finally, in fig. 4 we plot the behavior of the maximal slope as a function of the number of steps for two different parameter choices. We see that $m_{max}$ generally increases with $M$, but this behavior is interrupted by downward jumps every time the number of bunches that can fit into the system increases by one. This shows that the existence of stationary solutions with multiple bunches can be seen as a consequence of the fact that, in the presence of non-conservative processes, the maximal slope is bounded from above [14]. Near the transition between different numbers of bunches the system behavior depends very sensitively on the amplitude of fluctuations in the initial configuration, an effect that is particularly pronounced around M = 70.

Conclusion
In this work we have extended the non-conservative step bunching model presented in [14] to the case of step bunching induced by electromigration. In earlier work [9,10,11,32] non-conservative contributions were neglected, because of the experimentally small prefactor $g$ [2,14]. Those terms were now taken into account and some important consequences were identified. First, on the level of linear stability analysis, they shift the instability condition on the dimensionless asymmetry parameter $b$ by $6g$, as was first pointed out in [12]. This shift is present in the case of sublimation, but not in the case of growth [14]. Moreover, in the case of sublimation the structure of the non-conservative terms differs depending on the underlying mechanism inducing the asymmetry between ascending and descending steps. This leads to different continuum equations for step bunching caused by an ES-effect or by electromigration, respectively.
Nevertheless, the numerical integration of the discrete step equations for the case with sublimation and electromigration reproduces qualitatively the results of [14]. The non-linear, non-conservative terms give rise to a rich variety of dynamical behaviors in this simple one-dimensional step model. There are steady solutions that contain more than one bunch, periodic switching between step trains of different numbers of bunches, and a sensitive dependence on the initial condition. This shows that the notion of universality between different types of step bunching mechanisms, which was originally formulated on the basis of conservative continuum equations [9,36], can be applied also in the presence of non-conservative dynamics. In previous work on the conservative version of (6) a dynamical phase transition was identified which separates two qualitatively different regimes of step bunching, distinguished by the presence or absence of crossing steps between bunches [11]. In our units this transition occurs at $b_{el} = 1/2$, and experimental evidence for its existence in the Si(111) system has recently been reported [26]. In order to clearly bring out the effects due to the non-conservative nature of the dynamics, in the present study we have restricted ourselves to the parameter range $b_{el} \in [0, 0.5]$, but the influence of non-conservative terms on the phase transition reported in [11] is clearly an interesting topic for future work. We thank V. Popkov for useful discussions.
2012-01-06T12:53:07.000Z
2011-09-19T00:00:00.000
{ "year": 2011, "sha1": "ececf584f2257002905365963a785a0ebf726bee", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1109.4046", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "ececf584f2257002905365963a785a0ebf726bee", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
251875628
pes2o/s2orc
v3-fos-license
The decline in student character is the result of low student learning outcomes. Low student learning outcomes are influenced by several factors, one of which is a teacher-centered, monotonous learning model. For this reason, it is deemed necessary to conduct research that aims to determine the effect of project-based assessment in values clarification technique (VCT) learning on improving students' learning outcomes while controlling for the family environment. This study uses a 2x2 factorial experimental design. The sample was selected through multistage random sampling with 120 students. The two-way ANCOVA data analysis technique was used to analyze the data. The findings obtained after controlling for the family environment are: 1) civics learning outcomes of the group of students who used value clarification techniques are higher than those of students using conventional learning models, and 2) civics learning outcomes of the group of students who were given project-based assessments are higher than those of the group given conventional assessments. Thus, it can be recommended that civics education teachers use appropriate VCT and project-based assessments to improve learning outcomes.

Introduction
Quality education is needed to produce intelligent people who can compete in the future. Education that can support development in the future is education that develops students' potential so that students can face and solve the life problems they encounter. Education is a process that can form a complete human being, namely a person of noble character and intelligence. One form of such education is civics learning. Civics learning is a subject that focuses on forming citizens who understand and are able to use their rights and obligations in order to become good, intelligent, skilled citizens of strong character, as mandated by Pancasila and the 1945 constitution (Marzuki & Basariah, 2015; Tambusai, 2018). Thus, civics learning is expected to give Indonesian people noble character, culture, and dignity. In the end, it can help this nation stand firm in an ever-expanding constellation and compete in the global world. Therefore, civics learning must be packaged in a way that suits its characteristics, namely the inculcation of values. In addition, to determine the success of civics learning, an assessment activity is needed.
The effectiveness of learning is influenced by the assessment process carried out.With the results of the assessment of the learning process, educators will be able to manage to learn well (Uttl et al., 2017).Assessment is one of the methods to determine students' progress in participating in learning and managing learning carried out to make learning improvements (Zhang, 2020).Assessment in education is not a tool used to increase the value of students, but rather to equip students with knowledge that can be used actively to solve problems or assignments given (Black & Wiliam, 2018;Leong et al., 2018).Students need to assess the material they are learning, which will impact their confidence and motivation in the learning process (Carpenter et al., 2020;Chen et al., 2020).Assessment will give students a choice to study or not (Saenz & Smith, 2018).A good assessment is an assessment that is carried out thoroughly, which includes the attitudes, skills, and knowledge of students (Machts et al., 2020).With a thorough assessment, educators will know the picture of learning.Of course, to make this happen, educators must be given an understanding of assessment so that educators can apply it in the learning process.Ways can increase educators' knowledge through workshops and training (Crusan et al., 2016;Mak & Lee, 2014).Assessment in Indonesia is still used to determine students' progress in learning.However, in education in European countries and several Asian countries, assessments have been collaborated in the learning process, resulting in conducive learning and supporting the learning process.An assessment provides opportunities for students to be involved in the evaluation process, both making and conducting assessments, improving the learning process (Lee & Coniam, 2013;Sulistyawati, 2020), and improving writing skills (Mak & Lee, 2014;Ranalli et al., 2018).So, based on this description, it can be told that an assessment is not only used to measure student learning progress.However, assessment can collaborate with learning, which positively impacts learning. 
However, the conditions on the ground were different from what was expected.The implementation of civics learning is still slightly shifted from the government's expectations.Civics learning can be misinterpreted as one of the sciences that need to be memorized without understanding the meaning of the values studied (Wijayanti & Wasitohadi, 2015).There has not been an increase in the quality of self as human beings in students.Teachers rarely emphasize sense and understanding of values in the learning process.The civics learning process is generally taught using a conventional model so that it has little impact on the quality of learning and learning outcomes (Fitriani & Sundawa, 2016).In addition, the assessment process carried out by educators cannot carry out complex or comprehensive assessments.Educators can only assess the results of the assignments given, the midterm exams, and the final semester exams without knowing the process.It will also harm students.A good assessment process will have a positive impact on students.It is the task of educators to think of efforts that can create a learning environment by current conditions, the characteristics of students, realize the goals of National Education, and produce humans who are ready to compete in the global era.A learning environment is needed to support this achievement to make this happen.For this reason, a change is required for the current learning process.More learning leads to activities that support the cognitive processes of students.Learners and educators construct their meanings from learning activities and events in the classroom, and their constructions about the subject matter can differ from normative and authentic concepts.One of the learning models that can be used to make the learning environment more conducive is the values clarification technique (VCT) learning (Ekasari, 2017;Risvanelli, 2017). 
VCT is one approach to value education.Students are given the freedom to determine their values based on their experience in their environment (Ekasari, 2017;Lisievici & Andronie, 2016;Risvanelli, 2017).VCT is a teaching technique to assist students in finding and determining a value that is considered good in dealing with a problem by analyzing existing values (Wijayanti & Wasitohadi, 2015).VCT will provide many opportunities for students to learn about values (Ekasari, 2017).VCT is value learning that can direct students to have the skills or ability to determine the correct values of life according to their life goals (Prihandoko & Wasitohadi, 2015).In learning using VCT, students are not asked to memorize the values that other parties have selected.On the other hand, learning using VCT helps discover, analyze, take responsibility, develop, choose, take a stand, and live their values (Risvanelli, 2017).Several studies that have been carried out related to the VCT model include research that states that VCT affects the emergence of attitudes of religiosity, honesty, intelligence, toughness, caring, democracy (Fitriani & Sundawa, 2016).Research states that the learning treatment with the VCT model has a significantly higher impact on attitude development than the conventional learning model (Tyas & Mawardi, 2016).Other research states a significant difference between the VCT and the expository models on students' social skills (Sukmawati & Nashir, 2021).The study notes a significant difference in the attitude of patriotism in VCT model learning and conventional learning (Dewantoro & Sartono, 2019).A study says that using the VCT learning model role-playing model can reduce bullying behavior (Wiradimadja, 2017).So, based on this description, it can be said that VCT has a positive impact on the formation of values in students. 
In addition to the VCT model, project assessment is offered to improve learning.Project-based assessment is an assessment development that is sourced from project-based learning.Project-based assessment involves completing tasks within a specific time frame that emphasizes processes and products.It is used to develop and monitor students' planning, investigating, and analyzing projects.As a result, project tasks begin with planning, data collection, organizing, processing, and presenting.Project-based assessment includes the assessment of tasks that contain investigative activities and must be completed within a specific time by students in groups.Project-based assessment requires students to solve various problems (Ali et al., 2018;Amri & Tharihk, 2018).In addition, this assessment can guide students in conducting inquiry activities to gain new insights and solve problems with the knowledge they build themselves (Asikin et al., 2017;Sukmasari & Rosana, 2017).In project-based assessment, students must use various skills, concepts, and learning.So, it can be concluded that project-based assessment enhances the skills and abilities needed to carry out the actual task.Therefore, performance appraisal is required to achieve learning outcomes.This statement is supported by research that states a significant difference in science critical thinking skills between groups of students taught using a project appraisal-based Problem-Based Learning model and those taught using conventional learning in the classroom.Research states a simultaneous and partial influence of attitudes and skills in science processes between students taught by project-based assessment based on local culture and learning by conventional assessment (Parmiti et al., 2021;Safaruddin et al., 2020).So, a project-based assessment can affect students' attitudes towards the material being studied. Based on these descriptions, it is known that both the VCT model and project-based assessment have a positive influence on the learning process.Therefore, in this study, these two things were collaborated in the civics learning process to see their impact on student learning outcomes.This research is different from previous research because this research will also control the family environment.Family environment is one of the determining factors for student success in the learning process.The positive family environment shown by parents can help children shape and develop their character (Novita et al., 2015;Suriansyah & Aslamiah, 2015).Therefore, this study examines the impact of project-based assessment in VCT learning on student learning outcomes with a controlled family environment.This research is expected to contribute to innovative learning models used in the learning process. 
Research Design
This research was conducted at a junior high school in Singaraja in the first semester of the 2013/2014 academic year. The research started in January and ended in December 2013. This study used a quasi-experimental design; specifically, a 2x2 factorial covariance design. The factorial design was used to simultaneously investigate the effects of the treatment variables on the investigated sample groups. The use of a factorial design in this study is based on the assumption that the two independent variables, and their interaction, influence the dependent variable. This design provides an opportunity to determine the main effects, the interaction effect, and the simple effects of the independent variables on the dependent variable. The civics learning outcomes of students in this study were taken from post-test scores only, collected at the end of the study; pre-test scores were not taken into account. Internal validity threat factors such as history, maturation, testing, instrumentation, regression, mortality, and implementation were minimized and controlled.

Sample and Data Collection
The target population in this study was all eighth-grade students of state junior high schools in Buleleng Regency, while the accessible population was all eighth-grade students of junior high schools in Singaraja City, totaling 1723 students across 5 schools. Each school has 10-14 classes. The sample was selected using a multistage random sampling technique and consisted of 120 students. The students were divided into two groups of 60 students each. The first group is the experimental group, taught using VCT, and the second group is the control group, taught using the conventional learning model. Each of these groups was further divided into two subgroups of 30 students: by random assignment, one subgroup of 30 students in each group was given project-based formative assessments, and the other subgroup of 30 students was given conventional assessments. The instrument used is a civics learning outcome test covering the cognitive domain. Two types of tests were developed to measure civics learning outcomes, namely a test to measure the cognitive domain and a test to measure the affective domain. The civics learning outcomes test for the cognitive domain is based on the civics education curriculum for junior high school. This measuring tool is suitable for measuring children's abilities related to cognitive skills. The civics learning outcomes test in the cognitive domain uses a multiple-choice format. A correct answer is given a score of 1, and a wrong answer a score of 0.
The questions were developed from the competencies of the Indonesian constitution, deviations from the constitution, the results of the amendments to the 1945 constitution, and a positive attitude towards implementing the law. These competencies were developed into 30 multiple-choice questions spanning cognitive levels C1-C6. The indicators for the cognitive domain learning outcomes instrument are presented in Table 1. The next stage is the instrument validation test. Based on the results of calculating the point-biserial correlation coefficient for the cognitive domain, it was found that of 40 items, 30 were declared valid and 10 were declared invalid. The instrument reliability coefficient was calculated after obtaining a sound instrument of 30 items for the cognitive domain. Based on the calculation of the reliability of the civics learning outcomes test using the Microsoft Excel 2007 program, the reliability coefficient of r1.1 = 0.75 means that the reliability of the civics learning outcome test is high and the test can be used to measure the research data. The family environment instrument uses a Likert-scale questionnaire. The family environment statements were written according to the writing rules for this scaling model, and the information is based on a predetermined scale design. Respondents were asked to state the conditions they experience using five answer categories, namely: "always", "often", "sometimes", "rarely", and "never". This questionnaire was developed from 3 family dimensions, namely: 1) children's perception of the family's physical environment, which consists of a comfortable, safe, clean, healthy, orderly, shady, calm, and beautiful environment; 2) children's perceptions of parental behavior, consisting of four aspects, namely personal work, distance, consideration, and encouragement; and 3) the child's perception of the family relationship itself, which consists of obstacles, freedom, intimacy, and family spirit. The details are presented in Table 2. From the results of calculating the correlation coefficients, of 46 items, 40 were declared valid and 6 were declared invalid, and the calculation yielded an instrument reliability coefficient of 0.804. Thus, the test results show that the reliability of the family environment instrument is very high and it can be used to measure the research data.
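As an illustration of this item-validation step for the multiple-choice cognitive test, the sketch below (not from the paper; the response matrix is synthetic, and the specific formulas, a point-biserial item-total correlation and a KR-20-style reliability for dichotomous items, are assumptions, since the paper only reports that the coefficients were computed in Microsoft Excel) computes an item-total correlation for each item and an overall reliability coefficient.

```python
import numpy as np

def point_biserial_item_total(items):
    """Point-biserial correlation of each dichotomous item with the total score.

    items: (n_students, n_items) array of 0/1 responses. The correlation with
    the total score (including the item itself) is used here; a common variant
    removes the item from the total first.
    """
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total)[0, 1] for j in range(items.shape[1])])

def kr20(items):
    """Kuder-Richardson 20 reliability for dichotomously scored items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    p = items.mean(axis=0)                  # item difficulty (proportion correct)
    q = 1.0 - p
    var_total = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - (p * q).sum() / var_total)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # synthetic 0/1 answers of 120 students on 40 items, driven by a latent ability
    ability = rng.normal(size=(120, 1))
    difficulty = rng.normal(size=(1, 40))
    responses = (rng.random((120, 40)) < 1.0 / (1.0 + np.exp(-(ability - difficulty)))).astype(int)
    r_pb = point_biserial_item_total(responses)
    # 0.20 is an illustrative cut-off; the paper does not state its validity criterion
    print("items kept as 'valid':", int((r_pb >= 0.20).sum()))
    print("KR-20 reliability: %.2f" % kr20(responses))
```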
Analysis of Data
This study uses descriptive and inferential data analysis. The descriptive analysis describes student learning outcomes in terms of means and standard deviations, while the inferential analysis used is two-way ANCOVA. All hypotheses were tested at a significance level of 5%; the inferential analysis took the form of a two-way ANCOVA to test the hypotheses, followed by further (post hoc) testing. The tests were carried out using SPSS version 17.0 software. The analytical model used is a two-way analysis of covariance. This model tests the differences in the mean civics learning outcomes across the groups of students formed by the learning model and formative assessment factors, while controlling for the student's family environment. The two-way analysis of covariance procedure was used to: (1) examine the differences in civics learning outcomes across the student groups formed by the learning model factor; (2) examine the differences in civics learning outcomes across the groups of students formed by the formative assessment factor; (3) examine the effect of the interaction of the learning model and formative assessment factors on civics learning outcomes; and (4) examine differences in civics learning outcomes in the groups of students formed by the combinations of the learning model and formative assessment factors. The four tests were carried out with the family environment serving as a statistically controlled covariate. Thus, the parameters tested in the analysis of covariance (ANCOVA) are differences in adjusted means or differences in the constants of homogeneous regressions. Before conducting the data analysis to test the hypotheses, prerequisite tests were undertaken first. The prerequisite tests for ANCOVA are the normality test, the homogeneity test, the regression linearity test, and the test of the significance of the regression effect.
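A minimal sketch of this two-way ANCOVA is given below (shown in Python with statsmodels rather than SPSS; the data frame, file name and column names are hypothetical placeholders). It fits the civics learning outcome on the learning model factor, the assessment factor, their interaction, and the family environment covariate, then reports the adjusted (Type II) ANOVA table and covariate-adjusted cell means.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf
import pandas as pd

# Hypothetical long-format data: one row per student.
# 'model'      : "VCT" or "conventional" learning model
# 'assessment' : "project" or "conventional" formative assessment
# 'family_env' : family environment questionnaire score (covariate)
# 'outcome'    : civics learning outcome (post-test score)
df = pd.read_csv("civics_scores.csv")  # placeholder file name

# Two-way ANCOVA: outcome ~ covariate + factor A * factor B
fit = smf.ols("outcome ~ family_env + C(model) * C(assessment)", data=df).fit()

# Type II sums of squares give F-tests for the two main effects,
# their interaction, and the covariate, after adjustment.
print(sm.stats.anova_lm(fit, typ=2))

# Adjusted (covariate-corrected) cell means: predictions for each cell
# evaluated at the overall mean of the covariate.
grid = df[["model", "assessment"]].drop_duplicates().copy()
grid["family_env"] = df["family_env"].mean()
grid["adjusted_mean"] = fit.predict(grid)
print(grid)
```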
Findings / Results
The results of the descriptive analysis show that the VCT learning model and project-based assessment have a stronger influence on learning outcomes after the family environment is controlled. The results of the descriptive analysis are shown in Table 3. The results show that the VCT learning model has a better effect than the conventional one, and learning with project-based assessment produces better learning outcomes than learning with conventional assessment. In addition, the combination of the VCT learning model and project-based assessment has the greatest influence compared with the other combinations, as can be seen from its average learning outcome value of 56.90. The regression linearity test, with an F-table value of 3.98, leads to the conclusion that X has a linear effect on Y. As the conditions for parametric statistics in ANCOVA described above have been fulfilled, the inferential analysis could proceed to the testing of the research hypotheses using the ANCOVA technique. This hypothesis test was carried out by ANCOVA analysis, and the results are shown in Table 4, Table 5, and Table 6. The results of the hypothesis analysis in Table 4 show that: (1) the civics learning outcomes of the group of students who studied through the VCT learning model (A1) were higher than those of the group of students who studied through the conventional learning model (A2); the analysis shows that the F-count score of 5.42 is greater than the F-table score of 3.92, with the corrected mean of A1 = 50.51 > the corrected mean of A2 = 49.49; (2) the civics learning outcomes of the group of students who were given project-based formative assessments were higher than those of the group of students who were given conventional formative assessments after controlling for the family environment; the analysis shows that the F-count score of 25.04 is higher than the F-table score of 3.92, with the corrected mean of the project-based assessment group = 51.07 > the corrected mean of the conventional assessment group = 49.93; and (3) there is an interaction effect between the learning model and the formative assessment on civics learning outcomes after controlling for the family environment; the analysis shows an F-count score of 137.92 > the F-table score of 3.92. The results of the hypothesis analysis in Table 5 show that, in the group of students who learned with the value clarification model, the civics learning outcomes of the students who were given project-based formative assessments are higher than those of the students who were given conventional formative assessments after controlling for the family environment; the analysis shows a t-count value of 11.83 > t-table = 1.66. In the group of students who learned with the conventional learning model, the civics learning outcomes of the students who were given a project-based formative assessment were lower than those of the students given a conventional formative assessment after controlling for the family environment; the analysis shows that the t-count value of -4.78 is smaller than t-table(60) = -1.66. This means that, specifically for the group of students who were given the conventional learning model, the civics learning outcomes of the students given the project-based formative assessment were lower than those given the conventional formative assessment after controlling for the family environment. The results of the hypothesis analysis in Table 6 show that, in the group of students who were given a project-based formative assessment, the civics
learning outcomes of the students who learned with the value clarification model were higher than those of the students who learned with the conventional learning model after controlling for the family environment; the analysis shows that the t-count value of 9.82 is greater than t-table(60) = 1.66. In the group of students who were given conventional formative assessments, the civics learning outcomes of the students who learned with the value clarification model were lower than those of the students who learned with the conventional learning model after controlling for the family environment, with t-count = -6.60 smaller than t-table = -1.66. From the results of hypothesis testing, it can be concluded that the learning model and formative assessment have a significant effect on civics learning achievement after controlling for the family environment. Therefore, teachers need to choose and use appropriate learning models and formative assessments in civics learning.

Discussion
The results showed that the VCT learning model influenced learning outcomes. The effect of the VCT learning model on learning outcomes cannot be separated from its learning syntax. The syntax of VCT learning consists of 7 steps grouped into 3 stages. The first stage is freedom. At this stage, students are free to choose what to do in the learning process. Giving students freedom in the learning process makes students more motivated to learn. VCT learning provides opportunities for students to explore and build their own knowledge. In a meaningful learning process, students gain more experience, which can later be used in everyday life (Bressington et al., 2018; Kostiainen et al., 2018). In addition, learning that provides freedom enables students to develop independent learning attitudes. Independent learning involves confidence in the ability to achieve learning goals independently (Henri et al., 2018; Nguyen & Habók, 2021), to make decisions, to choose the methods and techniques used to monitor the acquisition procedure, and to evaluate all that has been obtained (Tseng et al., 2020). This condition will certainly have a good impact on the learning process, namely an increase in student learning outcomes.
The second stage is valuing.The VCT model provides opportunities for students to build students' knowledge about how to respect others.Respecting others will undoubtedly create a good relationship between students and teachers and students and students.A good relationship between students will make the learning process more comfortable and improve learning more conducive.Learning with peers will encourage students to play an active role in learning (Oh, 2019).Peers help, guide, and support their peers to build learning through interaction and collaboration (Andersen & Watkins, 2018).By being guided, assisted, and given feedback by peers, students will increase their self-confidence (Han et al., 2015;Stone et al., 2013).So, it can be said that an attitude of respect between students will build a good relationship between students, which will impact the learning process.Students will find it easier to share the material they are learning, and students will learn from their friends more easily, considering they are at the same stage of development.Establishing a good relationship will make it easy for students to share in the learning process, which will undoubtedly impact student learning outcomes. The third stage is action.Students are allowed to do or try something done in the learning process.In this case, students can repeat the material they have learned.Repeating something they do will affect student learning outcomes.Students will understand more about the material they learn because they do it themselves.Doing it yourself will undoubtedly impact what they get in the learning process.As we know that VCT learning is learning, that emphasizes value learning.Learning Values Clarification Technique (VCT) is a valuable education approach.Students are given the freedom to determine their values based on their experience in their environment (Ekasari, 2017;Lisievici & Andronie, 2016;Risvanelli, 2017).Learning using VCT, students are not asked to memorize the values that other parties have chosen. On the other hand, this kind of learning helps discover, analyze, take responsibility, develop, choose, stand, and live their values (Risvanelli, 2017).Several studies that have been carried out related to the VCT model include research that states that VCT affects the emergence of attitudes of religiosity, honesty, intelligence, toughness, caring, democracy (Fitriani & Sundawa, 2016).These descriptions illustrate that the VCT learning model has more influence on student learning outcomes because the learning process is student-centered.In addition, students who study with VCT are allowed to freely choose the lessons or values they want to learn, learn to appreciate them more, and repeat or do something that has been previously known.In addition, with the family environment control.By controlling the family environment, the learning process with the VCT model becomes better.The learning process will not take place well without the role of parents.The positive family environment shown by parents can help children shape and develop their character (Novita et al., 2015).So, a good family environment will make students learn more comfortably without fear and anxiety. 
A project-based assessment provides opportunities for students to be more active in the learning process.Projectbased assessment involves completing tasks within a specific time frame that emphasizes processes and products.It is used to develop and monitor students' planning, investigating, and analyzing projects.As a result, project tasks begin with planning, data collection, organizing, processing, and presenting.Project-based assessment includes an assessment of assignments that contain investigative activities and must be completed within a specific time for students in groups.Project-based assessment requires students to solve various problems (Amri & Tharihk, 2018).In addition, this assessment can guide students in conducting inquiry activities to gain new insights and solve problems with the knowledge they build themselves (Sukmasari & Rosana, 2017).In addition, project-based assessments will provide feedback to students directly, which makes students more motivated to complete the given project.However, this feedback is often forgotten and rarely done for some teachers. In project-based assessments, feedback is obtained in making assignments and the final results of the projects.Thus, students feel valued in completing the assigned tasks.Giving feedback improves student performance.Feedback affects the learning process; further feedback will determine its effectiveness (Wang et al., 2019).Feedback is not just a truefalse statement but contains more detailed responses in the form of an explanation of what the students made.Students will get more precise information (Cheng, 2017;Finn et al., 2018;Lachner et al., 2017).Providing good feedback will undoubtedly have a positive influence on student learning outcomes.In addition, the existence of a family environment will make students learn better. 
The results also show that the VCT learning model and project-based assessment affect civics learning outcomes after controlling for the family environment. The interaction of the two variables makes the learning process more effective because learning becomes more student-centered and active. Active learners certainly create a better learning environment that increases their interest in learning. Interest in learning affects the quality of student learning. Interest in learning and the perception of self-efficacy affect student motivation in the learning process (Tamardiyah, 2017). Students interested in learning activities will try harder than students who are less interested in learning (Aprijal et al., 2020; Nurfadilla & Rosleny, 2018). In other words, interest in learning is a motivating factor for students because their interest grows into pleasure in and willingness to learn (Yunitasari & Hanifah, 2020). Students with a high interest in learning will ultimately achieve better learning outcomes than students with low interest. Students who are not interested in the subject matter will be less engaged, lazy, and unenthusiastic (Lisma et al., 2019). High interest makes students more enthusiastic about learning and about solving problems, which impacts student learning outcomes. These descriptions illustrate that the VCT model and project-based assessment positively influence student learning outcomes both partially and interactively. In this case, students learn actively in building the values obtained in the learning process, and the interaction of these two variables makes students learn to respect the opinions of others. This learning model can therefore be recommended as an alternative for improving learning outcomes.

Conclusion
VCT and project-based assessment significantly affect students' civics learning outcomes after controlling for the student's family environment. This can be seen from the civics learning results: the students taught using VCT had higher scores than those taught using the conventional learning model, and the civics learning outcomes of the groups given project-based formative assessments were higher than those of the groups given conventional assessments after controlling for the family environment. After controlling for the family environment, there is an interaction effect between the learning model and project-based assessment on civics learning outcomes. In the group of students taught using VCT, the learning outcomes of students who were given project-based assessments were higher than those given conventional assessments after controlling for the family environment. In the group of students who were given project-based assessments, the learning outcomes of students who were taught using VCT were higher than those who were taught using conventional learning models after controlling for the family environment.

Recommendations
Based on the findings of this study, it is recommended that practitioners apply project-based assessment with the VCT learning model to improve students' civics learning outcomes. In addition, this model can be applied continuously in other subjects. This research is also expected to serve as input for other researchers conducting similar research, possibly with other methods.
Limitations
The limitations of this study are the limited population and sample, and the limited set of dependent variables studied. Therefore, further research is expected to reach a larger population and to measure other variables related to civics learning.

Table 1. Indicators of Civics Learning Outcomes in the Cognitive Domain
Table 2. The Indicators of the Family Environment
Table 3. Results of the descriptive analysis of learning outcomes
The family environment variance homogeneity test was carried out using the Bartlett test, with the analysis performed in Microsoft Excel 2007. The test is carried out at the significance level of 0.05 by comparing the calculated value with the table value, using the criteria: accept H0 if the calculated value < the table value, and reject H0 if the calculated value > the table value. Using the significance level of 0.05 and df = 3, the calculated value = 5.40 and the table value = 7.82; since 5.40 < 7.82, H0 is accepted, which means that the data come from a homogeneous distribution. The normality test was performed with the Lilliefors test, and the results showed that the research data were normally distributed: the L-count values are smaller than the L-table values at the significance level of 0.05 (with n = 30, L-table = 0.16, and with n = 60, L-table = 0.11). Thus, it can be concluded that all subpopulations of civics learning outcomes and student family environments in this study were normally distributed, so the data normality condition is met. The next test is the homogeneity test of the variance of civics learning outcomes, also carried out using the Bartlett test with the help of Microsoft Excel 2007 (alternatively, the F-test formula could be used). Using the significance level of 0.05 and df = 3, the calculated value = 1.111 and the table value = 7.815; since 1.11 < 7.82, H0 is accepted, which means the data come from a homogeneous distribution.
Table 4. F-test Statistics on A, B, and A*B on Civics Learning Outcomes by Controlling for Family (X)
Table 5. t-Test Statistics on the Average Parameters of Civics Learning Outcomes (Y) between All Levels of the Formative Assessment Factor (B) for Each Level of the Learning Model Factor (A) by Controlling for Family (X)
Table 6. t-Test Statistics on the Average Parameters of Civics Learning Outcomes (Y) between All Levels of the Learning Model Factor (A) for Each Level of the Formative Assessment Factor (B) by Controlling for Family (X)
2022-08-28T15:16:27.386Z
2022-10-15T00:00:00.000
{ "year": 2022, "sha1": "d024a4edebd38450deba79ac994aab651de0e8bd", "oa_license": "CCBY", "oa_url": "https://pdf.eu-jer.com/EU-JER_11_4_1969.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "17b682a564d2a9a1faf2cd68c93dc6f23f4e8d9b", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
6157762
pes2o/s2orc
v3-fos-license
Calcium phosphate-hybridized tendon graft to enhance tendon-bone healing two years after ACL reconstruction in goats Background We developed a novel technique to improve tendon-bone attachment by hybridizing calcium phosphate (CaP) with a tendon graft using an alternate soaking process. However, the long-term result with regard to the interface between the tendon graft and the bone is unclear. Methods We analyzed bone tunnel enlargement by computed tomography and histological observation of the interface and the tendon graft with and without the CaP hybridization 2 years after anterior cruciate ligament (ACL) reconstruction in goats using EndoButton and the postscrew technique (CaP, n = 4; control, n = 4). Results The tibial bone tunnel enlargement rates in the CaP group were lower than those in the control group (p < 0.05). In the CaP group, in the femoral and tibial bone tunnels at the anterior and posterior of the joint aperture site, direct insertion-like formation that contained a cartilage layer without tidemarks was more observed at the tendon-bone interface than in the control group (p < 0.05). Moreover, the gap area between the tendon graft and the bone was more observed at the femoral bone tunnel of the joint aperture site in the control group than in the CaP group (p < 0.05). The maturation of the tendon grafts determined using the ligament tissue maturation index was similar in both groups. Conclusions The CaP-hybridized tendon graft enhanced the tendon-bone healing 2 years after ACL reconstruction in goats. The use of CaP-hybridized tendon grafts can reduce the bone tunnel enlargement and gap area associated with the direct insertion-like formation in the interface near the joint. Introduction The anterior cruciate ligament (ACL) is the most frequently injured ligament in the knee. Surgical reconstruction using a replacement graft is the preferred method of treatment. A semitendinosus-gracilis (STG) tendon graft, the so-called soft tissue graft, is commonly used [1,2]. However, the STG tendon graft requires softtissue-to-bone healing within both bone tunnels. Grana et al. [3] observed indirect bonding with fibrous tissue between a hamstring tendon autograft and bone tunnels in a rabbit ACL reconstruction model. The indirect bonding formation at the interface is similar to that observed after a long period [4]. Many studies attempted to improve the healing of tendon to bone with different therapeutic modalities including application of periosteum augmentation, bone morphogenetic protein, calcium-phosphate cement, granulocyte colony-stimulating factor, gene transfer, and so on [5][6][7][8][9][10][11]. We developed a novel technique to improve tendon-bone attachment by hybridizing calcium phosphate (CaP) with tendons using an alternate soaking process [12]. Using the CaP-hybridized tendon, we observed a scarless direct bonding area between the tendon graft and the bone without inflammation two to three weeks after ACL reconstruction in rabbits [13,14], which was also observed in goats [15]. The CaP-hybridized tendon graft reduced bone tunnel enlargement in the femoral side 6 months after ACL reconstruction in goats [16]. Moreover, the anterior-posterior translations in the reconstructed knees in the CaP group were shorter and the corresponding in situ forces were greater than those in the control group at full extension and 60°of knee flexion 1 year after the ACL reconstruction in goats [17]. 
The new bone formation at the bone tunnel and cartilage layer formation at the tendon-bone interface near the joint in the CaP group were more observed than those in the control group [17]. Commonly, a minimum of two years of follow-up after ACL reconstruction is recommended in order to evaluate the clinical results. However, the longterm effect of the CaP-hybridized tendon after ACL reconstruction in animal experiments is unclear. To clarify this issue, we used a goat model of ACL reconstruction, because long-term studies of goat knees have shown the effective restoration of knee stability after ACL reconstruction [18,19]. We considered that an appropriate mechanical stress at the interface after bonding between the tendon graft and the bone, and a prerequisite firm anchoring may promote direct insertion-like formation at the tendonbone interface and the maturation of the tendon graft [13][14][15][16][17]. Therefore, we hypothesized that cartilaginous anchoring formation between the tendon graft and the bone is more mature, and both the bone tunnel enlargement and the percentage of the gap area at the interface that is loosening 2 years after ACL reconstruction are smaller when using the CaP hybridization method than when using the conventional method, because the anchoring formation in the CaP group (direct bonding and cartilaginous anchoring) is different from that in the control group (fibrous bonding) from 2 weeks to 1 year after the operation [13][14][15][16][17]. Moreover, the graft in the CaP group may be more mature than that in the control group, because of its anchoring formation associated with an appropriate mechanical stress. The objective of this study was to analyze bone tunnel enlargement by computed tomography (CT) and histological observation of the interface and the tendon graft in the group with the CaP-hybridized tendon graft and in the group with untreated tendon graft 2 years after ACL reconstruction using a goat model. CaP hybridization method Eight skeletally mature female Saanen breed goats (50-70 kg) were used in this study. The goats were maintained in accordance with the guidelines of the Ethical Committee of the Biomaterial Center of the National Institute for Materials Science and the National Institutes of Health guidelines for the care and use of laboratory animals (NIH Pub. No. 85-23 Rev. 1985). Flexor digitorum longus (FDL) tendons were used in this study. Double-strand FDL tendons of 45 mm length and 5.5 mm diameter were prepared. The tibial end of the grafts was secured using the Krackow technique with No. 2 nonabsorbable sutures (ETHIBOND* EXCEL, ETHICON, INC., USA), and a polyester tape suture tied over the EndoButton (Smith & Nephew, Andover, MA, USA) was passed through the looped femoral end of the grafts. Then, the central third of the grafts, considered as the intra-articular portion, was covered with the sleeve of a rubber glove tied on each side with the No. 2 nonabsorbable sutures to prevent CaP hybridization [15][16][17]. After these procedures, the grafts were soaked in 100 ml of a Ca solution (100 mM CaCl 2 + 30 mM L-histidine, pH 7.4, 280 mOsm/l). The grafts were subsequently soaked in 100 ml of a NaHPO 4 solution (116.4 mM NaH 2 PO 4 :128.7 mM Na 2 HPO 4 12H 2 O = 15%:85%, pH 7.4, 280 mOsm/l) ( Figure 1). The temperatures of the solution and room were both 25°C. Before each soaking, the grafts were washed in a saline solution. This cycle was repeated ten times [13][14][15][16][17]. 
As the control, the tendon was soaked in a saline solution for 10 minutes. Surgical procedures All the surgical procedures were performed under sterile conditions with the animals under general anesthesia. On the right knee, an anterior lateral skin incision was made. The ACL was then completely transected and the gross anterior subluxation of the tibia was confirmed by manual examination. We drilled using a Kirschner wire from the anteromedial surface of the proximal tibia to the tibial insertion of the ACL. Then we drilled femoral side straightly with the knee positioned in 45°flexion to ensure consistency of surgical technique. The bone tunnel was reamed using a 5.5-mm-diameter canulated drill, resulting in the bone tunnel in the femoral side opening anterior of the femoral insertion. The length of the tunnel was at least 20 mm. The graft described above was passed through the femoral and tibial tunnels and fixed to the anteromedial surface of the tibia with 20 N as the initial tension using a 4.5-mm-diameter cortex screw (MEIRA Corporation, Nagoya, Japan) [15][16][17]. Postoperatively, all the goats were allowed free cage activity (cage area, 50 m 2 ). All the goats tolerated the operation well and were partially weightbearing within a few hours after surgery. However, visual inspection revealed that normal gait patterns did not return until 3 to 4 weeks after the surgery. All ACL replacement grafts remained 2 years after the operations ( Figure 2). Ex vivo CT The four frozen specimens of goats from each group were used for CT (Brilliance CT 64, Philips, Amsterdam, Netherlands) to assess femoral and tibial bone tunnels. The CT (voltage: 120 kV, current: 230 mA) in the full extension knee position was supervised and the obtained CT images were analyzed by a single radiologist. A standard protocol was used consistently throughout the study. Initial volume acquisition was with 0.9 mm slices from 30 mm above the femoral tunnel to 30 mm below the tibial tunnel. Using the work station of Virtual Place Lexus (AZE Ltd., Tokyo, Japan), three-dimensional images were reconstructed. Axial images of the femoral and tibial bone tunnels were obtained using the reconstructed three-dimensional images. We measured the tunnel cross-sectional area (CSA) of the femur and tibia at the main joint aperture site using the axial images of the femoral and tibial bone tunnels. The rate of increase in tunnel CSA was calculated using the following formula: CSA increase rate (%) = (CSA at 2 years -Initial CSA) × 100/Initial CSA [16]. Initial CSA is calculated on the basis of the diameter of the reamer. Histological analysis After the CT analysis, the femur-ACL graft and ACL graft-tibia complex were harvested. At each period, four specimens from each group were fixed in 10% neutral buffered formalin, decalcified, and embedded in paraffin. The specimens were sliced 5 μm thick parallel to the long axis of the bone tunnel and then stained with hematoxylin and eosin (H-E) and safranin-O to identify the cartilage layer in the interface. The specimens were examined by light microscopy after staining (BX-51, Olympus Optical Co., Ltd., Tokyo, Japan). The tendonbone interface was histologically compared between the CaP and control groups. The interface between the tendon and the bone tunnel was assessed by observing its formation (fibrous bonding, cartilaginous insertion and gap area at joint aperture site) at both the anterior and posterior bone tunnels and on both femoral and tibial sides. 
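The CSA increase rate defined in the Ex vivo CT subsection above is a simple ratio; the following small Python sketch (not from the paper; the circular initial cross-section derived from the 5.5 mm drill diameter and the example measurements are illustrative assumptions) computes it for a set of hypothetical tunnel measurements.

```python
import math

REAMER_DIAMETER_MM = 5.5  # canulated drill diameter reported in the surgical procedure

def initial_csa(diameter_mm=REAMER_DIAMETER_MM):
    """Initial tunnel cross-sectional area (mm^2), assuming a circular drill hole."""
    return math.pi * (diameter_mm / 2.0) ** 2

def csa_increase_rate(csa_2yr_mm2, csa_initial_mm2=None):
    """CSA increase rate (%) = (CSA at 2 years - initial CSA) * 100 / initial CSA."""
    if csa_initial_mm2 is None:
        csa_initial_mm2 = initial_csa()
    return (csa_2yr_mm2 - csa_initial_mm2) * 100.0 / csa_initial_mm2

if __name__ == "__main__":
    # hypothetical CSA measurements (mm^2) at the joint aperture, one per specimen
    tibial_csa_2yr = [45.0, 52.3, 38.7, 60.1]
    for c in tibial_csa_2yr:
        print(f"CSA {c:5.1f} mm^2 -> increase rate {csa_increase_rate(c):6.1f} %")
```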
Moreover, the percentage of the gap area in the bone tunnel was calculated using the following formula: Gap area percentage (%) = (length of the gap area from the joint aperture site) × 100/(length of the tendon graft in the bone tunnel) (Figure 3). The ligament tissue maturation index (LTMI) of Murray et al. [20] was used to evaluate the maturation of the tendon grafts according to the following 3 criteria: [1] cellular aspects, including cell density, nuclear shape, and orientation; [2] extracellular matrix characteristics, such as crimp; and [3] vascular features, including blood vessel density and maturity (total score, 28 points).

Statistical Analyses
The bone tunnel enlargement data and the histological measurements of the two groups were compared using Student's t-test at a significance level of p < 0.05. To compare the histological differences between the CaP and control groups, the Mann-Whitney U test was used, also at a significance level of p < 0.05.

Ex vivo CT
Bone tunnel enlargement in the control group was greater than that in the CaP group. The bone tunnel enlargement rate in the tibial bone tunnel in the CaP group was significantly smaller than that in the control group. The enlargement rate of the femoral bone tunnel CSA was 143.3 ± 116.8% with the untreated tendon graft and 122.1 ± 77.9% with the CaP-hybridized tendon graft (p = 0.3861). That of the tibial bone tunnel CSA was 215.9 ± 106.2% with the untreated tendon graft and 87.9 ± 64.8% with the CaP-hybridized tendon graft (p = 0.0427) (Figure 4).

Histological Findings
The results of the histological analysis (CaP, n = 4; control, n = 4) are shown in Tables 1, 2, and 3. A cartilage layer at the interface was observed more frequently in the CaP group than in the control group (p = 0.0352). In the femoral and tibial bone tunnels, anterior and posterior to the aperture site, the cartilage layers were on average 150 μm to 2.6 mm in length and 80 μm to 550 μm in thickness, and staining of glycosaminoglycan by safranin-O was observed. At each site, neither the length nor the thickness differed significantly between the CaP and control groups (p > 0.05). The cartilage layer showed two distinct layers, namely, uncalcified fibrocartilage and calcified fibrocartilage, without tidemarks (Figures 5, 6). In the CaP group, at the aperture site, cartilage layers between the tendon and the bone were observed at the anterior surface of the tibial bone tunnel in all 4 specimens, at the posterior surface of the tibial bone tunnel in 3 specimens, at the anterior surface of the femoral bone tunnel in 2 specimens, and at the posterior surface of the femoral bone tunnel in 2 specimens. In the tibial bone tunnel, the cartilage layer was more prominent than that in the femoral bone tunnel. In the control group, Sharpey's fibers were dense and penetrated the bone perpendicular to the direction of the lamellae (Figure 7). Sharpey's fibers observed at the anterior bone tunnel were long, whereas those at the posterior bone tunnel were short and dense. At the aperture site, cartilage layers between the tendon and the bone were observed at the anterior surface of the tibial bone tunnel in 3 specimens, at the posterior surface of the tibial bone tunnel in 1 specimen, at the anterior surface of the femoral bone tunnel in none of the specimens, and at the posterior surface of the femoral bone tunnel in 1 specimen.
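To make the statistical procedure described above concrete, the sketch below reproduces the two tests with SciPy. The group sizes (four specimens per group) follow the study design, but every number in the arrays is an invented placeholder rather than a measured value.

```python
import numpy as np
from scipy import stats

# Placeholder data: four specimens per group (values are illustrative only).
csa_increase_control = np.array([310.0, 150.0, 210.0, 190.0])  # % enlargement, control
csa_increase_cap     = np.array([ 60.0, 110.0,  95.0,  85.0])  # % enlargement, CaP

ltmi_control = np.array([17, 19, 20, 18])  # ordinal scores (max 28)
ltmi_cap     = np.array([15, 22, 21, 19])

# Continuous bone-tunnel enlargement data: two-sample Student's t-test.
t_stat, p_t = stats.ttest_ind(csa_increase_cap, csa_increase_control)
print(f"t-test: t = {t_stat:.3f}, p = {p_t:.4f}")

# Ordinal histological scores: Mann-Whitney U test.
u_stat, p_u = stats.mannwhitneyu(ltmi_cap, ltmi_control, alternative="two-sided")
print(f"Mann-Whitney U: U = {u_stat:.1f}, p = {p_u:.4f}")
```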
The gap area in the femoral side in the control group was more observed than that in the CaP group (p = 0.0178). At the gap area, a synovial tissue cover was observed on the tendon and bone tunnel surface ( Figure 8). The gap area between the tendon and the bone was observed at the anterior surface of the tibial bone tunnel in none of the specimens, at the posterior surface of the tibial bone tunnel in 2 specimens, at the anterior surfaces of the femoral bone tunnel in all 4 specimens, and at the posterior surface of the femoral bone tunnel in 3 specimens in the control group. The gap area rates were 0% in the anterior tibial bone tunnel, 14.4 ± 18.7% in the posterior tibial bone tunnel, 42.4 ± 20.4% in the anterior femoral bone tunnel, and 13.0 ± 13.4% in the posterior femoral bone tunnel. In the CaP group, the gap area at the interface was small at the anterior surface of the tibial bone tunnel in none of the specimens, at the posterior surface of the tibial bone tunnel in none of the specimens, at the anterior surfaces of the femoral bone tunnel in 1 specimen, and at the posterior surface of the femoral bone tunnel in 1 specimen. The gap area rates were 0% in the anterior tibial bone tunnel, 0% in the posterior tibial bone tunnel, 4.3 ± 8.7% in the anterior femoral bone tunnel, and 4.5 ± 9.0% in the posterior femoral bone tunnel. The gap area rate in the anterior femoral bone tunnel in the CaP group was significantly smaller than that in the control group (p = 0.0069). Two years after ACL reconstruction, the LTMIs of the tendon graft were similar in the CaP and control groups, in terms of the uniform linear collagen orientation and the spindle-shaped nuclear morphology of the graft (Figure 9). The LTMIs were 19.3 ± 3.9 for the CaP group and 18.5 ± 1.7 for the control group. There was no significant difference between the CaP and control groups (p = 0.7398). Discussion We found direct insertion-like formation at the interface in the CaP group 2 years after ACL reconstruction. However, the structure was different from a normal insertion structure because it lacked tidemarks. The cartilage layer in the femoral and tibial bone tunnels at the anterior and posterior of the aperture site was more prominent between the tendon graft and the bone in the CaP group than in the control group. Mechanical load has significant effects on the formation, degradation, regeneration, and tissue composition of tendons in vitro and in vivo [21][22][23][24][25]. Yamakado et al. [26] demonstrated the regeneration of tendon-bone junctions at the entrance of bone tunnels after extensor tendons were grafted extra-articularly. They suggested that tension and/or compression enhances the healing of tendonbone junctions and chondroid formation. In the CaP method, scarless direct bonding, which is due to the fact that the bonelike microstructure contains low-crystalline apatite and type I collagen, and reduction of inflammation are achieved in the early postoperative phase [13][14][15]. Then, the interface becomes a cartilaginous insertion 6 months after ACL reconstruction [16]. Moreover, the in situ force in the tendon graft is greater in the CaP method than in the conventional method 1 year after ACL reconstruction [17]. The interface with a CaP-hybridized tendon graft after direct bonding can react to tension and/or compression. The mechanical environment (tensile and/or compressive force) in this direct bonding probably further may promote the differentiation of the interface zone to a cartilage layer. 
Therefore, cartilaginous direct insertion-like formation can be realized 2 years after ACL reconstruction in the CaP group. However, we were unable to regenerate tidemarks, because the tendon-bone alignment in the reconstructed knee differs from that in the knee with normal ACL insertion. ACL insertion consists of four distinguishable tissue layers in transition, that is, ligaments, fibrocartilage, mineralized fibrocartilage, and bone [27,28]. The fibrocartilage and mineralized cartilage function as a stress or shock absorber by reducing the stiffness gradient between the ligament and the bone [29,30]. Reconstructing the highly specialized structure of cartilaginous insertion is necessary to reduce the transverse contraction of the ligament during tensile loading, which therefore acts as a stretching brake [29]. Joint aperture site healing with two distinguishable cartilaginous insertion layers (direct insertion-like formation) in the CaP group can neutralize graft motion within the tunnel. On the other hand, in the control group, mainly fibrous tissue intervention was observed at the interface between the tendon graft and the bone tunnel. A prolonged instability of the interface formed using an untreated tendon graft in the early postoperative period could lead to fibrous insertion formation owing to micromotion and inflammation at the interface for a long time. Shear stress at the interface associated with the bungee effect (longitudinal micromotion) [31], rotational micromotion, and transverse micromotion (windshield wiper effect) promotes fibrous insertion formation (Sharpey's fibers) in the control group, as similarly previously reported [3,13]. Bone tunnel enlargement in the CaP group was lesser in both the femoral and tibial bone tunnels than that in the control group. The possible etiologies for tunnel widening include synovial fluid cytokines, graft-tunnel micromotion, and inflammatory mediators. Synovial fluid influx into a bone tunnel may also affect healing. Berg et al. [32] studied the healing of empty bone tunnels in rabbit knees and found rapid bone formation at the extra-articular exit of a femoral tunnel, whereas bone formation was delayed at the intra-articular exit of the tunnel. They postulated that synovial fluid cytokines delay the healing of the intra-articular exit of the tunnel. Moreover, enlargement of the articular end of the bone tunnels is a common problem in ACL reconstruction [33], because the graft-tunnel motion can be greater at the tunnel aperture site than at the extra-articular end of the tunnel for grafts fixed by suspensory fixation. In the control group, a marked tunnel enlargement was observed by CT, and a histologically large gap area with synovial tissue covering the tendon graft and bone tunnel surface was also observed. Therefore, the continued micromotion at the interface can lead to bone tunnel enlargement not only on the femoral side but also on the tibial side when using untreated tendon grafts in our animal experiment 2 years after the operation. Moreover, bone tunnel enlargement on the femoral side 6 months after ACL reconstruction [9] could increase the gap area at the interface on the femoral side 2 years after the operation, because of micromotion and joint fluid. Synovial fluid pumping with cytokines caused by graft motion in the bone tunnel can promote synovial membrane formation on the tendon graft and bone tunnel surface. These phenomena may account for the loosening at the interface and knee instability in the control group. 
On the other hand, in the CaP group, the cartilage layer in the interface on the joint aperture site side can prevent the influx of synovial fluid and absorb the graft-tunnel motion. Therefore, bone tunnel enlargement was lesser and the gap area was smaller in the CaP group than in the control group. The enhanced tendon-bone healing in the CaP group did not result in the maturation of the tendon graft. We considered that firm anchoring would be important for the maturation of the tendon graft. The midsubstance of the graft prepared using the CaPhybridization method may promote the recovery of mechanical strength owing to its effective load transfer through the tibia-graft-femur construct after anchoring 1 year after ACL reconstruction in goats [10]. However, the maturation of the tendon graft in both groups was similar 2 years after ACL reconstruction. It may be difficult in terms of long-term results in animals to set appropriate mechanical conditions for ACL reconstruction by joint geometry. Regarding the limitation of this study, since we did not perform mechanical analysis 2 years after ACL reconstruction, the mechanical strength in both groups is unclear in this study. Further study to clarify the mechanical properties of the tendon graft and to clarify the effect of mechanical stress on graft maturation is required. Examination of non-significant results of this study may be necessary to check for possible false negatives due to the small number of samples. If we use a larger number of specimens, a clearer significant difference may have appeared. The CaP-hybridized tendon graft enhanced the tendon-bone healing 2 years after ACL reconstruction in goats. The CaP-hybridized tendon graft can reduce the bone tunnel enlargement and the gap area associated with the direct insertion-like formation in the interface at the joint aperture site. Histological sections stained with H-E for the CaP group (a) and control group (b) (x 400). This area is the intra-articular portion. The microstructure of the tendon graft showed a uniform highly oriented collagenous matrix with interspersed spindle cells.
Phenomenological analysis of densification mechanism during spark plasma sintering of MgAl2O4

Spark plasma sintering (SPS) of MgAl2O4 powder was investigated at temperatures between 1200 and 1300°C. Significant grain growth was observed during densification. The densification rate always exhibits at least one strong minimum, and densification resumes only after an incubation period. Transmission electron microscopy investigations performed on sintered samples never revealed extensive dislocation activity in the elemental grains. The densification mechanism involved during SPS was determined by anisothermal (investigation of the heating stage of an SPS run) and isothermal methods (investigation at given soak temperatures). Grain-boundary sliding, accommodated by an in-series {interface-reaction/lattice diffusion of the O2- anions} mechanism controlled by the interface-reaction step, governs densification. The zero-densification-rate period, detected for all soak temperatures, arises from the difficulty of annealing vacancies, a step necessary for the densification to proceed. The detection of atomic ledges at grain boundaries and the modification of the stoichiometry of spinel during SPS could be related to the difficulty of annealing vacancies at the temperature soaks.

I. Introduction
Fully dense polycrystalline alumina-magnesia spinel (referred to as spinel hereafter), MgAl2O4, is an attractive material for its excellent optical properties (in-line transmittance) in the visible to mid-infrared ranges. 1,2 It is currently considered a cost-effective alternative to monocrystalline sapphire for the manufacturing of infrared domes, intended to be mounted on the new generation of high-speed air-to-air/ground-to-air missiles coming onto the market. However, most of the polycrystalline materials developed up to now exhibit a grain size in the 10-150 µm range, explaining the disappointing mechanical/thermomechanical properties reported. 3,4 A different approach has recently proved possible to obtain fully dense polycrystalline spinel with a grain size well below the micrometer, 5,6 using a sinter/hot-isostatic pressing (HIP) strategy. To simultaneously limit grain growth and obtain nearly maximum densification, spark plasma sintering (SPS) has been successfully applied to other materials, such as TiN, 7 Al2O3, 8,9 Si3N4, 10 3 and 8 mol% yttria-stabilized ZrO2, 11,12 and β-SiC. 13 Dense polycrystalline spinel with acceptable optical properties (residual pores are still present in the material) can be processed using SPS. 14 The grain size obtained is nevertheless still on the order of tens of micrometers and should be defined as coarse. 14 We therefore decided to investigate SPS as a densification method for spinel, aiming to combine high densification with a grain size below one micrometer. The sintering behavior of a commercially available spinel powder was investigated for a soak temperature in the range 1200-1300°C, a soak time of 15 min, a heating rate of 100 °C/min, and an applied macroscopic compaction pressure of 25 MPa. The relative density of the sintered samples has been correlated with their average grain size. The mechanisms controlling densification during the SPS experiments are investigated and discussed.

TABLE I. Impurities, humidity level, and specific surface area of the raw powder. Note: ppm in weight.
II.
Raw powder and green samples ready for SPS experiments The commercially available S30CR raw powder (Baikowski Chimie, La Balme de Sillingy, France) was selected as the starting material. The main impurities determined by inductively coupled plasma spectrometry (ICP; Varian Vista Pro, Varian Inc., Palo Alto, CA) are listed in Table I. Because the raw powder is crystallized from an alumbased process, the residual sulfur concentration appears relatively high (around 400 ppm in weight). Using Brunauer-Emmett-Teller analysis (BET) measurements (Nova 2000, Quantachrome Instruments, Boynton Beach, FL) the specific surface area (SSA) of the raw powder was found in the range 30-31 m 2 /g. Scanning electron microscope (SEM; JSM-6301F, JEOL Ltd., Tokyo, Japan) examinations showed aggregation of the raw powder. The elemental crystallites constituting the aggregates have a spherical shape and an average diameter in the range 55-70 nm, in good agreement with the specific surface area. High solids loading (53 wt%) water-based slurries were prepared from the S30CR raw powder (ammonium polyacrylate was incorporated as a dispersant). After deagglomeration, optimal blending, and degassing, samples were slip-casted in porous plaster molds. Once setting was completed, green samples were left in a drying oven for a few hours. After a debinding step in air (480°C/3 h), samples were ready for the SPS experiments. The diameter of the samples was typically 19.7 mm and their thickness 6 mm (the diameter/thickness ratio was always above 2.5, which strongly minimized the axial density gradient during the SPS experiments). For all samples, the relative green density was around 42% (the theoretical density for spinel has been calculated to be 3.579 g/cc from the elemental lattice). The fracture surface in Fig. 1 shows the typical microstructure observed in green samples. III. Experimental Procedure All the tests were conducted in vacuum, with the slipcasted debinded samples, on equipment (SPS-2080, SPS Syntex Inc., Kanagawa, Japan) located at the Arrhenius Laboratory (Stockholm University, Sweden). For each test, a graphite die (internal diameter of 20 mm, thickness of 15 mm) was filled with one green sample and mounted on the SPS equipment (graphite punches). A heating rate of 100°C/min, a macroscopic compaction pressure of 25 MPa (applied at room temperature), and the standard 12:2 pulse sequence for the direct current (dc) 8 were chosen. The temperature was obtained from an optical pyrometer focused on the outer surface of the graphite die. This temperature was therefore not the one actually seen by the powder. Modeling of the temperature distribution during field-assisted sintering, performed on a TZ3Y raw powder, has been recently conducted. 15 For graphite die geometry similar to that used here, the temperature difference between the specimen center (4.25 mm thick) and the external pyrometer focused on the outer die wall surface was calculated to be around 80°C. 15 The thermal conductivity of spinel is much higher than that of TZ3Y (respectively 14.6 and 2.5 W/m/K). The maximum temperature difference between the outer surface of the die and the powder compact should therefore be around 80°C (probably less) during the SPS experiments performed on spinel samples. 
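Returning to the powder characterization above, the claim that the 55-70 nm crystallite size is "in good agreement with the specific surface area" can be checked with the usual BET equivalent-diameter relation d = 6/(ρ·SSA), which assumes dense, spherical, monodisperse particles (consistent with the SEM observations reported above). The sketch below applies that relation using the theoretical density of 3.579 g/cm³ quoted above; the function name is ours.

```python
def bet_equivalent_diameter_nm(ssa_m2_per_g: float, density_g_per_cm3: float = 3.579) -> float:
    """Equivalent spherical diameter (nm) from the BET specific surface area.

    Assumes dense, spherical, monodisperse particles: d = 6 / (rho * SSA).
    """
    rho_g_per_m3 = density_g_per_cm3 * 1e6     # g/cm^3 -> g/m^3
    d_m = 6.0 / (rho_g_per_m3 * ssa_m2_per_g)  # metres
    return d_m * 1e9                           # nanometres

for ssa in (30.0, 31.0):  # measured SSA range of the S30CR powder
    print(f"SSA = {ssa} m2/g  ->  d ~ {bet_equivalent_diameter_nm(ssa):.0f} nm")
# Gives roughly 54-56 nm, consistent with the 55-70 nm observed by SEM.
```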
During all tests (heating, soak, and cooling), the height variation of the powder bed (DL = L-L0 < 0, L is the instantaneous height and L0 the initial height of the powder bed when the macroscopic pressure is applied at room temperature) was precisely measured. Each test was corrected to account for the dimensional variations of the SPS equipment (a blank test with a fully dense polycrystalline spinel sample positioned in the die was performed and then subtracted to the test result). The instantaneous sample height variation and the relative density D are linked by the following relationship: where Lf is the final height, L the instantaneous height, and Df the final relative density. The apparent density of the sintered samples was measured using the Archimedes method with deionized water (three measurements were made for each sample). The final relative density, Df, was obtained using a theoretical density of 3.579 g/cc for stoichiometric spinel. Thin foils were prepared from the central zone of as sintered samples by slicing and mechanical polishing, followed by ion milling. The foils were covered with a thin layer of graphite and observed using a Philips CM30 microscope (Philips Research Laboratories, Eindhoven, The Netherlands, acceleration voltage of 300 kV, point-to-point resolution of 0.19 nm) equipped with an energy dispersive spectroscopy (EDS) microanalysis system (Thermo Electron Corporation, Waltham, MA, Noran system equipped with an ultrathin window). The general microstructure was observed in bright field mode. Local EDS analyses were performed on one sample using the scanning transmission electron microscopy (STEM) mode in the center of the elemental grains (10 measurements), at grain boundaries (10 measurements) and at triple points (9 measurements) using a probe size of 5.6 nm. Quantitative analyses were carried out using the Doukhan-Van Cappellen method, based on electroneutrality of the specimen to access the local thin foil thickness. 16 Additional investigations were also performed using the high-resolution transmission electron microscopy (HRTEM) mode at grain boundaries. TEM was also used to evaluate the grain size for each sintered sample. A line-intercept method taking into account at least 150 grains (with a three-dimensional correction factor determined to be 1.2, approximating the grains to spheres 17 ) was used. The densification rate variation (relative to temperature), 1/D dD/dT, is shown as a function of temperature up to 1400°C in Fig. 2 For this study, the 1200 to 1300°C temperature range, where the densification rate (relative to temperature) is always around 2x10 -3 /°C, was selected. Even if the real temperature in the compact is not precisely known [finite element modeling (FEM) calculations are required] we assume that the temperature difference between the matrix, where the pyrometer is focalized, and the powder bed is at maximum 80°C and constant in the range 1200-1300°C. The following SPS conditions in vacuum were then selected: (i) soak temperature = 1200-1225-1250-1275-1300°C; (ii) soak time = 15 min; (iii) heating rate (HR) = 100°C/min; (iv) macroscopic compaction stress = 25 MPa (applied at room temperature); and (v) 12:2 pulse configuration. In the past, mass transport during sintering, with or without an external load, has been considered close to that occurring in high temperature creep. 
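The height-density relationship invoked above follows from mass conservation in a die of constant cross section: D·L = Df·Lf, hence D(t) = Df·Lf/L(t) with the symbols defined above. The sketch below applies this relation to convert a corrected displacement record into an instantaneous relative-density curve; the displacement values are invented placeholders chosen only so that the curve runs from the 42% green density to a 98.8% final density, and the function name is ours.

```python
import numpy as np

def relative_density(delta_L_mm: np.ndarray, L0_mm: float, D_f: float) -> np.ndarray:
    """Instantaneous relative density from the corrected ram displacement.

    delta_L_mm : displacement record DL = L - L0 (negative during shrinkage)
    L0_mm      : initial powder-bed height when the pressure is applied
    D_f        : final relative density measured by the Archimedes method

    Mass conservation at constant die cross-section gives D * L = D_f * L_f,
    hence D(t) = D_f * L_f / L(t).
    """
    L = L0_mm + delta_L_mm  # instantaneous height
    L_f = L[-1]             # final height at the end of the run
    return D_f * L_f / L

# Illustrative (made-up) record: 6 mm green sample ending at 98.8% relative density.
dL = np.array([0.0, -1.0, -2.0, -3.0, -3.45])
print(relative_density(dL, L0_mm=6.0, D_f=0.988))  # starts near the 42% green density
```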
18,19 Assuming an approach similar to Mukherjee for the creep of dense metals, 20 it has been proposed that the SPS kinetic equation can be written as 11,21 : where D is the instantaneous relative density of the compact, t the time, µeff the instantaneous shear modulus of the compact, K a constant, R the gas constant, T the absolute temperature, Qd the apparent activation energy of the mechanism controlling densification, b the Burgers vector (close to the lattice parameter), G the grain size, and seff the instantaneous effective stress acting on the compact. It was also proposed that µeff, and seff can be written as 11,21 : where Eth is the Young's modulus of the theoretically dense MgAl2O4 material (its variation as a function of temperature can be found in Ref. 22), neff the effective Poisson's ratio (a value of 0.26 is chosen for all experimental conditions), D0 the starting green density of the powder compact (42%), and smac the macroscopic compaction pressure (25 MPa). Equation (4), proposed for the effective stress, approximates the individual grains to spheres, independently of the relative density value. To be perfectly correct, the real effective stress should also incorporate an additional term related to a pressureless kind of driving force, which becomes prominent when the relative density approaches 1. An intuitive expression could be 2g/r, the same as for pressureless sintering, where g is the surface energy and r the pore radius. It is also possible that this pressureless kind of driving force is counterbalanced by another force related to gas pressure that develops within closing pores during the SPS runs. More investigations (theoretical and experimental) are needed to develop a more precise expression for the effective stress. In the meantime, the macroscopic compaction pressure alone is assumed to be responsible for the driving force operating during the SPS experiments we completed. From Eq. (2), following the procedure suggested by Brook, 18 Qd, p, and n can be determined. These are the key parameters enabling the identification of the mechanisms controlling densification of the powder bed during the SPS experiments. Figure 3 shows the densification curves at soak. The higher the temperature, the higher the relative density is after sintering. The shape of the curves is nevertheless uncommon and different to that reported for TZ3Y. 11 The densification curves for all temperatures, but 1200°C, adopt a wavy shape (the temperature regulation of the SPS equipment is good enough to ensure it has no influence on the curves shape). Some "plateaus," where the relative density remains almost constant during a certain period of time, are observed before densification resumes. This behavior is very similar to that observed during hot pressing of spinel powders. 23 It is also close to that observed during creep experiments on fully dense polycrystalline spinel. 24,25 The variations of 1/D dD/dt as a function of D for all the test temperatures are shown in Fig. 4. When SPS was performed between 1225 and 1300°C, strong minima where densification of the compact is strongly reduced are detected. This corresponds to the "plateau" periods observed in Fig. 3. After the minima, 1/D dD/dt increased again and finally collapsed at the end of the test. When the SPS temperature is 1200°C, the same behavior is observed, though clearly less pronounced. It is also interesting to note that the minima move to higher relative densities with an increase in the test temperature. 
To understand this behavior, it could be interesting to perform interrupted SPS experiments (with a high cooling rate to "freeze" the structure) and observe the resulting microstructure by TEM. IV. Results The grain size versus relative density trajectory, referred to as sintering path, and obtained with results from the different samples sintered by SPS is shown in Fig. 5. It is compared to the sintering path obtained for the same material (slip-casted samples) densified using pressureless sintering (PS) in air (the heating rate is only 10°C/min, in comparison to 100°C/min for the SPS experiments) at 1500 °C. In the green samples, the elemental crystallite size is around 55 to 70 nm (Fig. 1). A significant grain growth is observed during both the SPS and PS runs. The sintering path of the SPS material is below the one of the PS material. For a given relative density, the final grain size obtained using SPS will be smaller. A good approximation for the grain size/relative density trajectory obtained by SPS is given by the dashed line visible in Fig. 5. Subsequently, for the rest of the work, the following expression linking G to D will be used: Even if the residual porosity level is different, the typical microstructures for samples sintered at 1200°C (76.58% relative density, 110-nm grain size), 1250°C (85.08% relative density, 177-nm grain size), and 1275°C (93.99% relative density, 260-nm grain size) for 15 min are qualitatively similar. As an example, the microstructure observed on the sample sintered at 1250°C is shown in Fig. 6. For all samples (i) the residual porosity is located at grain boundaries (intergranular porosity), and is homogeneous in size (no aggregation to form large pores). Its spatial distribution is also homogeneous; (ii) the spinel grains constituting the matrix exhibit an equiaxed shape and a narrow grain size distribution; and (iii) no dislocation activity was detected in the elemental grains or at grain boundaries (multi two-axis tilting of the thin foils). The typical microstructure developed in the sample sintered at 1300°C for 15 min (98.82% relative density, 379-nm grain size) is shown in Figs. 7(a) and 7(b). Most of the porosity has disappeared, only a few residual pores are observed at triple points and homogenously distributed throughout the thin foil. Some intragranular pores are detected [see white arrow in Fig. 7(a)], though their total content is low. The grains still exhibit an equiaxed shape but it seems that the average grain size from one elemental crystal to another one is more variable, as compared to previous samples. Some grains also exhibit an intragranular dislocation activity [ Fig. 7(b)]. The slip systems activated were not investigated (use of the weak-beam method, determination of the g vector). Such dislocations are nevertheless clearly homogeneously detected in the sample sintered at 1300°C and not in the other ones sintered at a lower temperature. However, despite the clear presence of such events, the general dislocation activity is quantitatively considered as low. At that time, it is not clear why a dislocation activity is only observed for a SPS temperature of 1300°C. It could be interesting in the future to investigate SPS temperatures above 1300°C to analyze if dislocation motion becomes a prominent phenomenon that could have an influence on the control of densification. 
For all sintering temperatures, densification during SPS of the spinel samples investigated is not controlled by a mechanism involving dislocations (gliding or climbing) contribution. In fact, the microstructures observed are in good agreement with a densification mechanism based on a grain-boundary sliding/diffusion accommodated process. Figure 8(a) shows the typical aspect of grain boundaries observed using HRTEM. Grain boundaries appear depleted of any amorphous thin film. Using higher magnification, with a nonoptimal focus, most of the grain boundaries do not appear perfectly flat and exhibit atomic ledges, as shown in Fig. 8(b). EDS nanoanalyses (Fig. 9) Using similar experimental conditions, EDS nanoanalyses have been conducted on the elemental crystallites constituting the raw powder. Results are also shown in Fig. 9. Therefore, the average composition Mg0.939Al2O3.939 for crystallite centers has been determined. The O/Al ratio is constant for all crystallites with an average value of 1.98 0.02. Clearly, there is a change of stoichiometry during SPS of spinel and an impoverishment in MgO is observed, especially at grain boundaries and at triple points. However, at that time, the modification of stoichiometry was not understood (critical temperature, critical relative density, pressure effect, etc). This change of stoichiometry might also have an influence on the space charge at grain boundaries and, consequently, on the diffusion process that controls densification and grain growth. We will come back later on to that point in Sec. V. Finally, it is also possible that the slight brown color of the sintered samples does matter with the astoichiometry finally obtained. But carbon contamination, from a CO containing residual atmosphere during the SPS experiments, cannot be excluded (graphite heating die; a rotary pump is generating vacuum in which the residual gases can give rise to an atmosphere containing CO2 possibly transforming to CO at high pressure within the shrinking pores). If such a contamination occurs during the SPS tests, the expression proposed for the densification rate should be modified accordingly (in that case the driving force is not the effective pressure alone anymore, a contribution from gas pressure developing within closed pores has to be subtracted). Complementary investigations are needed to clarify this point. V. Discussion To discriminate the mechanisms controlling the densification of the spinel powder during SPS, it is necessary to determine the values of the Qd, p, and n parameters in Eq. (2). Rearranging Eq. (2) yields: where G is given by relation (5) and dT/dt is the heating rate during the SPS experiment. Phenomenological models have been developed to describe high-temperature creep behavior for ceramic polycrystals. 26,27 Such models can be adapted to an SPS problematic, where the densification mechanism is not based on dislocations activity, as it is the case for the runs performed on spinel (see TEM observations reported in Sec. IV). If the grain boundaries are perfect sources/sinks of vacancies, the n and p parameters in relations (2) and (6) can have the following values 26 : (i) n = 1, p = 2: the densification mechanism is grain-boundary sliding accommodated by volume diffusion and the apparent activation energy has a bulk character; and (ii) n = 1, p = 3: the densification mechanism is grain-boundary sliding accommodated by grain-boundary diffusion and the apparent activation energy has a grain-boundary character. 
If the grain boundaries are not perfect sources/sinks of vacancies, the n and p parameters in relations (2) and (6) have the following values27: (i) n = 2, p = 1: the densification mechanism is grain-boundary sliding accommodated by an in-series {interfacereaction/lattice diffusion} mechanism controlled by the interface-reaction step and the apparent activation energy has a bulk character; and (ii) n = 2, p = 2: the densification mechanism is grain boundary sliding accommodated by an in-series {interface reaction/grain-boundary diffusion} mechanism controlled by the interface-reaction step and the apparent activation energy has a grain-boundary character. The heating part of an SPS run, for temperatures between 940 and 1300°C, can also be exploited. Imposing a given value for the activation energy Q, it is possible, using the Excel Solver function (Microsoft Excel, Microsoft France, Courtaboeuf, France), to calculate the corresponding p, n, and K0 parameters that enable the left side of relation (6) to be equal to its right side by minimization of the residual sum of squares (RSS). FIG . 10. Determination of the densification mechanism using an anisothermal method. The heating rate is fixed to 100°C/min and the applied macroscopic compaction pressure to 25 MPa. The heating portion of a SPS experiment is used, activation energy values are imposed (Q) and the corresponding p, n, and K0 parameters involved in relation (6) are calculated using Excel Solver function. Best result is for the lowest RSS (residual sum of squares). The p, n, K0, and RSS values, obtained for each value of Q imposed, are summarized in Fig. 10. Clearly, the minimum RSS value is obtained when n and p have a value of 2.0 ±0.1 and 1.2±0.0, respectively. In that case, K0 is 51.1±2.1 and Qd has a value of 500±20 kJ/ mol. Because of the obtained values for n (around 2) and p (close to 1), the apparent activation energy has a bulk character. In the past, it has been shown that the activation energy for oxygen self-diffusion in monocrystalline stoichiometric spinel is in the range 415-500 kJ/mol, [28][29][30] comparable to the value of 500 kJ/mol obtained there. For comparison, the activation energies for the selfdiffusion of Mg 2+ cations, in the same kind of spinel monocrystal, has been determined to be around 200 kJ/ mol. 31 No value was found in the literature for the selfdiffusion of the Al 3+ cations, although it can be indirectly estimated. Martinelli et al. 32 concluded from conductivity experiments that, at 1000°C, magnesium is the more mobile cation. Independently, a Mg 2+ «Al 3+ interdiffusion activation energy of 235 kJ/mol was determined in by Watson. 33 Assuming that migration of the aluminum cation is the rate limiting step in this process, as suggested by Martinelli, then Murphy et al. 34 concluded that 235 kJ/mol could be a good evaluation for the activation energy for Al 3+ self-diffusion in spinel. Both values reported for Mg 2+ and Al 3+ cations are much lower than the apparent activation energy for densification obtained from these SPS experiments on spinel. We therefore propose that grain-boundary sliding, accommodated by an in-series {interface-reaction/lattice diffusion of the O2 anions} mechanism controlled by the interface reaction step, governs densification of our spinel samples during the heating portion of the SPS experiments we performed (at least between 940 and 1300°C). 
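The Solver-based procedure described above (impose a value of Q, then find the K0, p and n that bring the two sides of relation (6) into agreement by minimising the residual sum of squares) maps directly onto a nonlinear least-squares fit. The sketch below is a schematic reimplementation with SciPy: relation (6) is written here in the generic Mukherjee-Bird-Dorn-type form implied by the text, (1/D)(dD/dt) = K0·exp(-Qd/RT)·(b/G)^p·(σeff/µeff)^n, the Burgers-vector value is an assumed lattice-parameter-like constant, and the data arrays are synthetic placeholders rather than the authors' measurements. The fitted (n, p) pair can then be read against the mechanism combinations listed above.

```python
import numpy as np
from scipy.optimize import least_squares

R = 8.314      # gas constant, J/mol/K
b = 8.08e-10   # assumed Burgers vector ~ spinel lattice parameter, m

def log_rate_model(params, T, G, s_over_mu, Q):
    """ln of a generic Mukherjee-Bird-Dorn-type densification-rate law:
    (1/D) dD/dt = K0 * exp(-Q/RT) * (b/G)**p * (s_eff/mu_eff)**n
    """
    lnK0, p, n = params
    return lnK0 - Q / (R * T) + p * np.log(b / G) + n * np.log(s_over_mu)

def residuals(params, T, rate, G, s_over_mu, Q):
    return np.log(rate) - log_rate_model(params, T, G, s_over_mu, Q)

# Placeholder heating-stage data (940-1300 C); not the authors' measurements.
T = np.linspace(940, 1300, 30) + 273.15
G = 70e-9 * np.exp(1.5e-3 * (T - T[0]))                            # grain size, m
s_over_mu = np.linspace(4e-3, 8e-4, 30)                            # sigma_eff / mu_eff
rate = 5e13 * np.exp(-500e3 / (R * T)) * (b / G) * s_over_mu**2    # synthetic n=2, p=1

best = None
for Q in np.arange(400e3, 650e3 + 1, 25e3):   # impose Q, then fit lnK0, p, n
    fit = least_squares(residuals, x0=[10.0, 1.0, 1.0], args=(T, rate, G, s_over_mu, Q))
    rss = float(np.sum(fit.fun ** 2))
    if best is None or rss < best[0]:
        best = (rss, Q, fit.x)

rss, Q, (lnK0, p, n) = best
print(f"lowest RSS at Q = {Q/1e3:.0f} kJ/mol with n = {n:.2f}, p = {p:.2f}")
```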
Regarding the densification curves obtained for the different soak temperatures, combining relations (2) and (5) yields: where K0 is a constant. Assuming only one constant value of Qd (coherency with the results obtained using the anisothermal method, no reason for a change in densification mechanism between the heating portion and the early stages of the soaks), the slope of the straight line obtained when plotting corresponds to the n value. Knowing the n value, the slope of the straight line obtained when plotting Using a fixed value of 1 for p, Fig. 11 shows the variations of 1 1 + as a function of Ln(seff/µeff) for the different soak temperatures selected. It seems that n exhibits an average value of 2 when densification progresses at the beginning of the soaks, for temperatures between 1200 and 1275°C, in good agreement with the ideal combination (n = 2, p = 1) determined from the heating portion of an SPS experiment (see above). But in all cases the 1 1 + = trajectories differ from a straight line after some period of time at soak. When the soak temperature is fixed to 1300°C, the apparent value of n is lower than 2. Fig. 4), which allow us to calculate the corresponding values of seff and µeff. Finally the variation of 2 1 + as a function of 1/T is plotted (Fig. 12). According to relation (10), a value of Qd around 530±30 kJ/mol is calculated (XLSTAT solver, Addinsoft France, Paris, France). This value is similar to what has been obtained using the heating portion of an SPS experiment (500±20 kJ/mol). Therefore, it is proposed that grain-boundary sliding, accommodated by an in-series {interface-reaction/lattice diffusion of the O2-anions} mechanism controlled by the interface-reaction step, is still governing densification of our spinel samples at the beginning of the soak, for SPS temperatures in the range 1200-1275°C. For soak temperatures of 1200, 1225, and 1250°C, the period where n is close to 2 is followed by an abrupt densification hardening regime (the instantaneous value of n increases continuously), corresponding to a strong decrease of the instantaneous relative densification rate (Fig. 4). A similar trend is observed for a soak temperature of 1300°C, where n has a value between 1 and 2. When the soak temperature is 1275°C, the densification hardening regime is preceded by a period where n is close to 0. After the hardening period, densification resumes, for all soak temperatures. This corresponds to a densification softening period. Such densification or strain hardening/densification or strain softening behavior has already been observed during hot pressing of spinel powder 24 and high temperature creep experiments on polycrystalline spinel. 25 The softening and hardening were related to a change in the internal stresses, depending on a decrease and increase in the density of the intragranular dislocations, respectively, whose motions contribute to the relaxation of stress concentrations exerted through the predominant mechanism of grainboundary sliding. It was proposed that densification/deformation was controlled by the continuous recovery of the dislocations, limited by lattice diffusion of the oxygen ions. Clearly, this hypothesis is not in agreement with the typical microstructures observed for the different samples obtained here by SPS. For all temperatures, no extensive dislocation activity has been reported in the elemental grains constituting the sintered samples (see Sec. IV). 
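For the isothermal route just described, once n is fixed the apparent activation energy follows from the slope of the appropriate log-rate quantity plotted against 1/T, with Qd = -R × slope (assuming a natural-log scale, as in relation (10)). The snippet below shows only that final slope-extraction step; the four (T, y) points are invented placeholders, not values read from Figure 12.

```python
import numpy as np

R = 8.314  # J/mol/K

# Placeholder points: y stands for the log-rate quantity plotted against 1/T
# in the isothermal analysis (values are illustrative, not taken from the paper).
T = np.array([1200.0, 1225.0, 1250.0, 1275.0]) + 273.15
y = np.array([-21.4, -20.6, -19.9, -19.2])

slope, intercept = np.polyfit(1.0 / T, y, 1)
Qd = -slope * R
print(f"apparent activation energy ~ {Qd/1e3:.0f} kJ/mol")
```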
It is proposed here that the densification hardening, corresponding to a zero-densification-rate period, is originating from the difficulty to anneal vacancies, which is the driving force for the densification to proceed. At each soak temperature, after a certain period of time, vacancies are accumulating because the annealing step stops. Then densification also stops. An incubation time is then necessary to anneal enough vacancies to resume densification. Atomic ledges have been detected at grain boundaries using HRTEM [ Fig. 8(b)]. The EDS nanoanalyses have also shown that the stoichiometry of spinel is changing during SPS (Fig. 9). Both events, at the nanoscale, could be related to the difficulty to anneal vacancies at the soak temperatures. The astoichiometry is amplified at grain boundaries and at triple points, in comparison to the center of the grains. The EDS analyses suggest an excess of Mg and O vacancies in these areas. If the concentrations for each kind of vacancy are not the same, the consequence will be an excess of negative positive charge at grain boundaries and triple points, depending on which concentration is the highest. Therefore, the residual electrical charge will be compensated by an opposite electrical charge cloud in the surrounding grains, at the close vicinity of the grain boundaries/triple points, called space charge. 35 The space-charge region creates an electric potential and therefore modifies the conditions of the charged defect formation and diffusion that can explain the perturbation of the interface-reaction (zero densification-rate period), claimed to control densification during our SPS experiments. Interesting works have been published on the stoichiometry variation of polycrystalline spinel at grain boundaries. 36,37 No successes have been reported for the characterization of the space charge region alone in such materials. In fact, such investigations are difficult, since the space-charge dimension could be lower than the lowest spot size available on the best TEM/STEM equipments. The fact that the SPS technology also involves the submission of the compact to an electric field has perhaps also an influence on the properties of a possible spacecharge region and consequently on the possible perturbation of the interface-reaction. To investigate such a last effect, the same kind of spinel samples will be densified using standard hot pressing in the future. Another comment may be added, regarding the determination of n, p, and Qd. The method investigating the heating part of a SPS run is fairly straightforward. Inversely, more questionable are the results obtained when investigating the densification curves at soak. In such case, it may be difficult to determine a precise value for n because the hardening phenomenon appears rapidly at soak (Fig. 11, 1225 and 1250°C). At least an approximated value may be extrapolated using few points at the beginning of the soak. VI. Conclusions SPS of a stoichiometric alumina-magnesia spinel powder, shaped by slip casting, has been investigated in vacuum, in the 1200 to 1300°C temperature range. The other experimental parameters were a heating rate of 100°C/min, an applied macroscopic compaction pressure of 25 MPa, a soak time of 15 min, and the use of the standard 12:2 pulse configuration. For all soak temperatures, grain growth is significant during densification. Accounting for this phenomenon, the mechanism controlling the densification of spinel powder during the SPS experiments has been identified. 
It is proposed that grain-boundary sliding, accommodated by an in-series {interface-reaction/lattice diffusion of the O2- anions} mechanism controlled by the interface-reaction step, governs densification. This hypothesis is in good agreement with the microstructural observations performed on the SPS samples using TEM. For each soak temperature, a zero-densification-rate period is observed. In our case, the lack of dislocation activity in the elemental grains after SPS implies that the zero-densification-rate period is related to an interruption of the interface reaction that controls the annealing of vacancies. More investigations are now required to identify precisely the mechanisms responsible for such a zero-densification-rate period.
Cerebral Palsy and Accessible Housing Healthcare systems in transition countries must adapt to the many changes occurring in so‐ ciety as a whole. This is a very serious process requiring reforms that will significantly change the management and organization of healthcare at all levels. Bosnia and Herzegovi‐ na has suffered large scale destruction during the war (1992-1995), and all the medical ca‐ pacities in the country sustained significant damage. The causes of demographic changes in transition countries include: the growth of urban populations, expansion of education, the modernization of society, the disintegration of the family, medical advances, increased in‐ come, decreased fertility and increased mortality (Loga S, 2011). Introduction Healthcare systems in transition countries must adapt to the many changes occurring in society as a whole. This is a very serious process requiring reforms that will significantly change the management and organization of healthcare at all levels. Bosnia and Herzegovina has suffered large scale destruction during the war (1992)(1993)(1994)(1995), and all the medical capacities in the country sustained significant damage. The causes of demographic changes in transition countries include: the growth of urban populations, expansion of education, the modernization of society, the disintegration of the family, medical advances, increased income, decreased fertility and increased mortality (Loga S, 2011). Community Based Rehabilitation (CBR) is strategy for rehabilitation, equal possibilities and social integration of all persons with disabilities. CBR program is implementing with joint effort of persons with disabilities, their families, community and related health, educational and social institutions. Before the 1992, medical rehabilitation in Bosnia and Herzegovina had been provided at the level of institutions, usually after the hospital or ambulant treatments. Model of Community Based Rehabilitation (CBR), which practically tested in all parts of Bosnia and Herzegovina, suggests numerous advantages when compared to the previous period, until 1992. There is large number of health and educational institutions in the Canton of Sarajevo which are working on re/habilitation and education of children and adolescents with disabilities, but there isn't unique database about the people with disabilities, as well as with cerebral palsy. Lack of unique database indicates poor network among the institutions and Associations in the Canton of Sarajevo. A number of issues arise from the study "Family Quality of Life: Adult School Children with Intellectual Disabilities". Four of the seemingly most important are: lack of organized community services for adults after they leave school; lack of a cantonal, state, or federal registration program that would improve coordination of health and social services and link to the European Register; necessity of conducting continuous education for the teaching staff at schools regarding effective curricula, for parents, and for health professionals; and the possibility of developing occupational and physical therapy programs for children, adolescent, and adults. The degree to which improvements such as these might affect family quality of life also needs to be examined in future study (Švraka E, Loga S, Brown I, 2011). 
The goals of education and rehabilitation in Bosnia and Herzegovina, similar to most other countries of the world, are to work toward community inclusion, acceptance of diversity, optimal physical and mental health, and personal and social well-being. The focus on family quality of life is a step toward understanding how we can move closer to achieving these goals (Švraka, Loga, Brown, 2011). The fact that concerns the most in the South-Eastern Europe is that many people with disabilities are isolated in their homes. One reason for this isolation is huge barriers that must face when they try to go out of their homes. Common premises, such as elevators, corridors and passages often are inaccessible. This only reinforces the fact that most laws on accessibility applies only to public buildings, so that investors who invest in private buildings can go unpunished for not fulfilling these regulations (Sestranetz, Adams, 2006). Cerebral palsy Cerebral palsy (CP) is characterized by nonprogressive abnormalities in the developing brain that create a cascade of neurologic, motor and postural deficit in the developing child. Cognitive, sensory and psychosocial deficits often compound motor impairments and subsequent functioning. Characteristically, the child with CP shows impaired ability to maintain normal posture because of a lack of muscle coactivation and the development of abnormal movement compensations. These compensatory patterns develop in certain muscle groups to maintain upright postures and move against gravity. Hyperactive responses to tactile, visual or auditory stimuli may result in fluctuations of muscle tone that often adversely affect postural control and further diminish coordinated responses in everyday activities (Rogers, Gordon, Schanzenbacher, Case-Smith, 2001). Cerebral palsy (CP) occurs at present in about 2,2 per 1000 live born children in Sweden. Epilepsy occurs in 15% to more than 60% of children with CP, depending on the type of CP and the origin of the series, compared with 0,5% in the general population (Carlsson, Hagberg, Olsson, 2003). According to the time of influence, causes of cerebral palsy can be divided to prenatal (from conception until beginning of the delivery), perinatal (beginning of the delivery until age of 28 days) and postnatal (from 29 th day of age until two years of age). The majority of international studies indicates that the prevalence of the cerebral palsy is about 2-2,5 cases per 1000 born, although there are some reports about lower and higher prevalence rates (Nordmark, Hagglund, Lagergren, 2001). Occupational therapy for persons with cerebral palsy in the Canton of Sarajevo The research was conducted through Project: "Occupational therapy for persons with cerebral palsy", in homes of participants. The aim was to determine accessible housing for persons with cerebral palsy. The client was Association of persons with cerebral palsy in the Canton of Sarajevo. The Association includes 315 members. Of that number, 123 (47,13%) are children and adolescents, age 4 up to 20 years, and 138 (52,87%) are adults. Nine participants had private houses, and 21 were living in flats. The principal measure used was the Environmental Assessment -Home assessment form. The first part should deal with accessibility of the dwelling's exterior, and the second half should be concerned with an assessment of the home's interior. 
During the On-Site visit a tape measure and home assessment form are tools (Schmitz, 1988), translated and modified by the author (Švraka, 2007). The part about accessibility of the dwelling's exterior is made of 36 items: type of home, entrances to building or home, approach to apartment or living area (hallway, steps, door, and elevator). "Inside home" part consists of bedroom, bathroom, living room area, dining room, kitchen, laundry, cleaning, emergency and few other items. The study was approved by parents of children with CP, or adults with CP, and president of the Association of persons with cerebral palsy in the Canton of Sarajevo. Before starting the data collection, the research aim and Environmental Assessment -Home assessment form were explained to parents and they agree to participate by signing consent. Ideally, the physical and occupational therapists should accompany the patient on the home visit. They assume shared responsibility for assessing the patient's functional level at home. Depending of the specific needs of the patient and/or family, a speech therapist, social worker, or nurse also may be included on the home visit (Shmitz, 1988). Research was conducted during 3 months period trough home visits to clients. Basic inclusion criteria were: 1. Association members with severe motor disability, 2. Lower community engagement or majority of clients are not involved in some form of institution, continuous forms of education and/or re/habilitation. Students of Department of physiotherapy, at Faculty of Health Studies in Sarajevo, who have completed their course of studies, apply the Environment assessment -Home assessment form in patient's home as part of practical education, in an environment that does not have the occupational therapy program. Supervision was performed by assistant professor of Faculty of Health Studies. Based on the initial assessment of the patient in the house/ On-site assessment, individual therapeutic programs/interventions were made in order to improve occupational performance. 1. Interventions which changed requirements of occupation was bringing large gymnastic ball in the home of all 30 patients. 2. Interventions that want to affect the environment, followed after the evaluation. In cooperation with the police and local community, students were working on improvement of accessibility: free parking places in front of the building, entrance ramps, accessible elevators. 3. Interventions that want to improve the ability of the person was the education in certain exercises for the improvement and preservation of posture, balance, coordination, increase the mobility and prevention of deformities deterioration, which influenced the personal competencies, i.e. skills related to motor performance, sensor abilities, cognitive ability and general health condition. People with CP can lead active lives and make a valuable contribution to society. Art workshop of the Association of persons with cerebral palsy in the Canton of Sarajevo consists of 9 female members, 7 with CP and 2 with paraplegia. Middle age is 37,7 years; two youngest members are 27 years old, and oldest one is 58 year old. Five members use wheelchairs (3 with CP and 2 with paraplegia), one cane, and three of them are walking independently. It is necessary to reduce the numbers of sheltered workshops, and develop supported employment and self-employment, in other to reduce segregation of persons with disabilities and give support to social inclusion. 
Assistive technology Assistive technology (AT) is an umbrella term for a wide range of products. A commonly accepted definition is "any item, piece of equipment or product system whether acquired commercially off the shelf, modified or customized that is used to increase, maintain or improve functional capabilities of individuals with disabilities" (US Statute, 1988). Therefore in terms of devices or equipment it includes from walking sticks to environmental control systems (ECS), or simple dressing aids to communication aids (Cowan & Wintergold, 2007). Assistive devices include ortho-prosthetic devices, wheel chairs, walking aids, technical aids and adapted controls for cars. Adequate assistive devices are often financially inaccessible to many users because of their high cost despite the fact that they should be covered by social and insurance schemes. Under the current system, most assistive devices are covered only partially by the state and require user co-payments, which can be exorbitant in cost. Within the socialist system, assistive devices were generally provided for free within the public health care system. This is a crucial issue in South East Europe as one of the largest barriers to accessing assistive devices is financial. Ortho-prosthetic devices are partially subsidized by the state and in most countries, co-payments have been set up but the financial burden is still heavy, especially for mid to low-income households. For example, in Bosnia and Herzegovina, co-payments can range from 10-50%, which can range from EUR 100-1,000 depending on the device. In the UN administered province of Kosovo there is an absence of a health care financing system so patients must pay the full price for their wheelchairs or other devices (Handicap International, 2004). Mechanical assistive technology includes equipment such as manual wheelchair, postural management equipment, equipment for active exercise, protective devices, orthoses and aids for daily living. Provision of mechanical AT for children has its own unique challenges. Children are constantly changing as they grow and their abilities change and develop. Equipment therefore needs to be chosen with these aspects in mind. Adjustable equipment enables changes to be made according to a child's needs. Adjustability within a device does tend to make equipment heavier, more complex and expensive but it will last longer and may be adjusted to fit the constantly changing needs of a child (Cowan & Wintergold, 2007). The study of the influence of prenatal etiological factors on learning disabilities of children and adolescents with cerebral palsy in the Canton of Sarajevo was conducted with sample of 80 participants, children and adolescents with cerebral palsy in the Canton of Sarajevo, age from 6 up to 20 years; 25 children (age 6-11), and 75 adolescents (age 12-20). Mean age was 13,94 years, 47 male (58,75%) and 33 (41,25%) female. The sample was divided in two subgroups, first includes 30 participants whose mothers had problems during the pregnancy, and second includes 50 participants whose mothers didn't have problems during the pregnancy. Of 33 children with cerebral palsy and epilepsy, 14 (42,4%) were able to walk independently, 1 (3%) child needs to hold a mother's or friend's hand, 2 (6%) children walks with assistive device (walker), and 16 (48,5%) children were unable to walk, in need of wheelchair. In the group of 30 participants, with illnesses during pregnancy, 13 (43,3%) were in need of wheelchair, and 17 (56,7%) were not. 
In the group of 50 participants, without illnesses during pregnancy, 21 (42%) were in need of wheelchair, and 29 (58%) were not. Accessible home Accessibility for all is a fundamental right, and any environmental barrier which denies access and free movement for persons with disabilities and other persons with reduced mobility is and must be recognized as discrimination (Howitt, 2003). An accessible home is a pre-condition for independent living or self-determined living as it enables individuals to do what they need and desire to do as independently as possible within their living space. This definition is addressed to all people meeting difficulties in performing daily activities at home as a result of a disability. It means that not only people with physical disabilities, people who we automatically have in mind when talking about accessibility, but also people with sensory or intellectual disabilities or even elderly people who might have lost certain capacities and therefore meet obstacles in their homes -all need accessible housing. Another common problem people face in the region when adapting an inaccessible dwelling is that there are no services available to provide guidance and consultation on making the adaptations. In Calgary Canada, there is an Accessible Housing Society providing consultation services to people who wish to adapt their home. With this service, an occupational therapist and an architect visit individual homes to assess what needs to be adapted to suit the needs of the person and then draw up plans for modifications. They also provide information such as names of the relevant vendors and contractors, accessibility products and standards. There is no charge for the service if the client qualifies for income-tested government funding programs that include: Residential Access Modification Program, Residential Rehabilitation Assistance Program, Home Adaptations for Senior's Independence under the Alberta government housing support programs. Under these programs, applicants who qualify receive a grant to make proper adaptations. The government housing support programs contain an accessible housing registry for people seeking barrier-free dwellings. This registry refers clients to available accessible housing while documenting housing needs for future planning and construction (Disability Monitor Initiative South East Europe, 2007). Exterior accessibility The entrance should be well lighted and provide adequate cover from adverse weather conditions. If a ramp is to be installed, there should be adequate space. The recommended grade for wheelchair ramps is 12 inches in ramp length for every inch of threshold height. Ramps should be a minimum of 48 inches (121,9 cm) wide with a nonslip surface. Handrails also should be included on the ramp, 32 inches (81,3 cm) in height and extend 12 inches (30,5 cm) beyond the top and bottom of the ramp (Schmitz, 1988). Seven studies focused on aspects of the physical environment as it relates to accessibility issues and wheelchair accident. The most wheelchair accidents occur outdoors or on ramps. There remains a need for public buildings to implement barrier-free access changes for wheelchair users. Wheelchair users voiced concern about not being included in decisions regarding the design ( If there is raised threshold in the doorway, it should be removed. 
If removal is not possible, the threshold should be lowered to no greater than 0.5 inch (1.27 cm) in height, with beveled edges (Schmitz, 1988). In the homes visited, raised doorway thresholds (three of them, and one made of marble) measured 1 cm to 7 cm in height. Interior accessibility Inaccessible buildings and rooms crowded with furniture limit how children in wheelchairs move throughout the environment. Differences in the terrain or room surface also affect mobility. For example, a child who can run outdoors on an asphalt playground may trip and fall inside when walking on a rug. Other physical characteristics that the occupational therapist assesses relate to the type of furniture, objects, or assistive devices in the environment and whether they are usable and accessible. This includes the type of equipment, household items, clothing or toys. Sensory aspects of the physical environment often influence performance, e.g. the type of lighting, noise level, visual stimulation, and tactile or vestibular input of tasks (Shepherd, 2001). Sufficient room should be made available for maneuvering or ambulating with an assistive device. Clear passage must be allowed from one room to the next. Unrestricted access should be provided to electrical outlets, telephones and wall switches. All floor coverings should be glued or tacked to the floor. This will prevent bunching or rippling under wheelchair use. Scatter rugs should be removed. Use of nonskid waxes should be encouraged. Raised thresholds should be removed to provide a flush, level surface. Doorways may need to be widened to allow clearance for a wheelchair or assistive device. Doors may have to be removed, reversed, or replaced with curtains or folding doors. All indoor stairwells should have handrails and should be well lighted. For patients with decreased visual acuity or age-related visual changes, contrasting textures on the surface of the top and bottom stair(s) will alert them that the end of the stairwell is near. A circular band or tape can also be placed at the top and bottom of the handrail for the same purpose (Schmitz, 1988). Bedroom The bed should be stationary and positioned to provide ample space for transfers. Stability may be improved by placing the bed against the wall or in the corner of the room. The height of the sleeping surface must be considered to facilitate transfer activities. The mattress should be carefully assessed; it should provide a firm, comfortable surface. If the mattress is in relatively good condition, a bed board inserted between the mattress and box spring may suffice to improve the sleeping surface adequately. If the mattress is badly worn, a new one should be suggested. A bedside table or cabinet might be suggested; it will be useful to hold a lamp, a telephone, necessary medications, and a call bell if assistance is needed (Schmitz, 1988). The range of the height of the bed of children was 40 cm to 50 cm. The range of the height of the bed of adolescents was 20 cm to 55 cm. The range of the height of the bed of adults was 21 cm to 60 cm. The range of the width of the bed was 55 cm to 220 cm. The width of the bed of 5 participants was 100 cm. The range of the width of the bed of children was 55 cm to 162 cm. The range of the width of the bed of adolescents was 68 cm to 170 cm. The range of the width of the bed of adults was 90 cm to 220 cm. Bathroom If the door frame prohibits passage of a wheelchair, the patient may transfer at the door to a chair with casters attached. An elevated toilet seat will facilitate transfer activities (Schmitz, 1988).
Special equipment that gives support can help the child feel safe and secure. Bath hammocks fully hold the body and enable the parent to wash the child thoroughly. A simple, inexpensive way of giving security is to use a plastic laundry basket lined with foam at its bottom. Commercially, a light, inconspicuous bath support offers good design features. The front half of the padded support ring swings open for easy entry and then locks securely, holding the child at the chest to give trunk stability. Various kinds of bath seats and shower benches are available for the older child to aid bathtub seating transfers. For the child with severe motor limitations who is lying supine in the tub in shallow water, a horseshoe-shaped inflatable bath collar serves to support the neck and keep the child's head above water level. A bath stretcher is constructed like a cot and fits inside the bathtub at rim level or mid tub to minimize the caregiver's bending while transferring and bathing the child. The range of toilet seat height was 39 cm to 45 cm. Toilet seat height for twelve clients (40%) was 40 cm, for 7 (23.3%) it was 39 cm, for 4 clients (13.3%) it was 41 cm, for another 4 it was 42 cm, for one 43 cm, and for one more it was 45 cm. Kitchen The height of counter tops (work space) should be appropriate for the wheelchair user; the armrests should be able to fit under the working surface. The ideal height of counter surfaces should be no greater than 31 inches (79 cm) from the floor, with a knee clearance of 27.5 inches (69.8 cm) to 30 inches (76.2 cm). Counter space should provide a depth of at least 24 inches (61 cm). All surfaces should be smooth to facilitate sliding of heavy items from one area to another. Slide-out counter spaces are useful in providing an over-the-lap working surface. For ambulatory patients, stools (preferably with back and foot rests) may be placed strategically at the main work area(s) (Schmitz, 1988). The sink may be equipped with large blade-type handles, and a spray-hose fixture often provides improved function. A shallow sink, 5 to 6 inches (12.7 cm to 15.2 cm) in depth, will improve knee clearance below. As in the bathroom, hot-water pipes under the kitchen sink should be insulated to prevent burns (Schmitz, 1988). Equipment and food storage areas should be selected with optimum energy conservation in mind. All frequently used articles should be within easy reach, and unnecessary items should be eliminated. Additional storage space may be achieved by installation of open shelving or use of peg boards for pots and pans. If shelving is added, adjustable shelves are preferable and should be placed 16 inches (41 cm) above the counter top (Schmitz, 1988). Conclusion Based on the results of the evaluation with the Environmental Assessment - Home assessment form, the study Occupational therapy for persons with cerebral palsy, in the Canton of Sarajevo, made proposals for changes in the environment to improve the accessibility of housing. Educating persons with cerebral palsy and members of their families in specific exercises to improve and preserve posture, balance and coordination, increase the range of mobility and prevent the deterioration of deformities had an impact on personal competencies, i.e. skills related to motor performance, sensory capabilities, cognitive ability and general health. Private homes need to be converted according to the individual needs of tenants.
As for individual adaptation, arranging private space so that it is accessible requires precise planning according to the needs of the person. A multidisciplinary team should lead that planning and find design solutions that overcome the problem of architectural barriers for people with disabilities, in order to improve their quality of life. Ideally, the physical and occupational therapists should accompany the patient on the home visit. They assume shared responsibility for assessing the patient's functional level at home. Depending on the specific needs of the patient and/or family, a speech therapist, social worker, or nurse may also be included on the home visit. It is necessary to open services or counseling centers for accessible housing. As part of these services, an occupational therapist and an architect should visit the homes of persons with disabilities and assess what needs to be adapted to meet the needs of that person. The significance of this research for the community is multiple: educational, scientific, humane and promotional. The results provide a basis for further research into the needs of these families and the improvement of their quality of life. Summary Accessible design generally refers to houses or other dwellings that meet specific requirements for accessibility. The laws dictate standard dimensions and characteristics for such features as door widths, clear space for wheelchair mobility, audible and visual signals, grab bars, switch and outlet height, and more. The research was conducted through the project "Occupational therapy for persons with cerebral palsy", in the homes of participants. The aim was to determine accessible housing for persons with cerebral palsy. The principal measure used was the International Environmental Assessment - Home assessment form. The first part deals with accessibility of the dwelling's exterior, and the second half is concerned with an assessment of the home's interior. During the on-site visit, a tape measure and the home assessment form are the tools (Schmitz, 1988), translated and modified by the author (Švraka, 2007). Assistive devices include ortho-prosthetic devices, wheelchairs, walking aids, technical aids and adapted controls for cars. Adequate assistive devices are often financially inaccessible to many users because of their high cost, despite the fact that they should be covered by social and insurance schemes. Under the current system, most assistive devices are covered only partially by the state and require user co-payments, which can be exorbitant. Within the socialist system, assistive devices were generally provided for free within the public health care system. This is a crucial issue in South East Europe, as one of the largest barriers to accessing assistive devices is financial. Of 30 persons with cerebral palsy, 20 (66.7%) use wheelchairs, 7 (23.3%) have independent mobility, and 3 (10%) persons require the use of a particular device. The client with triparetic CP uses a walking tripod. For 14 patients (46.7%) the stove switches are inaccessible, which means they cannot use them, and for 10 (33.3%) they are not. Four patients (13.3%) do not use the kitchen, and two patients (6.7%) did not give an answer. Eleven patients (36.7%) can operate stove doors, 12 (40%) cannot, 4 (13.3%) do not use the kitchen, and 3 (10%) of patients did not answer. Ideally, the physical and occupational therapists should accompany the patient on the home visit. They assume shared responsibility for assessing the patient's functional level at home.
Depending on the specific needs of the patient and/or family, a speech therapist, social worker, or nurse may also be included on the home visit. It is necessary to open services or counseling centers for accessible housing. As part of these services, an occupational therapist and an architect should visit the homes of persons with disabilities and assess what needs to be adapted to meet the needs of that person. The results provide a basis for further research into the needs of these families and the improvement of their quality of life.
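The dimensional guidelines quoted in the Exterior accessibility and Kitchen sections (the 12:1 ramp grade, the minimum ramp width, the maximum counter height and the knee-clearance band) lend themselves to a simple computational check. The following is a minimal, hypothetical Python sketch of such a check; the numeric limits are the figures cited in the text (Schmitz, 1988), while the function and variable names are illustrative assumptions and not part of any published assessment instrument.

# Hypothetical helper for checking a dwelling against the dimensional
# guidelines quoted above; values and names are illustrative assumptions.

def required_ramp_length_cm(threshold_height_cm: float) -> float:
    """12 units of ramp length per unit of rise, i.e. a 12:1 grade."""
    return 12.0 * threshold_height_cm

def check_ramp(ramp_width_cm: float) -> list[str]:
    issues = []
    if ramp_width_cm < 121.9:          # minimum width: 48 in (121.9 cm)
        issues.append("ramp narrower than 121.9 cm")
    return issues

def check_kitchen(counter_height_cm: float, knee_clearance_cm: float) -> list[str]:
    issues = []
    if counter_height_cm > 79.0:       # counter no higher than 31 in (79 cm)
        issues.append("counter top higher than 79 cm")
    if not (69.8 <= knee_clearance_cm <= 76.2):  # knee clearance 27.5-30 in
        issues.append("knee clearance outside 69.8-76.2 cm")
    return issues

if __name__ == "__main__":
    print(required_ramp_length_cm(7.0))   # a 7 cm threshold needs about 84 cm of ramp
    print(check_kitchen(85.0, 65.0))      # flags both counter height and knee clearance

Such a check only flags departures from the quoted figures; as the text stresses, the actual adaptation plan still has to be drawn up for the individual person by the multidisciplinary team.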
2018-12-05T20:14:43.094Z
2014-01-01T00:00:00.000
{ "year": 2014, "sha1": "6dad9b9234cd30697925f4775ebef34113bbd31f", "oa_license": "CCBY", "oa_url": "https://cdn.intechopen.com/pdfs/45754.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "3a33e65ea94c879951407ed1e2723df1fc942e18", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Political Science" ] }
238999740
pes2o/s2orc
v3-fos-license
Observation on Curative Effect and Adverse Reaction of Acupuncture and Moxibustion in Treating Chronic Functional Constipation Objective — To investigate the efficacy of acupuncture and moxibustion in the treatment of chronic functional constipation and the observation of adverse reactions in patients. Methods — A total of 88 patients with chronic functional constipation who were treated from June 2019 to March 2021 were selected as the subjects, and the patients were divided into control group with 44 patients and observation group with 44 patients according to a random number table.The control group was given conventional western medication, and the observation group was given acupuncture based treatment. The scores for clinical symptom and the incidence of adverse reactions were compared between the two groups after treatment. Results — After treatment, the symptom scores of frequency of abdominal pain, incomplete sense of defecation, abdominal distension and difficulty in defecation in the observation group were all lower than those in the control group (P < 0.05). After treatment, the total incidence of adverse reactions of abdominal distension, spasmodic abdominal pain, borborygmus and dizziness in observation group was 4.55%, which was significantly lower than that in control group, 22.73% (P < 0.05). Conclusion — Acupuncture and moxibustion therapy is more effective than conventional western medicine therapy in treatment of chronic functional constipation,which effectively improve the clinical symptoms of patients, reduce the occurrence of adverse reactions, as as result, it is worthy of promotion and universal application. Compared with organic constipation, functional constipation mainly refers to abnormal humoral regulation or neuroregulation caused by poor defecation, leading to changes in intestinal and gastrointestinal functions, which are manifested as difficult defecation, reduced frequency of defecation, long duration of defecation, endless frequency of defecation, etc., seriously affecting the work and life of patients [1] . Functional constipation not only affects the patient's appetite and mental state, but also affects the patient's quality of life. With the continuous development of medical technology, acupuncture and moxibustion has been widely used in the treatment of patients with functional constipation with ideal therapeutic effects [2] , but few relevant studies were conducted. Therefore, this study conducted a study on patients with chronic functional constipation under treatment, and discussion was conducted on the efficacy of acupuncture and moxibustion in the treatment of chronic functional constipation and the observation of adverse reactions in patients, which is reported as follows. Clinical data A total of 88 patients with chronic functional constipation who were treated from June 2019 to March 2021 were selected as the subjects, and the patients were divided into control group with 44 patients and observation group with 44 patients using a random number table. In the control group, there were 19 males and 25 females, aged (43-76) years old, with an average of (62.49±5.71) years old. The course of disease was (6d-3) years, with an average of (1.35±0.44) years. The body mass index was (20-26) kg/m 2 , with an average of (23.10±0.71) kg/m 2 . In observation group, there were 20 males and 24 females, aged (45-75) years old, with an average of (61.81±5.51) years old.The course of disease was (7d-3) years, with an average of (1.69±0.25) years. 
The body mass index was (21-25) kg/m², with an average of (23.46±0.78) kg/m². Inclusion and exclusion criteria Inclusion criteria: (1) All met the clinical diagnostic criteria for functional constipation, with clinical symptoms such as reduced frequency of defecation, labored defecation, and a sense of incomplete defecation [3]; (2) The condition was stable without other cardiovascular diseases. Exclusion criteria: (1) Patients with other immune diseases, malignant tumors or incomplete medical records; (2) People with infectious diseases. Methods The control group was given oral cisapride tablets (State Food and Drug Administration Approval Number: H20020345, manufacturer: Zhejiang Jingxin Pharmaceutical Co., Ltd.), with a daily total of 15-30 mg, taken 2-3 times before meals or 15 min before going to bed, and 3 weeks as a course of treatment, according to the severity of the patient's condition. The observation group was treated with acupuncture and moxibustion, with the Tianshu, Zhigou, Yaodi, Erbai, Shangjuxu, Dachangshu and Zusanli acupoints selected. For the Remi (heat-type) pattern, the Hegu and Quchi acupoints were added; for the Qimi (qi-type) pattern, the Zhongwan and Qihai acupoints were selected; and for the Lengmi (cold-type) pattern, the Shiguan and Zhaohai acupoints were selected. According to the deficiency or excess pattern of the patients, acupuncture and moxibustion were performed by the twirling-tonifying method, the reducing method, or the even reinforcing-reducing method, respectively. For the deficiency-cold type, warm acupuncture and moxibustion were used, leaving the needle in place for 30 minutes, once a day, with 10 sessions as a course of treatment and an interval of 2 to 3 days between courses. After 3 courses of treatment, the curative effect was statistically analyzed. Observation Indicators (1) Scores for clinical symptoms: frequency of abdominal pain, feeling of incomplete defecation, abdominal distension and difficulty in defecation were observed in the two groups. A 4-grade scoring method was adopted; the higher the score, the more serious the clinical symptoms [4]. (2) Incidence of adverse reactions. The incidence of abdominal distension, spasmodic abdominal pain, borborygmus, dizziness and other adverse reactions in patients of the two groups after treatment was recorded and statistically analyzed. Statistical analysis SPSS 21.0 software was used for processing. Count data were expressed as n (%) and compared with the χ² test, while measurement data were expressed as (x̄ ± s) and compared with the t test. A difference was considered statistically significant at P < 0.05. Comparison of scores for clinical symptoms between the two groups There was no significant difference in clinical symptom scores between the two groups before treatment (P > 0.05). After treatment, the symptom scores for frequency of abdominal pain, incomplete defecation, abdominal distension and difficulty in defecation in the observation group were all lower than those in the control group (P < 0.05), as shown in Table 1. Comparison of the incidence of adverse reactions between the two groups After treatment, the total incidence of adverse reactions of abdominal distension, spasmodic abdominal pain, borborygmus and dizziness was 4.55% in the observation group, which was significantly lower than that in the control group, 22.73% (P < 0.05), as shown in Table 2. Discussion With the rapid development of society, the incidence of functional constipation is increasing year by year due to factors such as a fast pace of life and irregular diet.
The incidence of this disease is high among the elderly population, with complex pathogenesis, and the inducing factors are mostly related to the poor living habits, dietary structure and physical factors of the elderly [5]. Functional constipation in young and middle-aged adults can be alleviated by improving living and dietary habits, while functional constipation in the elderly is stubborn and prone to recurrent episodes, and is difficult to alleviate by improving living and dietary habits alone. Therefore, reasonable clinical treatment is required [6]. Traditional Chinese medicine holds that when the disease involves the stomach and intestine, Neijie (internal binding), qi deficiency, blood deficiency, intestinal dryness or other such disorders, constipation will occur. Its pathogenic site is in the intestine, with various etiologies, but all cases are caused by conduction dysfunction of the large intestine. Treatment should therefore aim at regulating the fu qi and relaxing the bowels to promote defecation. The Tianshu acupoint is closely related to the large intestine, and can unblock the Yangming qi and alleviate stagnation. The Shangjuxu (upper Juxu) acupoint is the lower he-sea point, which regulates the stomach and promotes the Sanjiao qi mechanism. The Tianshu acupoint can regulate spleen and stomach function and increase intestinal peristalsis. The Zhigou and large intestine acupoints can adjust the Sanjiao qi mechanism and free the meridians. The Yaoqi and Erbai acupoints are extra points, which can improve constipation and hemorrhoids. The shu point of the large intestine and the Quchi point, together with the yuan-source point, can play the role of clearing the fire of the stomach and intestines, reducing heat and relieving constipation. Zhongwan, the meeting point of the fu organs, together with the Qihai point, can pass down the fu qi; the Pishu point and Weishu point together, with tonification, can regulate the spleen and stomach. The Shiguan acupoint and Zhaohai acupoint together can connect the interior and exterior, and clear and reduce turbidity. Warm acupuncture and moxibustion can effectively enhance the warming and tonifying effect [7]. In this study, after treatment, the symptom scores of frequency of abdominal pain, incomplete defecation, abdominal distension and difficulty in defecation in the observation group were all lower than those in the control group (P < 0.05), indicating that acupuncture and moxibustion treatment of chronic functional constipation can effectively improve the clinical symptoms of patients and promote recovery. Among the conventional western medicines, cisapride is a gastrointestinal motility drug which can selectively promote the release of acetylcholine at the intestinal myenteric plexus, thus enhancing gastrointestinal motility and playing a certain role in the treatment of constipation. However, it can cause varying degrees of adverse reactions in patients [8]. In this study, the total incidence of adverse reactions of abdominal distension, spasmodic abdominal pain, borborygmus and dizziness in the observation group after treatment was 4.55%, significantly lower than the 22.73% of the control group (P < 0.05), indicating that acupuncture treatment of chronic functional constipation is safer than conventional medication, produces fewer adverse reactions, and can effectively improve patients' treatment compliance.
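As a reading aid only, the group comparison reported above (a χ² test on adverse-reaction counts and t tests on symptom scores, per the Statistical analysis subsection) can be sketched in a few lines of Python. The counts below are reconstructed from the reported percentages (2/44 versus 10/44 adverse reactions); everything else, including the function and variable names, is an illustrative assumption rather than the authors' actual SPSS procedure.

# Illustrative re-computation of the adverse-reaction comparison (assumed counts
# 2/44 vs 10/44, matching the reported 4.55% and 22.73%); not the authors' SPSS run.
from scipy import stats

observation = {"n": 44, "adverse": 2}    # 2/44 = 4.55%
control     = {"n": 44, "adverse": 10}   # 10/44 = 22.73%

table = [
    [observation["adverse"], observation["n"] - observation["adverse"]],
    [control["adverse"],     control["n"] - control["adverse"]],
]
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")   # p < 0.05, consistent with the reported significance

# Symptom scores (mean +/- SD per group) would be compared with an independent t test:
# stats.ttest_ind_from_stats(mean1, sd1, n1, mean2, sd2, n2)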
In conclusion, the curative effect of acupuncture on chronic functional constipation is more pronounced than that of conventional western medicine; it can effectively improve the clinical symptoms of patients and reduce the occurrence of adverse reactions, and as a result it is worthy of promotion and application.
2021-10-15T15:18:30.065Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "6c8eb1269737fc96a24bdb13afc77be5a45fe674", "oa_license": "CCBYNC", "oa_url": "https://en.front-sci.com/index.php/jcmr/article/view/456/546", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "530e6614d74231bccf6f4705a2c81e42a3cdf277", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
1640480
pes2o/s2orc
v3-fos-license
The Enigmatic Conservation of a Rap1 Binding Site in the Saccharomyces cerevisiae HMR-E Silencer Silencing at the HMR and HML loci in Saccharomyces cerevisiae requires recruitment of Sir proteins to the HML and HMR silencers. The silencers are regulatory sites flanking both loci and consisting of binding sites for the Rap1, Abf1, and ORC proteins, each of which also functions at hundreds of sites throughout the genome in processes unrelated to silencing. Interestingly, the sequence of the binding site for Rap1 at the silencers is distinct from the genome-wide binding profile of Rap1, being a weaker match to the consensus, and indeed is bound with low affinity relative to the consensus sequence. Remarkably, this low-affinity Rap1 binding site variant was conserved among silencers of the sensu stricto Saccharomyces species, maintained as a poor match to the Rap1 genome-wide consensus sequence in all of them. We tested multiple predictions about the possible role of this binding-site variant in silencing by substituting the native Rap1 binding site at the HMR-E silencer with the genome-wide consensus sequence for Rap1. Contrary to the predictions from the current models of Rap1, we found no influence of the Rap1 binding site version on the kinetics of establishing silencing, nor on the maintenance of silencing, nor the extent of silencing. We further explored implications of these findings with regard to prevention of ectopic silencing, and deduced that the selective pressure for the unprecedented conservation of this binding site variant may not be related to silencing. HMR-E silencer even after mutating of any one of the three binding sites (Brand et al. 1987). Synthetic silencers are more sensitive to the loss of silencing function upon mutation of individual silencer elements (McNally and Rine 1991;Weber and Ehrenhofer-Murray 2010). Furthermore, arrays of Rap1 binding sites are able to establish Sirbased silencing at telomeres, and synthetic arrays of Rap1 binding sites exhibit weak silencing ability (Cockell et al. 1995;Hecht et al. 1996;Stavenhagen and Zakian 1998). The proteins that directly bind silencer DNA sequences all have additional individual roles in euchromatin. The Abf1 and Rap1 proteins are transcription factors involved in regulating the transcription of hundreds of genes (Lee et al. 2002;Lieb et al. 2001). The Origin Recognition Complex (ORC) binds to each of the approximately 400 yeast origins of replication and plays an essential role in initiating DNA replication (Breier et al. 2004;Poloumienko et al. 2001;Wyrick et al. 2001). Genome-wide binding profiles of the Sir2, Sir3, and Sir4 proteins and expression profiles of sir2 mutants indicate that they typically are present only at HML, HMR, and chromosome ends, where silencing also takes place (Barton and Kaback 2006;Gottschling et al. 1990;Lieb et al. 2001;Vega-Palas et al. 2000;Wyrick et al. 1999). Silencing appears to be tightly restricted to the aforementioned regions, and reports of Sir-mediated silencing of euchromatic genes have proven unreliable (Marchfelder et al. 2003). The lack of concordance in the genomic distributions of ORC, Rap1, and Abf1 and of the Sir complex raises two questions that are fundamental to the organization of euchromatin and heterochromatin: (1) How does the cell prevent ectopic silencing from happening throughout the genome, for example, wherever Rap1 and Abf1 bind? 
(2) What is special about the binding sites for these proteins or their organization in the silencers that results in the recruitment of Sir proteins instead of transcription proteins or DNA replication components? We addressed these questions by perturbation of the HMR-E silencer in S. cerevisiae, by studying the evolution of silencers across the closely related sensu stricto species, and by analyzing the genomic distributions of the individual binding sites of Rap1 and Abf1 in budding yeasts. In particular we asked whether the ultra-conserved Rap1 binding site in silencers, if substituted with the genome-wide consensus binding site of Rap1, could maintain its native level of performance in silencing. Strains and primers Yeast strains and primers used in this study are listed in Table 1 and Table 2. Strain construction Site-directed mutagenesis (Goldstein and McCusker 1999; Longtine et al. 1998) was used to replace the AAACCCATCAACC HMR-E native Rap1 binding site with the genome-wide Rap1 consensus sequence: ACACCCATACATT. DNA sequencing confirmed the changes. To introduce this altered HMR-E sequence into the genome, the native HMR-E was first replaced with the Kluyveromyces lactis URA3 (pUG72) in JRY3009 (Gueldener et al. 2002), resulting in isogenic HMR-eΔ::KL_URA3 strains (JRY8991, JRY8992). The HMR-E sequence containing the Rap1 consensus sequence was then transformed into this strain and successful replacements were identified by counter-selection against URA3 using 5-fluoroorotic acid, producing the consensus-Rap1-HMR-E strains (JRY8994, JRY8995). Correct integration was confirmed by PCR and sequencing with primers flanking HMR-E. The sir1 mutant allele was generated by replacing all but 12 codons of the SIR1 ORF with the Kluyveromyces lactis URA3 [pUG72 (Gueldener et al. 2002)]. The resulting sir1 mutants phenocopied cells with the sir1 null. Rap1 was tagged on the C-terminus with 13xMyc::KanMX (Longtine et al. 1998) and transformed into JRY2334. This strain was crossed into JRY8994 to create JRY9021 and JRY9023. Because Rap1 is essential, the viability of cells with only the tagged form of Rap1 established that the tagged Rap1 was functional. Assay for the establishment of silencing The parental strain (JRY3009) and the two independent mutant strains bearing the same consensus-Rap1-HMR-E mutations (JRY8994, JRY8995) were grown overnight in the presence of 10 mM nicotinamide, a potent inhibitor of the Sir2 deacetylase, in 100 mL of rich medium at 30° to a density of approximately 2 × 10⁷ cells per milliliter. Each of the three cultures was harvested by centrifugation, and the medium containing nicotinamide was removed and replaced with 100 mL of rich medium. Immediately after resuspending the cells, 10-mL samples of each culture were pelleted, frozen in liquid nitrogen, and then stored at −80°. This sample represented time point 0, post-nicotinamide. Subsequently, after 7, 17, 24, 32, and 45 min of incubation at 30°, aliquots of cells were collected in the same manner. The 10-mL samples were extracted without dilution of the main cultures. An additional sample of silenced cells (JRY3009), grown overnight without nicotinamide, served as a reference for fully-silenced a1 levels. After collection, all samples were processed with the QIAGEN RNeasy Kit to extract the RNA (mechanical disruption protocol with on-column DNase digestion).
Oligo-dT primer-directed cDNA was then synthesized. Abf1 and Rap1 binding site conservation Potential proto-silencers of Saccharomyces, defined as intergenic regions in which the Rap1 and Abf1 binding sites occur within 50 base pairs of each other, were identified using the map of Rap1 and Abf1 binding site matches from published work (MacIsaac et al. 2006). The percent of all binding-site matches conserved in three or more sensu stricto species was calculated for the binding sites genome-wide and in the proto-silencers, again using the conservation data from the aforementioned study. Abf1 and Rap1 binding site frequency by transcription-factor-specific intergenic regions Transcription-factor-specific intergenic regions were defined based on a previously performed chromatin immunoprecipitation (ChIP)-chip dataset (Harbison et al. 2004). Only factors with P < 0.05 binding to 60 or more distinct intergenic regions were considered. For each transcription factor, we identified Abf1 binding site matches by using PATSER (Hertz and Stormo 1999), by searching with the Abf1 position weight matrix (Harbison et al. 2004) in all of the intergenic regions bound by the given factor. Matches with PATSER p-value < 10⁻⁹ were selected. Abf1-site frequency per transcription factor was calculated as the total number of PATSER matches, divided by the sum of the lengths of the intergenic regions bound by the transcription factor, and multiplied by 10,000 (resulting in the number of Abf1 matches per 10 kilobases of sequence). The same approach was used for Rap1 binding-site frequencies. Binding-site profiles for Rap1 and Abf1 in sensu stricto species For the sensu stricto species analysis, orthologous intergenic regions were identified by best-reciprocal-BLAST hits of the two flanking genes between S. cerevisiae and each of the other four species. For each species individually, motif searches were performed with MEME (Bailey and Elkan 1995) on orthologous regions corresponding to S. cerevisiae Rap1- or Abf1-bound intergenic regions, based on the ChIP-chip dataset (Harbison et al. 2004). Graphical position-weight matrices were constructed from the MEME matches with WebLogo (Crooks et al. 2004). Ultraconservation of the HMR-E Rap1 binding-site variant in sensu stricto species The Rap1 binding site in the HMR-E silencer is a poor match to the typical sequence that Rap1 binds, and the in vitro affinity of Rap1 for the silencer version of the Rap1 binding sequence is approximately ten-fold lower than its affinity for the consensus sequence (Taylor et al. 2000). Because the binding sites at HMR-E are partially overlapping for silencing function (Brand et al. 1987; Kimmerly and Rine 1987), the presence of the weak Rap1 binding site per se was not striking. However, this variant became puzzling and conspicuous in the context of the level of divergence of the sequences flanking HML and HMR across S. cerevisiae's closely related sensu stricto species (S. paradoxus, S. mikatae, S. kudriavzevii, S. bayanus). Compared with most of the genome, these sequences evolved much faster within and between species. We identified the conserved HMR-E in these species and found that deletion of the putative silencer from S. bayanus, the most distant of these species from S. cerevisiae, led to loss of silencing, confirming that these sequences had a conserved role in silencing (Teytelman et al. 2008). We then compared the conservation of the HMR-E binding sites for Rap1 and Abf1 to their genome-wide profiles from S.
cerevisiae ChIP-chip studies (Harbison et al. 2004). The Abf1 binding site within silencers closely matched the general profile for Abf1. In contrast, the Rap1 binding site at HMR-E (AAAACCCATCAAC) was virtually invariant among these species, conserved as a poor match to the inferred genome-wide Rap1 binding profiles in each of the sensu stricto species (Figure 1). This level of conservation of the Rap1 binding site was striking in light of the accelerated base-pair substitutions around and between the Rap1 and Abf1 binding sites (Teytelman et al. 2008). In addition, the Rap1 binding site at HML-E (AAAACCCATTCAT) is similar to the HMR-E binding site and is also a weak match to the Rap1 consensus sequence. The apparent constraint on the Rap1 binding site variant at HMR-E strongly suggested that this specific Rap1 binding-site sequence offered some quality to the silencer that closer matches to the consensus sequence could not. Figure 1: Conservation of HMR-E Rap1 and Abf1 binding sites in sensu stricto species. The Abf1 and Rap1 consensus sequences are depicted. Abf1 and Rap1 binding sites at silencers as they occur across all sensu stricto species are shown as compared to the genome-wide consensus sequences from both S. cerevisiae and S. bayanus. Rap1 consensus binding site at HMR-E was fully functional in silencing Piña and colleagues have suggested that the particular site bound by Rap1 may induce the protein into a conformation that is biased either to act as an activator or as a recruiter of Sir proteins (Pina et al. 2003). The latter scenario would be similar to the glucocorticoid receptor binding sites in human cells, which act as ligands to induce site-specific functions of the receptor. The DNA variants of the glucocorticoid receptor binding site impact the conformation and regulatory activity of the receptor, and replacing a weak site with the higher-affinity consensus alters the transcriptional response to the hormone (Meijsing et al. 2009). Given the peculiar conservation of the weak Rap1 binding site at HMR-E, we predicted that the genome-wide consensus binding site for Rap1 would not silence the HMR-a1 gene as effectively as the native Rap1 binding site. Hence, we replaced the HMR-E variant (AAACCCATAAC) with the genome-wide consensus sequence for the Rap1 protein (ACACCCATACATT). At steady state, the levels of silencing in a strain with the native HMR-E were indistinguishable from those in the consensus-Rap1-HMR-E strain (Figure 2A). Because silencing in some contexts is sensitive to carbon sources (Shei and Broach 1995), we also compared the two strains under different carbon sources, and again found no difference (Özaydin 2009). Recent results on the kinetics of the establishment of silencing indicate that several cell divisions are required to achieve full silencing in cells in which silencing had been previously disrupted. Moreover, some mutations can affect the kinetics but not the level of silencing (Katan-Khaykovich and Struhl 2005; Osborne et al. 2009). Steady-state measurements could therefore potentially miss such differences between silencers, particularly because the Rap1 site is important for the initiation of silencing. We tracked the kinetics of establishment of silencing, comparing the native HMR-E and the consensus-Rap1-HMR-E strains in cells previously treated with nicotinamide, which inhibits silencing by disrupting the catalytic activity of Sir2 (Bitterman et al. 2002).
The rates at which silencing was established were indistinguishable between the two strains (Figure 2B). Relative to the strength of silencing at the telomere and at HML, silencing at HMR is both strong and robust, as many of the mutations that affect silencing at the telomere and at HML retain wild-type levels of silencing at HMR. Therefore, a subtle difference in Rap1 binding ability might not result in a loss of silencing at that locus. Still, we were curious whether the consensus Rap1 binding site at HMR-E was capable of recruiting Rap1 protein to the same level as the wild-type HMR-E sequence. To test the level of Rap1 enrichment at HMR-E in the two different strains, we used ChIP for DNA associated with Rap1-Myc in those strains. DNA from these enrichments was amplified at the HMR-E region, at a positive control region of Telomere VI, and at a negative control region, SEN1 (Figure 2C). Figure 2C shows HMR-E DNA recovered from the Rap1-Myc chromatin immunoprecipitation: anti-myc antibody was used to immunoprecipitate DNA cross-linked to Rap1-Myc proteins from strains containing either the native Rap1 binding site at HMR-E (JRY9021 and JRY9022, biological replicates) or the genome-wide consensus Rap1 binding site at HMR-E (JRY9023 and JRY9024, biological replicates), and cells lacking the myc tag were used as a control. There was no detectable difference between the Rap1 enrichment at HMR-E in samples containing the native Rap1 binding site and those with the Rap1 consensus sequence placed at HMR-E. These findings indicate that relative to the positive and negative controls, both Rap1 binding sequences were equally capable of localizing Rap1 to the silencer region and of establishing and maintaining functionally silent chromatin. The Rap1 consensus binding site at HMR-E improved silencing in sir1Δ cells Because both binding sites for Rap1 were capable of mediating silent chromatin formation, those results yielded no insight into what selective pressure may have shaped the HMR-E Rap1 binding site sequences. We reasoned that the selective advantage may only be observable in a sensitized context in which small differences in silencer function may be translated into observable differences. To test for small differences in silencing strengths between the two Rap1 binding sites, we performed reverse transcription qPCR in sir1Δ mutants containing either version of the silencer. The sir1Δ mutation was chosen to optimize the chance for observable phenotypic differences, as this mutation does in other contexts (Osborne et al. 2009; van Welsem et al. 2008). Cells lacking SIR1 have a partial loss of silencing phenotype at HMR, with roughly 50% of the transcript level observed in a sir2Δ strain (Pillus and Rine 1989; Xu et al. 2006). Therefore, slight increases or decreases in expression level could be easily observed. Again, the consensus Rap1 binding site at HMR-E was not defective for silencing. Moreover, the replacement of the native HMR-E Rap1 binding site with the consensus sequence actually improved silencing strength, as indicated by a reduced level of a1 expression in the sir1Δ background (Figure 3). These results clearly established that the particular variant of the Rap1 binding site at HMR-E was not necessary for full silencing function of the HMR locus. Thus, collectively the data indicated that the particular Rap1 binding site at silencers did not act as an allosteric effector of silencing and did not evolve for maximal silencing strength.
No evidence of genome-wide selection against Abf1 and Rap1 binding site co-occurrence The ability of the genome-wide consensus sequence of the Rap1 binding site to function in HMR-E's role as a silencer underscored the question of how the yeast cell prevents spurious silencing in Rap1- and Abf1-bound regions of the genome. We focused on the Rap1 and Abf1 sites, ignoring the Orc1 binding sites [ARS Consensus Sequence (ACS)] for two reasons. First, the requirement for the ACS is imprecisely specified in the four silencers, with a single exact match or multiple near-matches also present, depending on the silencer. Second, the evolutionary spacing across the sensu stricto species between the Rap1 and Abf1 binding sites at HMR-E is known, but how close the ACS has to be to either of those sites is unknown (Teytelman et al. 2008). On the basis of the HMR-E architecture in the sensu stricto, 25 potential proto-silencers of Saccharomyces, defined as euchromatic intergenic regions in which the Rap1 and Abf1 binding sites occur within 50 base pairs of each other, were identified (Table 3). We then asked whether negative selection against proto-silencers could restrict silencing to HML/HMR, telomeres, rDNA and subtelomeres. Because the Rap1 genome-wide consensus binding site was fully functional in its silencing role at the HMR-E, it was possible that proto-silencers could also nucleate silencing in euchromatin. We reasoned that the binding sites would be less likely to be conserved if their occurrences were deleterious, as would be expected if the potential proto-silencers occasionally silenced adjacent genes. Hence, we measured the conservation of Rap1 and Abf1 binding sites across the sensu stricto species, comparing Rap1 and Abf1 binding-site conservation in all intergenic regions to the conservation in the 25 proto-silencers. Conservation was defined as the presence of a binding site in three or more species (MacIsaac et al. 2006). Figure 4: Conservation of Rap1 and Abf1 binding sites across species. Percent of S. cerevisiae binding sites conserved in three or more sensu stricto species, for Rap1 and Abf1 binding sites. The percent of conserved sites is shown in blue for all genomic matches. Shown in red is the percent for the 25 proto-silencer sites where Abf1 and Rap1 matches are within 50 bp of each other. The differences within Rap1 and Abf1 were not significant by the χ² test at the 0.05 P-value cut-off. The binding sites for both Rap1 and Abf1 in the potential proto-silencers were no less likely to be conserved than genome-wide Rap1 and Abf1 binding sites outside of this context (Figure 4). This result suggested that there was no spurious silencing at the proto-silencers and that the existence of such proto-silencers was not deleterious for the cell. As an additional test of selection against co-occurrence of Rap1 and Abf1 binding sites, we tested for signs of such negative selection by asking whether Abf1 binding sites occur less frequently in intergenic regions with known Rap1 binding, and vice versa, compared to regions bound by other transcription factors. In line with our previous results, the frequency of Abf1 binding sites was not decreased in Rap1-bound intergenic regions, nor was the frequency of Rap1 binding sites in Abf1-bound regions (Figure 5). DISCUSSION For budding yeasts, silencing is a tricky balancing act. On one hand, the transcription of the HML and HMR loci must be robustly repressed at all times.
On the other hand, silenced chromatin must be prevented from ectopic formation in most of the genome. This conflict is particularly challenging because the silencers use DNA-binding proteins that are important in euchromatin function at other regions of the genome. The problem is conceptually similar to the need to have one and only one centromere per chromosome, avoiding neocentromere activation at other sites [reviewed in (Sullivan et al. 2001)]. Strikingly, a Rap1 binding site in HMR-E, although a poor match to the Rap1 binding profile, was conserved in five species despite being located in the midst of a rapidly evolving region. This apparent paradox suggested the possibility that the role of Rap1 could be tailored to silencing or transcription activation by the particular sequence of its binding site within a silencer. However, the data presented here established clearly that a consensus version of the Rap1 site at the HMR-E silencer could stably maintain silencing in a population of cells, could establish silencing as quickly as a natural silencer, and was at least as robust to sensitizing mutations as a natural silencer. These results were puzzling, considering the ability of Abf1 and Rap1 bindings sites to establish silencing at HMR-E in the absence of an ORC binding site, and the ability of multiple Rap1 sites to nucleate telomeric silencing (Brand et al. 1987;Cockell et al. 1995;Hecht et al. 1996). If the consensus binding sites for Rap1 and Abf1 can initiate silencing, how does the cell prevent ectopic silencing in the many intergenic regions in which Rap1 and Abf1 sites co-occur? We investigated this question by analyzing whether there is purifying selection against co-occurrence of Rap1 and Abf1 motifs near each other. Our results showed no evidence of deleterious ectopic Sir-protein recruitment, as measured by the absence of a signal of selection against adjacent Rap1-Abf1 binding sites. Our work highlights a missing dimension to an understanding of the selective forces acting on the anatomy of silencers. We conclude that the selective force for the retention of the particular Rap1 site in the HMR-E silencer is apparently unrelated to silencing, with some other function providing the selective pressure. Silencers associate with cohesins (Chang et al. 2005), but many other sites do as well, making that explanation of selective force unlikely. Silencer function is required in mating-type interconversion to distinguish donor cassettes from recipient loci through the protection of the HO cut site at HML and HMR, which would seem to have the same requirements as silencing per se. However, the pattern of mating-type interconversion is highly regulated, and only partially explained by the recombinational enhancer near HML. It is conceivable that some aspect of the way that Rap1 binds a silencer plays a nuanced but sufficiently compelling contribution to interconversion to explain the enigma of the Rap1 binding site conservation (Haber 2012).
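Purely as an illustration of the two genome-wide tallies described in the Materials and Methods and Results above (conservation of binding-site matches in proto-silencers versus all intergenic regions, and motif-match frequency per 10 kb of factor-bound intergenic sequence), the following Python sketch shows how such quantities could be computed. The input structures, counts and function names are hypothetical placeholders, not the authors' actual pipeline or data.

# Hypothetical sketch of the two tallies described in the text; inputs are placeholders.
from scipy.stats import chi2_contingency

def conservation_comparison(genomewide, protosilencer):
    """Each argument: (n_conserved_in_3plus_species, n_total_matches).
    Returns the chi-square p-value for the difference in conservation."""
    table = [
        [genomewide[0], genomewide[1] - genomewide[0]],
        [protosilencer[0], protosilencer[1] - protosilencer[0]],
    ]
    _, p, _, _ = chi2_contingency(table)
    return p

def matches_per_10kb(n_matches: int, region_lengths_bp: list[int]) -> float:
    """Motif matches per 10 kilobases of bound intergenic sequence."""
    return 10_000 * n_matches / sum(region_lengths_bp)

# Example with made-up numbers: 40 of 100 genome-wide matches conserved vs 10 of 25
# proto-silencer matches conserved, and 12 matches in 80 kb of bound sequence.
print(conservation_comparison((40, 100), (10, 25)))   # large p-value: no detectable difference
print(matches_per_10kb(12, [30_000, 50_000]))         # 1.5 matches per 10 kb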
2017-06-16T02:32:58.488Z
2012-12-01T00:00:00.000
{ "year": 2012, "sha1": "d82931c80bcc5086741e38ebf554038d2cd8bccf", "oa_license": "CCBY", "oa_url": "http://www.g3journal.org/content/2/12/1555.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cf4749a32ace54dee0dd69c053f683862b4b78c9", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
234762850
pes2o/s2orc
v3-fos-license
A personal tribute to Louis Nirenberg: February 28, 1925--January 26, 2020 I first met Louis Nirenberg in person in 1972 when I became a Courant Instructor. He was already a celebrated mathematician and a suave sophisticated New Yorker, even though he was born in Hamilton, Canada and grew up in Montreal. In this informal style paper I will describe some of his famous papers, some of our joint work and other work he inspired. I will concentrate on some of Louis' work inspired by geometric problems beginning around 1974, especially the method of moving planes and implicit fully nonlinear elliptic equations. I have also sprinkled throughout some comments on his character and personality that I believe contributed to his great success. Introduction: California meets New York 1972. I first met Louis Nirenberg in person 1 in 1972 when I became a Courant Instructor after my Ph.d work at Stanford. He was already a celebrated mathematician and a suave sophisticated New Yorker, even though he was born in Hamilton, Canada and grew up in Montreal. I was born in Brooklyn, New York and a post-doc at Courant was a return home for me. I had been greatly influenced by my time at Stanford in the late sixties by the hippie culture and the political anti-war activism. The picture below shows more or less how Louis and I comparatively looked when I arrived at Courant institute. Louis accepted and welcomed me without hesitation as he did with almost everyone he met. One of his greatest and most endearing attributes which served him well throughout his entire life was his openness to young people and new ideas. He lived on the upper west side in a large New York style apartment and loved books and movies and of course food and travel. He was funny with a wry sense of humor which was often self-deprecating. If he really liked an idea, he would say, "that's good enough to steal". He once took me to Veniero's, a famous Italian pastry and coffee place in the East village and as we were stepping up to the counter to order he told me he wasn't going to treat me because he was cheap! In fact he was an incredibly generous person who gave freely of his time and who often invited the young faculty at Courant to dinner at his home. 2 In this informal style paper, I will describe some of Louis' famous papers, some of our joint work and other work he inspired. 3 For reasons of exposition, I will not follow a strict chronological order and I will concentrate on some of Louis' work inspired by geometric problems beginning around 1974, especially the method of moving planes and implicit fully nonlinear elliptic equations. During the twenty year period 1953-1973 4 he produced an incredible body of work in many fields of pde, geometric analysis and complex geometry starting with his famous thesis work [1] solving the classical Weyl and Minkowski problem via a continuity method, the Newlander-Nirenberg theorem on the integrabilty of almost complex structures [8] and its application to the deformation of complex structures [9], his work with Bers on the representation of solution of linear elliptic equation in the plane [3,4] (see also [2]), his work with Agmon and Douglis [5,11,12] on linear elliptic boundary value problems, the John-Nirenberg inequality and BMO [15], his work with Kohn on non-coercive boundary value problems and pseudo-differential operators [17,18,19] , his work with Trèves on local solvability [16,21,22], his many papers on Sobolev type inequalities (see for example [13]), his work with Caffarelli and R. 
Kohn on partial regularity for suitable weak solutions of the Navier-Stokes equation [30] and the list goes on. Of course I cannot begin to talk about this work but fortunately there are a number of excellent appreciations of his work, for example [83] , [84], [85], [86], [87], [88], [89]. Louis loved to collaborate and I apologize for omitting many other 2 At a Courant Christmas party I was surprised to see that he also loved to dance! 3 I have also sprinkled throughout some comments on his character and personality that I believe contributed to his success. important results. 5 His brilliant student and frequent collaborator Yanyan Li was especially devoted to Louis in his later years. 2. Louis' work on elliptic systems finds some novel applications. Louis with Agmon and Douglis introduced an incredibly general notion of a linear elliptic system N j=1 L kj (y, 1/i ∂)u j (y) = f k (y), 1 ≤ k ≤ n that is somewhat mysterious and hard to understand for the non-expert. Here the unknowns are u 1 , . . . u n , defined in a bounded domain Ω ⊂ R N or sometimes R N + . It involves assigning integer weights s k ≤ 0 to each equation and t k ≥ 0 to each unknown with max k s k = 0, order L kj ≤ s k + t j (consistency), in order to determine the principal part L kj of the system, that is the part of L kj where the order is precisely s k +t j . Ellipticity then becomes a maximal rank condition on L kj (y, ξ). One of the oddities of this notion of ellipticity (as was pointed out in the paper of Douglis and Nirenberg [5]) is that it can be destroyed by a non-singular transformation of the equations and the dependent variables and is thus seemingly dependent on the particular way the equations and unknowns are presented. We will in fact exploit this in the Lewy example below! I want to mention two novel applications of this work to what is called " free boundary regularity" taken from a joint paper [25] of Louis with David Kinderlehrer and me. 2.1. An example of Hans Lewy. The first example is due to Hans Lewy. Louis was a great admirer of Hans Lewy and was well acquainted with his work. Let u, v ∈ C 2 (Ω ∪ S) be solutions of the system where λ(x) = 0 is analytic and ∂Ω is partly contained in the hyperplane S = {x n = 0}. Lewy proved that for n = 2, u and v extend analytically across S. This is very surprising since the boundary conditions, which say that u and v share the same boundary conditions are "not coercive". To show that coercivity is present, we introduce the "splitting", w = u − v and rewrite the system (2.1), (2.2) in terms of u and w: Using the ADN general theory, we assign the weights s = 0 to equation (2.3) and s= -2 to equation (2.4) and the weights t u = 2 and t w = 4 to the unknowns. These weights are "consistent" and with this choice, the principal part of the system is which is certainly elliptic. The boundary conditions w = w xn = 0 are now just Dirichlet conditions for this fourth order equation which are well known to be coercive. The end result is that u and v are analytic in Ω ∪ S (or C ∞ if λ ∈ C ∞ ). One point to emphasize is that it is not necessary to eliminate u and often not possible in other related problems. 2.2. The regularity of the liquid edge. Our second example concerns the socalled "liquid edge" of a stable configuration of soap films. Consider a configuration of three minimal surfaces in R 3 meeting along a C 1+α curve, the liquid edge γ, at equal angles of 2π/3. 
Such a configuration represents one of the stable singularities that soap films can form (Jean Taylor [58], J.C.C. Nitsche [62]). We can reformulate this configuration as a classical free boundary problem. With the origin of a system of coordinates on the curve $\gamma$, let us represent the three surfaces as graphs over the tangent plane to one of them at the origin. Denote by $\Gamma$ the orthogonal projection of $\gamma$ onto this plane. Two of the functions, say $u_1, u_2$, will be defined on one side $\Omega^+$ of $\Gamma$, while the third, $u_3$, will be defined on the opposite side $\Omega^-$ of $\Gamma$. We now extrapolate and suppose $\Omega^\pm \subset \mathbb{R}^n$ and $\Gamma$ is a $C^{1+\alpha}$ hypersurface. Suppose that the graphs of $u_1$ and $u_2$ meet at angles $\mu_1, \mu_2$. Then we have an overdetermined system for the $u_i$, $i = 1, 2, 3$, in which the operator is the minimal surface operator in nondivergence form. One has to introduce $w(x) = (u_2 - u_1)(x)$ and the transformation $y = (x', w(x))$, $x \in \Omega^+$, the so-called zeroth order partial Legendre transform [24]. The mapping $x \to y$ transforms a neighborhood of 0 in $\Omega^+$ into a neighborhood $U \subset \{y : y_n > 0\}$ and a portion of $\Gamma$ into a set $S \subset \{y : y_n = 0\}$; it has an inverse $x = (y', \psi(y))$, $y \in U \cup S$. The simplicity of the reflection mapping, in combination with the zeroth order partial Legendre transform and the ADN theory, is the real point of this example.

3. The influence of Alexandrov and Serrin on Nirenberg's work.

A central question in differential geometry has been to classify all possible "soap bubbles", or more formally put: classify all closed constant mean curvature hypersurfaces in $\mathbb{R}^{n+1}$ (or more generally in special Riemannian manifolds $M^{n+1}$). Amazingly, the first such result was proven by J. H. Jellett [60] in 1853. (John Hewitt Jellett, 1817-1888, was an Irish mathematician who in 1849 [59] studied how fixing a non-asymptotic curve in an analytic surface would render it infinitesimally rigid.) Jellett's theorem: a starshaped closed surface M in $\mathbb{R}^3$ with constant positive mean curvature is the standard round sphere. A modern presentation of his proof, which extends to $\mathbb{R}^{n+1}$, goes as follows. Let $X : M^n \to \mathbb{R}^{n+1}$ be the position vector and N the outward normal to M. We assume $X(M)$ has constant mean curvature $H > 0$ and is starshaped about the origin, i.e. $X \cdot N \ge 0$. If $\Delta_M$ is the Laplace-Beltrami operator and A the second fundamental form of M, then, integrating the identity (3.5) over M (see the reconstruction sketched below), we conclude $|A|^2 = nH^2$, i.e. M is totally umbilic, so a sphere.

Heinz Hopf lectures at Courant and Stanford. In his visit to Courant and Stanford in 1955-1956, Hopf lectures on various topics in differential geometry in the large and presents his ingenious proof that if M is a closed immersed constant mean curvature surface in $\mathbb{R}^3$ with genus 0, then M is a round sphere. He also sketched the proof of a new result (not published at the time) of Alexandrov: a closed embedded surface of constant mean curvature in $\mathbb{R}^3$ is a round sphere. Hopf goes on to speculate: "it is my opinion that this proof by Alexandrov, especially the geometric part, opens up important new aspects in differential geometry in the large". Alexandrov's idea is to show, using the maximum principle, that an embedded hypersurface has a plane of symmetry for any direction $e_1$, and thus is a sphere. Note that any open set of directions suffices.

In 1971, Serrin published a paper in which he showed that if u is a solution of the overdetermined (free boundary) problem
\[
\Delta u = -1 \ \text{in } \Omega, \qquad u = 0, \ u_\nu = c < 0 \ \text{on } \partial\Omega
\]
(with $\Omega$ a $C^2$ domain), then $\Omega$ is a ball.
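The identity (3.5) behind Jellett's computation is presumably the standard pair of Minkowski-type formulas. The following is a hedged reconstruction in the notation above ($\nu = N$ the outward unit normal; signs depend on orientation conventions):
\[
\Delta_M \tfrac12 |X|^2 = n - nH\,\langle X,\nu\rangle,
\qquad
\Delta_M \langle X,\nu\rangle = nH - |A|^2\,\langle X,\nu\rangle \quad (H \ \text{constant}).
\]
Integrating both identities over the closed hypersurface M gives $\int_M (1 - H\langle X,\nu\rangle)\,dA = 0$ and $\int_M (nH - |A|^2\langle X,\nu\rangle)\,dA = 0$; combining the two yields $\int_M (|A|^2 - nH^2)\langle X,\nu\rangle\,dA = 0$. Since $|A|^2 \ge nH^2$ by the Cauchy-Schwarz inequality and $\langle X,\nu\rangle \ge 0$ by starshapedness, the integrand vanishes and M is totally umbilic, hence a round sphere.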
A more physically appealing example in Jim's paper is that if a liquid in a capillary tube has both constant height and constant contact angle on the boundary, then the tube must have circular cross section. The important part of Jim's paper was that he used Alexandrov's method of moving planes and developed an important improvement of Hopf's boundary point lemma, as well as a continuous (sweeping) use of the maximum principle. These techniques were simplified and improved by Gidas, Ni and Nirenberg [27] in their first paper, but Serrin laid out the technical groundwork.

3.3. Symmetry for solutions of elliptic pde; motivating questions. Many problems in Yang-Mills theory, astrophysics and reaction-diffusion equations are modeled by equations of the form $\Delta u + f(u) = 0$. For example, the power nonlinearity $f(u) = u^\alpha$ occurs frequently. The range $1 \le \alpha < \frac{n+2}{n-2}$ is called the subcritical range, because of the Sobolev embedding theorem, while the range $\alpha > \frac{n+2}{n-2}$ is called supercritical. The case $\alpha = \frac{n+2}{n-2}$ is particularly important because the equation becomes conformally invariant; this is the conformally flat case of the famous Yamabe problem.

Suppose we consider the Dirichlet problem $\Delta u + f(u) = 0$, $u > 0$ in the ball B, with $u = 0$ on $\partial B$. Is u radially symmetric, i.e. $u = u(|x|)$? There is a simple counterexample if we allow u to change sign: take $f(u) = \lambda_k u$, where $\lambda_k$ is the k-th Dirichlet eigenvalue of B; then the eigenfunctions are not radially symmetric. Next, let $\Delta u + f(u) = 0$, $u \ge 0$ in $\mathbb{R}^n$. Is u radially symmetric? The famous equation
\[
\Delta u + u^{\frac{n+2}{n-2}} = 0, \qquad u > 0 \ \text{in } \mathbb{R}^n,
\]
has the explicit solutions
\[
u_{\lambda, x_0}(x) = \frac{\big(n(n-2)\lambda^2\big)^{\frac{n-2}{4}}}{\big(\lambda^2 + |x - x_0|^2\big)^{\frac{n-2}{2}}}, \qquad \lambda > 0, \ x_0 \in \mathbb{R}^n.
\]
On the other hand, the subcritical equation $\Delta u + u^\alpha = 0$, $u > 0$ in $\mathbb{R}^n$, has only the trivial solution (Gidas-Spruck [75]).

3.4. Singular solutions. There is also a complicated family of singular solutions of the form $u(x) = r^{-\frac{n-2}{2}}\,\psi(t)$, where $r = |x|$, $t = -\log r$ and $\psi(t)$ is a periodic solution of a translation invariant ODE. The simplest singular solution is
\[
u(x) = \Big(\frac{(n-2)^2}{4}\Big)^{\frac{n-2}{4}}\, |x|^{-\frac{n-2}{2}}.
\]
Thus the classification of singular solutions is quite challenging.

3.5. A general symmetry result of Nirenberg and Berestycki for a bounded domain. Nirenberg refined his original method with Gidas and Ni in a paper with Berestycki [31] by incorporating the Alexandrov-Bakelman-Pucci (ABP) maximum principle into the argument. We sketch the simplest case of this improved proof, which again uses another important idea of Alexandrov. These papers have had tremendous impact.

Theorem 3.2. Let $\Omega$ be a bounded domain which is convex in the $e_1$ direction and symmetric with respect to the plane $\{x_1 = 0\}$, and let u be a positive solution of $\Delta u + f(u) = 0$ in $\Omega$, $u = 0$ on $\partial\Omega$, with f locally Lipschitz. Then u is symmetric with respect to $x_1$ and decreasing in $x_1$ for $x_1 > 0$.

3.6. Alexandrov's generalized gradient map and Monge-Ampère measure. For a continuous function u, let $\Gamma_u$ denote its lower contact set, the set of points where the graph of u admits a supporting hyperplane from below, and let $\partial u(x)$ denote the generalized gradient, the set of slopes of such hyperplanes at x. If $u \in C^1$ and $x \in \Gamma_u$, then $\partial u(x) = \nabla u(x)$. If $u \in C^2$ and $x \in \Gamma_u$, then $\nabla^2 u(x) \ge 0$. See the books of Figalli [82] and Gutierrez [81] for a detailed discussion of the ideas of Alexandrov.

The following beautiful argument of Cabré [78] utilizes Alexandrov's contact set to prove the classical isoperimetric inequality. Given $\Omega$, solve the Neumann problem $\Delta u = 1$ in $\Omega$, $u_\nu = c$ on $\partial\Omega$, where $c = |\Omega|/|\partial\Omega|$ is forced by compatibility. The Claim is that $B_c(0) \subset \nabla u(\Gamma_u \cap \Omega)$. For if a hyperplane $\operatorname{graph}(p \cdot x + k)$ with $|p| < c$, translated upwards in $\mathbb{R}^{n+1}$ "from $-\infty$", has first contact on $\partial\Omega$, then $|p| \ge u_\nu = c$, a contradiction, proving the Claim. Hence
\[
\omega_n c^n = |B_c| \le \int_{\Gamma_u \cap \Omega} \det \nabla^2 u \, dx \le \Big(\frac{\Delta u}{n}\Big)^n |\Omega| = n^{-n}|\Omega|,
\]
which is the classical isoperimetric inequality.

3.7. A simple version of the ABP maximum principle.

Proof. Assume $M = \sup_\Omega v > 0$ is achieved at an interior point. Let $|p| < M/d$, with d the diameter of $\Omega$, and translate the plane $\operatorname{graph}(p \cdot x + k)$ "up from $-\infty$" until there is a first contact with the graph of v at a point $x_0$. Then $x_0 \notin \partial\Omega$, for otherwise this would imply $v(x_0) = 0$, which is incompatible with $|p| < M/d$ after comparing the plane with the interior maximum M. Therefore $p = \nabla v(x_0)$, proving the Claim, and the lemma follows.
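The proof of symmetry in the next subsection twice invokes a Theorem 3.8. Presumably this is the standard Berestycki-Nirenberg maximum principle in domains of small measure; the statement below is a reconstruction of that standard version, not necessarily the author's exact wording:
\[
\textbf{Theorem 3.8 (reconstructed).}\ \ \text{Let } w \in C^2(\Omega) \cap C(\overline{\Omega}) \ \text{satisfy}
\]
\[
\Delta w + c(x)\,w \ge 0 \ \text{in } \Omega, \qquad w \le 0 \ \text{on } \partial\Omega, \qquad \|c\|_{L^\infty(\Omega)} \le b .
\]
\[
\text{Then there exists } \delta = \delta(n, b, \operatorname{diam}\Omega) > 0 \ \text{such that } |\Omega| < \delta \ \text{implies } w \le 0 \ \text{in } \Omega .
\]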
3.9. The proof of symmetry by moving planes in the $x_1$ direction.

We will prove that u is decreasing in $x_1$ on $\{x_1 > 0\}$; replacing $e_1$ by $-e_1$ then gives the reflectional symmetry $u(x_1, y) = u(-x_1, y)$ for any $(x_1, y) \in \Omega$.

Step 2. Beginning and ending. For $a - \delta < \lambda < a$, the cap $\Sigma_\lambda$ is narrow and so has small volume for $\delta$ small. Thus we may apply the ABP maximum principle (Theorem 3.8) to (3.6) and conclude that $w < 0$ inside $\Sigma_\lambda$. Note also that, by the Hopf boundary point lemma, the process can continue; let $\lambda_0$ denote the smallest value such that $w = w_\mu < 0$ in $\Sigma_\mu$ for all $\mu \in (\lambda_0, a)$. If $\lambda_0 = 0$ we are done. Assume for contradiction that $\lambda_0 > 0$. By continuity, $w \le 0$ in $\Sigma_{\lambda_0}$, and moreover $w \not\equiv 0$ in $\Sigma_{\lambda_0}$. Therefore the strong maximum principle implies

(3.7) $w < 0$ in $\Sigma_{\lambda_0}$.

Choose a simply connected closed set K in $\Sigma_{\lambda_0}$ with smooth boundary (which is nearly all of $\Sigma_{\lambda_0}$ in measure) such that $|\Sigma_{\lambda_0} \setminus K| < \delta$, with $\delta$ as in Theorem 3.8. By (3.7) there exists $\eta > 0$ so that $w \le -\eta$ on K, and so by continuity

(3.9) $w_{\lambda_0 - \varepsilon} < 0$ on K

for $\varepsilon$ sufficiently small. Now we are in the situation that $w_{\lambda_0 - \varepsilon}$ satisfies the differential inequality (3.6) in $\Sigma_{\lambda_0 - \varepsilon} \setminus K$ with $w_{\lambda_0 - \varepsilon} \le 0$ on its boundary. However, for $\varepsilon$ sufficiently small, we can make $|\Sigma_{\lambda_0 - \varepsilon} \setminus K| < \delta$. Once again applying the ABP maximum principle (Theorem 3.8) gives $w_{\lambda_0 - \varepsilon} \le 0$ in $\Sigma_{\lambda_0 - \varepsilon} \setminus K$. Combined with (3.9), and after one more use of the strong maximum principle, we have $w_{\lambda_0 - \varepsilon} < 0$ in $\Sigma_{\lambda_0 - \varepsilon}$, proving the Claim and giving a contradiction at last!

Remark 3.10. Louis' many contributions (with his collaborators) on the basic method of moving planes [27], [28] and its variant "the sliding method" [31] have become a standard tool in elliptic pde. He continued finding new twists and applications of moving planes to interesting pde problems in many papers with Berestycki, Caffarelli and Yanyan Li [39,41,40,43,44,45,47,48,55]. His work has inspired thousands of papers and also some important variants and extensions to fully nonlinear elliptic pde [68], including a measure-theoretic version [66], [69] used to prove asymptotic symmetry without growth assumptions at singularities (including infinity), and the method of "moving spheres" [70,71,73], which is particularly important for conformally invariant elliptic pde. There are also many, many other important contributions.

4. The Monge-Ampère boundary value problem.

The Monge-Ampère equation $\det u_{ij} = \psi(x, u, \nabla u) > 0$ is the prototypical fully nonlinear elliptic pde. To see when it is elliptic, we linearize at a point $x_0$:
\[
\frac{d}{dt}\Big|_{t=0} \det\big(D^2 u(x_0) + t\,D^2\varphi(x_0)\big) = A^{ij}(x_0)\,\varphi_{ij}(x_0),
\]
where $A^{ij}$ is the cofactor matrix of $u_{ij}(x_0)$. For ellipticity we need $(A^{ij}) > 0$, that is, u convex. The boundary value problem for the Monge-Ampère operator is classically formulated as follows. Let $\Omega \subset \mathbb{R}^n$ be a smooth strictly convex domain and let $\varphi$ and $\psi > 0$ be smooth. Find a strictly convex solution $u \in C^\infty(\overline{\Omega})$ of the boundary value problem

(4.1) $\det(u_{ij}) = \psi(x, u, \nabla u)$ in $\Omega$,
(4.2) $u = \varphi$ on $\partial\Omega$.

4.1. Some history. In 1971 Pogorelov showed that for the special case $\varphi \equiv 0$, $\psi = \psi(x) > 0$, there is a unique weak solution u in the sense of Alexandrov, and moreover $u \in C^\infty(\Omega)$. His proof uses his famous interior second derivative estimate and Calabi's ingenious interior estimates for third derivatives (in the metric $ds^2 = u_{ij}\,dx^i dx^j$). In 1974 Louis announced at the International Congress in Vancouver his joint work with Calabi providing a solution of (4.1)-(4.2) in complete generality. Unfortunately the argument contained a gap and fell through. Louis, Luis Caffarelli and I [32], and independently Nikolai Krylov (see the expository article [67] and the references therein), proved the now classical existence theorem: Suppose $\Omega$ is a strictly convex domain with $\partial\Omega$, $\varphi$, $\psi$ smooth, $\psi > 0$. Assume in addition that for the boundary data $\varphi$ there is a strictly convex subsolution $\underline{u}$ with $\underline{u} = \varphi$ on $\partial\Omega$. Then there exists a strictly convex solution $u \in C^\infty(\overline{\Omega})$ to (4.1)-(4.2). If $\psi_u \ge 0$, the solution is unique.
Remark 4.2. There are two main difficulties in proving the existence of smooth strictly convex solutions via the continuity method, in which one tries to prove a priori estimates for any admissible solution. The first one is to show that the second normal derivative is a priori bounded on the boundary (assuming all other second derivatives are a priori bounded), that is, $u_{nn} \le C$ on $\partial\Omega$. The standard way to do this is from the equation $\det D^2 u = \psi$: in a suitable frame one can write the cofactor expansion
\[
\det D^2 u = A^{nn} u_{nn} + (\text{terms not involving } u_{nn}),
\]
where $A^{ij}$ is the cofactor matrix of $u_{ij}$, and then solve for $u_{nn}$. Thus one needs to prove strict convexity of the solution at the boundary. In the simplest case $\varphi = 0$, assuming strict convexity of the domain, this boils down to showing $u_n \ge c_0 > 0$ with $c_0$ a controlled constant, for $e_n$ the outer unit normal. The second major problem is that $C^2$ estimates do not suffice, because the equation is fully nonlinear elliptic: one needs to obtain global $C^{2+\alpha}$ estimates. This can be done strictly in the interior of $\Omega$ using the Evans-Krylov theorem or Calabi's third derivative estimates. However, the global estimates required a new idea.

The following simple geometrical example suggests an improved result.

Example 4.3. Let $\Gamma_1, \Gamma_0$ be strictly convex smooth closed codimension 2 hypersurfaces in parallel planes, say $x_{n+1} = 1, 0$ respectively. Is there a hypersurface S of constant Gauss curvature $K_0$ with boundary $\Gamma_1 \cup \Gamma_0$, for $K_0$ sufficiently small? Intuitively the answer is clearly yes. It was shown in [74] that there is a unique smooth solution, as expected. However, since the relevant domain $\Omega$ is not convex, the classical existence theorem does not apply. This motivated Bo Guan and me [75,76] to prove the subsolution existence theorem, which holds in great generality: if there exists a locally strictly convex subsolution $\underline{u} \in C^2(\overline{\Omega})$ with $\underline{u} = \varphi$ on $\partial\Omega$, then there exists a locally strictly convex solution $u \in C^\infty(\overline{\Omega})$ to (4.1)-(4.2). If $\psi_u \ge 0$, the solution is unique. Moreover, any admissible solution satisfies the a priori estimate $\|u\|_{C^{2+\alpha}(\overline{\Omega})} \le C$ for a controlled constant C.

5. Implicitly defined fully nonlinear elliptic pde.

The work of CNS on Monge-Ampère equations has a natural and important extension to implicitly defined fully nonlinear pde. Let $A = (a_{ij})$ be a symmetric $n \times n$ matrix (or, more generally, a natural tensor on a Riemannian manifold) and define
\[
F(A) = f(\lambda_1, \ldots, \lambda_n),
\]
where the $\lambda_i$ are the eigenvalues of A and $f(\lambda)$ is a symmetric function (say smooth, for simplicity). Then $F(A)$ will also be smooth. When $A = (u_{ij}(x))$, $x \in \Omega$, and $f(\lambda) = \Pi \lambda_i$, we have $F(A) = \det u_{ij}(x)$, and we recover the Monge-Ampère operator, which is elliptic when $\lambda$ lies in the positive cone $\Gamma^+_n = \{\lambda \in \mathbb{R}^n : \lambda_i > 0\}$. What happens in general?

5.1. Ellipticity and concavity. Let f be defined in a symmetric open convex cone $\Gamma$ in $\mathbb{R}^n$ with vertex at the origin, with $\Gamma^+_n \subset \Gamma$. Ellipticity corresponds to the monotonicity condition $f_{\lambda_i} > 0$ in $\Gamma$ for each i, and one also requires f to be concave in $\Gamma$. The concavity condition is very important for the regularity theory of fully nonlinear elliptic pde.

5.2. Gårding's theory of hyperbolic polynomials. Louis was an incredibly thorough mathematician (i.e. he didn't miss much) with a vast knowledge of the pde literature, and he knew Gårding's work on hyperbolic polynomials. As the name suggests, Gårding's beautiful theory was related to hyperbolic pde, but it ultimately had many important algebraic consequences and is important in convex analysis (see the article of Reese Harvey and Blaine Lawson [79]).

Definition 5.2. A homogeneous polynomial $p(\lambda)$ of degree m in $\mathbb{R}^n$ is called hyperbolic with respect to a direction $a \in \mathbb{R}^n$ (notation: hyp a) if for all $x \in \mathbb{R}^n$ the polynomial $t \mapsto p(x + ta)$ has exactly m real roots. Thus $p(x + ta) = p(a) \prod_{k=1}^{m}(t + t_k(x))$ with real $t_k(x)$. We may assume $p(a) > 0$. It is easily checked that the directional derivative $\langle a, \nabla \rangle p$ is also hyp a.
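To fix notation for the nesting statement that follows, the relevant definitions are presumably the standard ones: the Gårding cone of a hyperbolic polynomial, and its specialization to the elementary symmetric functions:
\[
\Gamma = \Gamma(p, a) = \text{the connected component of } \{\lambda \in \mathbb{R}^n : p(\lambda) > 0\} \ \text{containing } a,
\]
which is an open convex cone, and, for $p = \sigma_k$ (hyperbolic with respect to $a = (1, \dots, 1)$),
\[
\Gamma_k = \{\lambda \in \mathbb{R}^n : \sigma_1(\lambda) > 0, \ \dots, \ \sigma_k(\lambda) > 0\},
\qquad
\Gamma^+_n = \Gamma_n \subset \Gamma_{n-1} \subset \cdots \subset \Gamma_1 = \{\textstyle\sum_i \lambda_i > 0\}.
\]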
In particular, the Gårding cones for the elementary symmetric functions $\sigma_k(\lambda)$ are nested, starting from the positive cone for $\sigma_n(\lambda)$ and ending with the half-space $\sum \lambda_i > 0$ for $\sigma_1(\lambda)$. Note also that for $k > 1$, $\sigma_k = 0$ on the positive $\lambda_i$ axes. The main result proved by Gårding [57] is an inequality which is equivalent to the statement that $p^{\frac{1}{m}}(\lambda)$ is concave in $\Gamma$. It follows easily that $p_{\lambda_i}(\lambda) > 0$ in $\Gamma$ for all $i = 1, \ldots, n$.

We studied the Dirichlet problem
\[
f(\lambda(D^2 u)) = \psi(x) \ \text{in } \Omega, \qquad u = \varphi \ \text{on } \partial\Omega,
\]
where $f(\lambda)$ is elliptic and concave in a symmetric open convex cone $\Gamma$ (with vertex at the origin) containing the positive cone $\Gamma_n$, and $\limsup_{\lambda \to \partial\Gamma} f(\lambda) \le \inf_\Omega \psi$. Our theorem was hard to prove but is far from optimal. It also excludes some nice examples. Over the years there have been significant improvements. For example, in 2014 Bo Guan [80] proved the essentially optimal result:

Theorem 5.6. Assume only ellipticity and concavity (i.e. no additional structure conditions). If there exists an admissible subsolution $\underline{u} \in C^2(\overline{\Omega})$, $\underline{u} = \varphi$ on $\partial\Omega$, then there exists a unique admissible solution $u \in C^\infty(\overline{\Omega})$. If in addition $f(\lambda)$ satisfies a further structural condition, we may take $\Omega$ to be a Riemannian manifold with smooth boundary.

5.5. Final thoughts. The setting of implicitly defined fully nonlinear elliptic pde initiated in the paper [34] unleashed a tidal wave of research in fully nonlinear pde and geometric analysis (curvature flows, conformal geometry, complex geometry, ...) which is still very much ongoing. Louis Nirenberg's scholarship, insight and experience played a large role in this development, and it is certainly one of his most enduring legacies. Louis was also a superb Ph.D. advisor who produced 46 students and mentored numerous young mathematicians from all over the world; this too is a great part of his legacy. We can all learn from him about the benefits of generosity, openness and collaboration.
Versatile retraction mechanics: Implant assisted en-masse retraction with a boot loop

The purpose of this paper is to explain the versatility offered by the use of arch wires with boot loops in retraction mechanics while taking direct anchorage from mini-screws. Materials and Methods: The materials include mini-screws placed at the appropriate location and retraction arches made of 0.019 × 0.025 inch stainless steel (SS) with boot loops placed distal to the lateral incisors. The mini-screw provides stable anchorage for en-masse retraction of the anterior teeth with the help of a boot loop, using sliding and/or loop mechanics. Results: Arch wires with boot loops have a definite advantage over soldered/crimpable hooks because of the versatility they offer during the process of retraction. Conclusion: An innovative approach combining the advantages of absolute anchorage using mini implants and a retraction arch with a boot loop is presented here.

Introduction

Incisor retraction is one of the primary goals in the majority of orthodontic patients who present with Class I bimaxillary dento-alveolar proclination or Class II division 1 malocclusion. Careful management of anchorage is critical to achieving clinical success in such cases. Treatment mechanics for anterior retraction are many and varied. They can be broadly classified as friction (sliding) mechanics or frictionless (loop) mechanics, based on whether or not the arch wire is made to slide through the buccal segment during retraction. Both have inherent advantages and disadvantages. With the advent of mini-screws, there is an apparent shift of focus from the days of reinforcing anchorage by using differential moments to absolute anchorage using implant-assisted sliding mechanics. [1]

Sliding mechanics mainly involves attachment of a force module from the posterior anchorage unit to the unit which requires retraction. This could be a 6- or 4-unit anterior segment, depending on whether or not the canines have been retracted individually. An inherent disadvantage of such a procedure is the loss of torque of the incisors as retraction proceeds. The loss of torque is inevitable for the following reasons:
• The retraction force is applied away from the center of resistance of the incisor segment
• There exists a play between the wire and the bracket which allows free tipping.
Although a full-sized arch wire can prevent the loss of torque to a certain extent, it cannot be used for sliding mechanics, as sliding is hampered by the excessive friction between the wire and the brackets/molar tube in the buccal segment. [2] To compensate for the loss of torque, the proponents of the third-generation PEA (MBT prescription) have increased the torque values of the incisor brackets. In spite of the increased torque values, it was advocated to incorporate further torque by giving curves in the rectangular wire as and when required. [3] It is very tedious to regain lost torque in edgewise mechanotherapy, as it is difficult to obtain sufficient third-order moments at the wire-bracket interface. Furthermore, the heavier force needed to overcome frictional resistance could lead to loss of anchorage, as the force becomes optimal for the larger anchor unit, leading to its mesial movement. Loop mechanics involves retraction of the anteriors by activation of a loop placed between the anterior and the posterior segments.
[4] Various geometrical designs of loops are available, with stated and perceived advantages and disadvantages. Frictional forces are not brought into play (except when the clinician is activating the loop), as there is no sliding of the wire through the buccal segment. The loss of torque which happens during retraction can be prevented by adding additional gable moments by way of α bends as the teeth are being retracted. Another advantage of loop mechanics is the reinforcement of posterior anchorage using the concept of differential moments (a higher posterior moment, obtained by placement of a β bend, reinforces the posterior anchorage). With the availability of preformed wires with soldered or crimpable hooks, most clinicians prefer the use of friction (sliding) mechanics for the sake of convenience. Temporary anchorage devices (TADs) have provided the clinician with a safe alternative to prevent anchorage loss during sliding mechanics. [5] The TADs serve as an anchor either directly or indirectly. In the direct method, the force module is attached between the mini-screw and the anterior hook, whereas in the indirect method the posteriors are rigidly ligated to the mini-screw, thereby stabilizing them, while the force module is engaged between the posteriors and the anteriors. More often than not, clinicians tend to prefer the use of sliding mechanics while using implant-assisted anchorage. [6]

Materials and Methods

A 22-year-old adult female patient reported with a chief complaint of proclination of the anterior teeth and was diagnosed as having Class I bimaxillary dentoalveolar proclination on a Class I skeletal base. It was decided to treat her with extraction of the first premolars in the upper and lower arches to normalize incisor inclinations, and to achieve a stable buccal occlusion and a harmonious facial profile. Multi-bracket treatment with 0.022 × 0.028 inch Roth prescription (3M Unitek) was chosen. Following extractions, teeth were initially leveled and aligned using a 0.014 inch NiTi followed by a 0.016 inch NiTi wire. The initial alignment wire was replaced with a 0.018 inch Australian arch wire, which was in turn replaced with 0.019 × 0.025 inch stainless steel working arch wires. The arch wire is made with a boot loop placed 2-3 mm distal to the canine bracket. The height of the boot loop depends on the specific case being treated and is designed to allow the retraction force to be directed through the center of resistance of the anterior teeth. The boot loop can also be placed 2-3 mm distal to the lateral incisor bracket if the clinician wants to add anterior torque. After allowing the working arch wire to express the tip for a minimum period of 1 month, mini implants were inserted between the upper second premolar and first molar. Self-drilling mini implants were used, made of titanium and measuring 8 mm in length and 1.5 mm in width (SK Surgical, India). In preparation for placement of the mini implants, diagnostic intraoral periapical (IOPA) radiographs were obtained to assess the bone level, the sinus, and whether adequate separation between the roots of the premolar and molar was available. Topical anesthesia was applied, followed by local anesthetic infiltration. Following anesthesia, the mini implants were inserted at the junction of the attached and movable mucosa, at an angle of 45°, centered along the alveolar bone between the premolar and first molar. Post-insertion IOPA radiographs were taken to ensure that the mini implants were placed as planned.
The force module was attached from the mini-screw to the boot loop to effect anterior retraction. Once retraction was completed, detailing and finishing were done with regular treatment mechanics. The activation of the loop can be brought about by three methods:
• Loops can be activated in the conventional way (by cinching the wire behind the molar tube) for retraction, after stabilizing the posterior segment indirectly with the TAD
• Loop activation can also be done directly, with the implant as the anchor, by engaging a force module between the distal leg of the loop (which is being activated) and the implant
• Loops can also be used as a point of attachment (a substitute for a soldered or crimpable hook) for engaging the force module while doing en-masse retraction with sliding mechanics (this method was followed in the case presented).

Results

The results obtained after retracting the anteriors using sliding mechanics and an arch wire with a boot loop are promising. Though the boot loop was in place for a considerable period, the patient did not complain of any significant discomfort except in the initial 2 or 3 days after loop placement. Adequate torque control was observed post-treatment, as also seen in the torque comparison between the pretreatment and post-treatment cephalograms.

Discussion

The versatility offered by a boot loop in enabling retraction using a mini implant, and the associated clinical guidelines, are explained as follows:
• Engaging the force module to the mesial leg facilitates sliding mechanics (the horizontal arm prevents slippage of the elastomeric/elastic), whereas engaging it to the distal leg activates the loop, facilitating frictionless mechanics. This is particularly useful in breaking the binding encountered during retraction with sliding mechanics. Though there is a discontinuity in the arch wire due to the formation of the loop, no skewing of the arch was seen in the treated case
• An adequate β bend given at the distal leg provides torque to the incisors by virtue of the wire-bracket geometry, whereas additional torque can be given by incorporating α bends. Since the wire stays flat behind the β bend (unlike a reverse curve, which is commonly incorporated to gain torque in the third-generation preadjusted edgewise appliances), free sliding is never affected
• The amount of β bend is decided by the extent of bodily control needed during retraction. Upright incisors require more bodily control during retraction, thereby warranting gable bend placement from the initial days of retraction. However, severely proclined incisors require controlled tipping; hence, the clinician should be watchful of the incisor inclination during retraction and place the gable bends as and when required
• The horizontal arm of the boot loop can be crimped to bring about relative intrusion of the four incisors in cases where there is interference with retraction from extruded lower incisors (until lower incisor intrusion is achieved with the necessary mechanics)
• The height of the horizontal arm can be adjusted to obtain the desired force vector (see the illustrative sketch at the end of this section). A horizontal arm below the level of the implant will produce an intrusive vector during retraction. Care should be taken to prevent torque loss in such situations, as the force is applied below the center of resistance of the incisor segment.
A mini-screw and horizontal arm positioned at the level of the center of resistance of the incisor segment allow pure translation of the incisors, whereas a horizontal arm above the level of the center of resistance will allow a gain in torque of the incisors. Increasing the vertical height should be done carefully, without impinging on the soft tissues.

Versatile retraction mechanics combines the advantages of a boot loop and mini implant-assisted retraction while using sliding mechanics. The multiple options available when combining a mini implant with an arch wire carrying a boot loop allow the clinician to carry out en-masse retraction once a 0.019 × 0.025 inch SS working arch wire is inserted (without any further arch wire change during the phase of retraction) [Figures 1-3]. The amount of beta bend given was based on the extent of proclination of the incisors. The treated case does not show any significant torque loss warranting root corrections during the final stages of treatment [Figure 4 and Table 1].

Table 1 (recoverable values only), pretreatment vs post-treatment: Lower incisor to A-Pog (mm): 6.5 vs 1.5; Interincisal angle: 116 vs 132. Abbreviations: SNA: angle formed between the Sella-Nasion plane and the Nasion-Point A plane; SNB: angle formed between the Sella-Nasion plane and the Nasion-Point B plane; ANB: angle formed between the Nasion-Point A plane and the Nasion-Point B plane; NA: Nasion-Point A plane; FH: Frankfort horizontal; SN: Sella-Nasion plane.
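To make the preceding force-vector reasoning concrete, here is a minimal sketch; the geometry, coordinates and force value are illustrative assumptions and are not taken from the treated case. It resolves the elastic force running from the boot-loop arm to the mini-screw into a retraction component, a vertical (intrusive/extrusive) component, and a moment about the center of resistance of the incisor segment.

```python
import math

def retraction_force_effects(screw_xy, hook_xy, cres_xy, force_g=150.0):
    """Resolve an elastomeric force (in grams) running from the boot-loop
    hook to the mini-screw into horizontal/vertical components and the
    moment it generates about the incisor segment's center of resistance."""
    dx = screw_xy[0] - hook_xy[0]   # distal direction (mm)
    dy = screw_xy[1] - hook_xy[1]   # occluso-apical direction (mm)
    length = math.hypot(dx, dy)
    fx = force_g * dx / length      # retraction (horizontal) component
    fy = force_g * dy / length      # intrusive (+) / extrusive (-) component
    # 2-D moment about the center of resistance: r x F (g*mm);
    # a nonzero value means the force line misses C-res and will tip the segment.
    rx, ry = hook_xy[0] - cres_xy[0], hook_xy[1] - cres_xy[1]
    moment = rx * fy - ry * fx
    return fx, fy, moment

# Illustrative numbers only: hook 4 mm below the screw, C-res apical to the hook.
fx, fy, m = retraction_force_effects(screw_xy=(25.0, 8.0),
                                     hook_xy=(0.0, 4.0),
                                     cres_xy=(0.0, 8.0))
print(f"retraction {fx:.0f} g, vertical {fy:.0f} g, moment {m:.0f} g*mm")
```

With the hook below the screw the vertical component is intrusive, and because the force line passes below the center of resistance the nonzero moment illustrates the torque-loss tendency described above.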
A Dynamic Approach to the FDI-Environment Nexus: The Case of China and India

Cointegration analysis and a vector error-correction (VEC) model are applied to examine the short- and long-run relationships among foreign direct investment (FDI), economic growth, and the environment in China and India. The results show that FDI inflow plays a pivotal role in determining the short- and long-run movement of economic growth through capital accumulation and technical spillovers in the two countries. However, FDI inflow in both countries is found to have a detrimental effect on environmental quality in both the short and long run, supporting the pollution haven hypothesis. Finally, it is found that, in the short run, there exists a unidirectional causality from FDI inflow to economic growth and the environment in China and India: a change in FDI inflow causes a consequent change in environmental quality and economic growth, but the reverse does not hold.

INTRODUCTION

Since the economic reform and opening up to the outside world in the late 1970s and early 1980s, China and India have been the fastest growing economies in the world. Between 1992 and 2005, for example, the Chinese and Indian economies grew on average by approximately 10% and 7% annually (Figure 1). Accordingly, foreign direct investment (FDI) inflows to the two countries have grown rapidly during the same period. Between 2000 and 2005, for example, the average annual inflows of FDI in China and India reached $54.5 billion and $5.2 billion, respectively, more than double the amount of the 1992-1999 period (Table 1).

A plethora of studies has been conducted on the economics of FDI in developing countries over the last three decades. Theoretical research in this area can be roughly categorized into two groups. The first group of studies has provided the theoretical rationale for the effect of FDI inflows on economic growth, which is known as the FDI-growth nexus (e.g., Romer 1986, Lucas 1988, Rebelo 1991, Helpman and Grossman 1991). For example, modern endogenous growth theory shows that long-run economic growth can result from more open, liberalized government policies conducive to FDI inflows. More specifically, if capital is considered as knowledge rather than just plant and equipment, then the inflow of foreign capital can itself result in technological change and spillovers of ideas across countries (Grossman and Helpman 1991). With capital exhibiting such increasing returns to scale, changes in FDI inflows can be an important vehicle for long-run economic growth in developing countries.

The second group of studies has attempted to relate theoretical considerations to the impact of FDI on the environment in developing countries, which is referred to as the FDI-environment nexus (e.g., Pethig 1976, Copeland and Taylor 1994, Porter and van der Linde 1995). For example, the pollution haven model asserts that, under globalization, the relatively lax environmental standards in developing countries become an attractive comparative advantage to pollution-intensive foreign capital seeking weaker regulations to avoid paying costly pollution control compliance expenditures domestically (Copeland and Taylor 2003). On the other hand, the Porter hypothesis claims that, since environmental quality is a normal good, as income increases with FDI inflows, developing countries tend to adopt stricter environmental regulations (Porter and van der Linde 1995).
To date, on the other hand, empirical studies have mostly concentrated on how the inflow of FDI affects economic growth in developing countries (e.g., Tsai 1991, Wang and Swain 1997, Liu et al. 1997, Sun and Parikh 2001, Bende-Nabende et al. 2001, Liu et al. 2002, Shan 2002, Chakraborty and Basu 2002, Yao 2006, and Chang 2007). For example, Wang and Swain (1997) employ a single-equation model (i.e., ordinary least squares) to analyze the factors affecting foreign capital inflows into China and Hungary; they show a positive relation between changes in the level of GDP and the inflow of FDI in those countries. Sun and Parikh (2001) use a structural model (i.e., three-stage least squares) to examine the relationship between inward FDI, exports and economic growth in China; they find that an increase in FDI (and exports) has a positive and significant impact on Chinese economic growth. Chakraborty and Basu (2002) adopt a non-structural time series model (i.e., vector error-correction) to explore the dynamic interaction between FDI and economic growth in India; they discover evidence that GDP has a significant positive effect on inflows of FDI for the Indian economy in both the short and long run.

By contrast, empirical analyses of the FDI-environment nexus in developing countries have received little attention. To the best of our knowledge, Smarzynska and Wei (2001), Xing and Kolstad (2002), Eskeland and Harrison (2003), and He (2006) are the only four empirical studies that have attempted to address this issue. For example, Xing and Kolstad (2002) examine the effect of U.S. FDI on environmental quality in both developed and developing countries; they find that developing countries tend to utilize lenient environmental regulations as a strategy to attract dirty industries from developed countries. He (2006) explores the relationship between FDI and the environment in China; he discovers evidence that an increase in FDI inflow results in a deterioration of environmental quality. However, these studies implicitly assume a one-way causality from measures of environmental quality/regulations (SO2 and CO2 emissions or pollution abatement cost) and/or economic growth (GDP) to FDI, and adopt a structural model (i.e., reduced-form equations) to estimate the impacts of FDI based on such causality. As such, previous studies have neglected the endogenous nature of, as well as the possible causal relationships between, FDI (and economic growth) and environmental quality in a multivariate framework; that is, whether an increase in FDI in developing countries, caused by their weaker regulations, deteriorates environmental quality or, alternatively, FDI-related spillovers of knowledge tend to improve environmental quality via economic growth. In other words, no study has dealt with the dynamic movements of FDI (and economic growth) and environmental quality.

The contribution of this study, therefore, is to examine the FDI inflow-environment nexus in a dynamic framework of multivariate time series. For this purpose, we assess the short- and long-run relationships among FDI, sulfur dioxide (SO2) emissions and GDP in China and India using the Johansen cointegration analysis and a vector error-correction (VEC) model. The Johansen approach features multivariate autoregression and maximum likelihood estimation; this method is well suited to address the issue of endogeneity and causal mechanisms when the variables used in the model are non-stationary and cointegrated.
In addition, the cointegration test is used to find the long-run equilibrium relationships among the selected variables. Finally, the VEC model provides information on the short-run dynamic adjustment to changes in the variables within the model. This analysis will shed new light on the dynamic interrelationships between FDI inflows, economic growth and the environment, and contribute to the empirical literature on the FDI-environment nexus. In the next section, the theoretical and empirical modeling of the FDI-environment nexus is presented. This is followed by a description of the data used in the analysis and a discussion of unit root tests. The empirical results are then discussed, followed by some conclusions.

Theoretical Framework

In examining the dynamic relationship between FDI, GDP and SO2 emissions in China and India, we rely on the FDI-environmental policy model developed by Xing and Kolstad (2002). More specifically, in its simplest form, foreign direct investment (FDI) in the host country can be specified as follows:

(1)  $FDI = f_1(GDP, Z_1, R^*)$,

where GDP is the gross domestic product of the host country, which is used as a proxy for the strength of the economy; $Z_1$ is a vector of exogenous variables affecting FDI inflows, such as cost structures (i.e., labor costs) and differentials in rewards of factor services; and $R^*$ is the environmental regulatory laxity. The relationship between GDP and FDI is expected to be positive, implying that economic growth is the most important determinant of FDI inflow to the host country ($\partial f_1 / \partial GDP > 0$). The positive relationship between FDI and $R^*$ ($\partial f_1 / \partial R^* > 0$) indicates that lax environmental policy is more attractive to pollution-intensive FDI, thereby increasing polluting industries in the host country.

Similarly, pollution ($E$), such as SO2 emissions, in the host country can be specified as follows:

(2)  $E = f_2(GDP, Z_2, R^*)$,

where $Z_2$ is a vector of exogenous variables affecting pollution levels, such as energy consumption and prices. In general, the relationship between GDP and pollution emissions is expected to be positive, indicating that an increase in the scale of economic activity through income growth brings about a proportionate increase in emissions ($\partial f_2 / \partial GDP > 0$). Defining environmental quality as a normal good, however, it is further hypothesized that pollution emissions decrease as rising income passes beyond a turning point ($\partial f_2 / \partial GDP < 0$). Economists call this relationship the Environmental Kuznets Curve (EKC) (Grossman and Krueger 1991 and 1993). The relationship between pollution and $R^*$ is expected to be positive ($\partial f_2 / \partial R^* > 0$), implying that lenient environmental regulations result in an increase in environmental degradation.

Assuming that $f_2$ is invertible in $R^*$, equation (2) can be solved for $R^*$ as a function of the other variables as follows:

(3)  $R^* = f_2^{-1}(E, GDP, Z_2)$.

Finally, we substitute equation (3) into equation (1), which yields the following relationship:

(4)  $FDI = f(GDP, Z_1, Z_2, E)$.

The estimation of equation (4) is the basic approach of this study. It should be emphasized that the relationship between FDI and pollution emissions (or environmental regulatory laxity) in developing countries is ambiguous and uncertain. More specifically, if pollution-intensive foreign capital moves to developing countries with weaker regulations, then the inflow of FDI deteriorates environmental quality ($\partial FDI / \partial E > 0$). On the other hand, if developing countries rely on technology transfer through FDI from developed countries as a primary means of technology acquisition, the inflow of FDI tends to enforce environmental regulations via economic growth, thereby improving environmental quality ($\partial FDI / \partial E < 0$).
Specification of Time-Series Models

To estimate the long-run relationship among FDI, GDP and SO2 emissions, we use the maximum likelihood estimation procedure developed by Johansen (1988) and Johansen and Juselius (1992). More specifically, given a vector $Y_t$ of endogenous variables with up to $k$ lags, the system can be written as

(5)  $\Delta Y_t = \Gamma_1 \Delta Y_{t-1} + \cdots + \Gamma_{k-1} \Delta Y_{t-k+1} + \Pi Y_{t-1} + \mu + u_t$,

where the $\Gamma_i$ are matrices of short-run coefficients and $\Pi = \sum_{i=1}^{k} A_i - I$ is the matrix of long-run coefficients, where $I$ is the identity matrix; $\mu$ is a vector of constants; and $u_t$ is a vector of normally and independently distributed error terms, or white noise. If the coefficient matrix $\Pi$ has reduced rank, i.e., there are $r \le (n-1)$ cointegration vectors present, then $\Pi$ can be decomposed into a matrix of loading vectors, $\alpha$, and a matrix of cointegrating vectors, $\beta$, such that $\Pi = \alpha \beta'$, where $r$ is the number of cointegrating relations, $\alpha$ represents the speed of adjustment to equilibrium, and $\beta'$ is a matrix of long-run coefficients. For the three endogenous non-stationary variables in our analysis, for example, the term $\Pi Y_{t-1}$ in equation (5) represents up to two linearly independent cointegrating relations in the system. The number of cointegration vectors, the rank of $\Pi$, is determined by the likelihood ratio test (Johansen 1988).

If all variables in the vector stochastic process $Y_t$ are cointegrated, an error-correction representation captures the short-run dynamics while restricting the long-run behavior of the variables to converge to their cointegrating relationships (Engle and Granger 1987). This can be done by estimating an error-correction model in which the residuals from the equilibrium cointegrating regression are used as an error-correcting regressor. For this purpose, equation (5) can be reformulated as a short-run dynamic model as follows:

(6)  $\Delta Y_t = \sum_{i=1}^{k-1} \Gamma_i \Delta Y_{t-i} + \alpha \beta' Y_{t-1} + \mu + u_t$,

where $\beta' Y_{t-1}$ is a measure of the error, or deviation from equilibrium, obtained from the lagged residuals of the cointegrating vectors. Since the series are cointegrated, equation (6) incorporates both short- and long-run effects. That is, if the long-run equilibrium holds, then the term $\beta' Y_{t-1}$ is equal to zero. During periods of disequilibrium, on the other hand, this term is non-zero and measures the distance of the system from equilibrium during time $t$. Thus, an estimate of $\alpha$ provides information on the speed of adjustment, that is, how the variable $Y_t$ changes in response to disequilibrium.

Data

It is worth noting that, among the principal air pollutants, sulfur dioxide (SO2) and carbon dioxide (CO2) are the major measures of air pollution that have been widely used in empirical studies. Of these, SO2 represents a measure of local air pollution, whereas CO2 represents a global pollutant (externality), which individual countries are unable to regulate without international cooperation (Frankel and Rose 2005, He 2006). Given our individual country-specific approach, therefore, it is more appropriate to select SO2 emissions as a proxy for the measure of environmental quality in China and India.
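As an illustration of the estimation strategy just described (not the authors' original code; the variable names and the synthetic data are assumptions), the Johansen trace test and the VEC model in equations (5)-(6) can be run in Python with statsmodels along the following lines.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(0)

# Synthetic stand-in for the annual series: log FDI, log GDP, log SO2.
# A shared random-walk trend makes the three series cointegrated.
n = 40
trend = np.cumsum(rng.normal(size=n))
data = pd.DataFrame({
    "fdi": trend + rng.normal(scale=0.3, size=n),
    "gdp": 0.8 * trend + rng.normal(scale=0.3, size=n),
    "so2": 0.5 * trend + rng.normal(scale=0.3, size=n),
})

# Johansen trace test: det_order=0 adds a constant, k_ar_diff lagged differences.
jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", jres.lr1)          # compare with jres.cvt critical values

# VEC model with one cointegrating relation (rank chosen from the trace test).
vecm = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print("loading matrix alpha:\n", vecm.alpha)  # speed-of-adjustment coefficients
print("cointegrating vector beta:\n", vecm.beta)
```

The loading matrix corresponds to $\alpha$ in equation (6): negative, significant entries indicate adjustment back toward the long-run equilibrium after a shock.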
Testing for Unit Roots

When dealing with time-series data, the possibility of unit roots in a series raises issues about parameter inference and spurious regression (Wooldridge 2000). For example, OLS regression involving non-stationary series no longer provides valid interpretations of the standard statistics, such as t-statistics, F-statistics, and confidence intervals. To avoid this problem, non-stationary variables should be differenced to make them stationary. However, Engle and Granger (1987) show that, even in the case that all the variables in a model are non-stationary, it is possible for a linear combination of the integrated variables to be stationary. In this case, the variables are said to be cointegrated and the problem of spurious regression does not arise. As a result, the first requirement for cointegration analysis is that the selected variables must be non-stationary.

To determine the existence of a unit root in the series, we examine the integration order of the individual time series (FDI, GDP and SO2 emissions) for China and India using the Dickey-Fuller generalized least squares (DF-GLS) test (Elliot et al. 1996). This test optimizes the power of the conventional augmented Dickey-Fuller (ADF) test using a form of detrending. The DF-GLS test works well in small samples and has substantially improved power when an unknown mean or trend is present (Elliot et al. 1996). The results show that the levels of all the series are non-stationary, while the first differences are stationary (Table 2).

Johansen Cointegration Test

Before implementing the cointegration test, the important specification issue to be addressed is the determination of the lag length for the VAR model, because the Johansen procedure is quite sensitive to changes in the lag structure (Maddala and Kim 1998) (Table 3). More specifically, in the residual serial correlation and heteroskedasticity tests using the F-form of the Lagrange Multiplier (LM) procedure, the null hypotheses of no serial correlation and no heteroskedasticity cannot be rejected at the 5% significance level. In the residual normality test using the Doornik-Hansen method (Doornik and Hansen 1994), on the other hand, the null hypothesis of normality can be rejected for individual series and for the system for China at the 5% significance level. However, non-normality of the residuals does not bias the results of the cointegration estimation (Gonzalo 1994).

When determining the existence of a cointegration relationship, the cointegration vectors ($\beta_j$) estimated from equation (5) are examined; the first of the three eigenvectors is most highly correlated with the stationary part of the process $\Delta Y_t$ when corrected for the lagged values of the differences. As such, $\beta_1$ represents the cointegration vector determined by the cointegrated VAR model (Johansen 1988). After normalizing the coefficients on FDI, the long-run equilibrium relation ($\beta_1$) between the three variables in China and India can be represented in the reduced forms of equations (7) and (8). Equations (7) and (8) show that economic growth in China and India has a positive long-run relationship with FDI, indicating that economic growth tends to attract more FDI inflow. In addition, a positive long-run relationship between SO2 emissions and FDI in both countries implies that an increase in SO2 emissions (or a relaxation of environmental regulations) tends to lead to an increase in FDI inflow. This finding provides supportive evidence for the so-called pollution haven hypothesis; that is, developing countries such as China and India tend to utilize lenient environmental regulations in an effort to attract multinational corporations, particularly those engaged in highly polluting activities, from developed countries.

VEC Model

The VEC model is estimated to identify the short-run adjustment to the long-run steady state, as well as the short-run dynamics among FDI, GDP and SO2 emissions in China and India.
For this purpose, we estimate the short-run VAR model in equation (6), with the identified cointegration relationships in equations (7) and (8). We adopt a general-to-specific procedure to estimate the VEC model (Hendry 1995). In the case of China, for example, since FDI and SO2 emissions are found to be weakly exogenous to the system, the VEC model is first estimated conditional on the two variables. By eliminating all the insignificant variables based on an F-test, the parsimonious VEC (PVEC) model is then estimated using OLS (Harris and Sollis 2003). Likewise, the VEC model for India is estimated conditional on FDI. The number of lags used in the PVEC model is the same as that in the cointegration analysis. The multivariate diagnostic tests on the estimated model as a system show no serious problems with serial correlation, heteroskedasticity, or normality (Table 6). This suggests that the PVEC specifications do not violate any of the standard assumptions.

The results show that the error-correction terms ($EC_{t-1}$) for China and India are negative and significant at the 5% significance level (Table 6). More specifically, the negative coefficients on $EC_{t-1}$ ensure that the long-run equilibrium can be achieved. The absolute value of the coefficient on $EC_{t-1}$ indicates the speed of adjustment to equilibrium. As such, the results indicate that, following a shock to the Chinese and Indian economies, GDP and SO2 emissions tend to recover to their long-run equilibrium positions. However, the adjustment toward equilibrium is not instantaneous.

Notes:
2. Since the environmental regulatory laxity is not directly observed, Xing and Kolstad (2002) solve this latent variable problem by using pollutant emissions to infer laxity. For example, SO2 emissions can be used as a yardstick to characterize the change in environmental regulation laxity; that is, relaxation (enforcement) of environmental regulation leads to an increase (decrease) in SO2 emissions. Accordingly, pollution emissions ($E$) and environmental regulatory laxity ($R^*$) are interchangeable in this model.
3. The sample size could be another issue of concern for the Johansen procedure, because finite-sample analyses can bias the cointegration test toward finding the long-run relationship either too often or too infrequently. In fact, the number of observations used in this study seems a bit small; our findings should thus be viewed with caution. However, Hakkio and Rush (1991) note: "Our Monte Carlo studies show that the power of a cointegration test depends more on the span of the data rather than on the number of observations. Furthermore, increasing the number of observations, particularly by using monthly or quarterly data, does not add any robustness to the results in tests of cointegration." Following these authors, the annual data used in this study (23 years) can be considered long enough to reflect the long-run relationship between FDI, GDP and SO2 emissions, which should somewhat mitigate our concern with the relatively small sample size.
4. Doornik and Hendry (2001) note: "The sequence of trace tests leads to a consistent test procedure, but no such result is available for the maximum eigenvalue test. Therefore current practice is to only consider the former." Following these authors, we only depend on the former to test the null hypothesis.
** and * indicate rejection of null hypothesis of non-stationarity at the 5% and 10% significance levels, respectively. Serial correlation of the residuals of individual equations and a whole system was examined using the F -form of the Lagrange-Multiplier (LM) test, which is valid for systems with lagged independent variables. Heteroskedasticity was tested using the F -form of the LM test. Normality of the residuals was tested with the Doornik-Hansen test (Doornik and Hendry 1994). Note: ** indicates rejection of the null hypothesis at the 5% significance level. Parentheses are p -values. The trace test leads to a consistent test procedure, but the maximum eigenvalue test does not (Doornik and Hendry 2001, p. 175). For this reason, we only report the former to test the null hypotheses. 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 Year
EFFICACY OF HYALURONIC ACID AFTER KNEE ARTHROSCOPY: A SYSTEMATIC REVIEW AND META-ANALYSIS

LAY ABSTRACT
Hyaluronic acid might be beneficial for patients after knee arthroscopy. However, the results remain controversial. A systematic review and meta-analysis was conducted to explore the efficacy of hyaluronic acid following knee arthroscopy. Randomized controlled trials assessing the effect of hyaluronic acid in knee arthroscopy were included. Compared with control intervention after knee arthroscopy, hyaluronic acid treatment was found to significantly improve Western Ontario and McMaster Universities Osteoarthritis Index scores and decrease pain on motion, but had no substantial influence on pain scores at 2, 6 and 12 weeks after knee arthroscopy.

Objective: To investigate the effect of hyaluronic acid on functional recovery and pain control in patients following knee arthroscopy.
Design: A systematic review and meta-analysis was conducted to explore the efficacy of hyaluronic acid following knee arthroscopy.
Subjects and methods: Randomized controlled trials (RCTs) assessing the effect of hyaluronic acid in knee arthroscopy were included. A meta-analysis was performed using the random-effects model.
Results: Six RCTs involving 310 patients were included in the meta-analysis. Overall, compared with control intervention following knee arthroscopy, hyaluronic acid treatment was found to significantly increase Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) scores (mean difference 11.43; 95% confidence interval (95% CI) 1.39-21.47; p = 0.03), but had no impact on pain scores at 2 weeks (mean difference -0.16; 95% CI -0.81 to 0.49; p = 0.63), at 6 weeks (mean difference 0.01; 95% CI -0.86 to 0.89; p = 0.98), or at 12 weeks (mean difference -0.51; 95% CI -1.56 to 0.53; p = 0.34). In addition, pain on motion was significantly reduced after knee arthroscopy (risk ratio (RR) 0.22; 95% CI 0.06-0.79; p = 0.02).
Conclusion: Compared with control intervention after knee arthroscopy, hyaluronic acid treatment was found to significantly improve WOMAC scores and decrease pain on motion, but had no substantial influence on pain scores at 2, 6 and 12 weeks after knee arthroscopy.
Key words: hyaluronic acid; knee arthroscopy; WOMAC score; viscosupplementation; meta-analysis.

In the short-term postoperative period after knee arthroscopy, patients frequently experience pain, swelling and impaired function (1-3). Knee arthroscopy is widely used for anterior cruciate ligament reconstruction, meniscus tears, and arthroscopic debridement. Currently, several analgesics are used for pain control following arthroscopic knee surgery, resulting in some adverse events (4, 5). During knee arthroscopy, the normal hyperviscous synovial fluid is replaced by irrigation fluid (normal saline), which is further replaced by new, naturally formed synovial fluid after the surgery. The irrigation fluid not only facilitates the removal of harmful debris, but also dilutes the hyaluronic acid layer covering joint tissues (e.g. cartilage). Irrigation fluids have been reported to have a negative effect on the metabolism and structure of the joint cartilage (6-9).

Hyaluronic acid, a complex glycosaminoglycan, is an important component of synovial fluid and cartilage matrix, which lubricates the joint and allows smooth and pain-free joint motion (10-12). Hyaluronic acid can promote homeostasis of the joint environment and serve as a semipermeable barrier to protect the cartilage from the free movement of lytic enzymes, inflammation mediators, and inflammatory cells in the synovial fluid (13-15). In addition, hyaluronic acid has been reported to relieve joint pain and prevent the progression of cartilage degeneration in osteoarthritis (16). Exogenous hyaluronic acid injected into the arthritic joint space has been shown to improve the qualitative and quantitative properties of endogenous hyaluronic acid and therefore improve joint lubrication (17). In a randomized controlled trial (RCT), intra-articular hyaluronic acid was reported to improve pain control and swelling after arthroscopic anterior cruciate ligament reconstruction (18).
In contrast to this promising finding, however, some RCTs have shown that hyaluronic acid has no influence on Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) scores or on pain scores at 2 and 6 weeks following knee arthroscopy (19-21). Considering these inconsistent effects, we therefore conducted a systematic review and meta-analysis of RCTs to evaluate the effectiveness of hyaluronic acid after knee arthroscopy.

MATERIAL AND METHODS

This systematic review and meta-analysis was conducted according to the guidance of the Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA) statement (22) and the Cochrane Handbook for Systematic Reviews of Interventions (23). All analyses were based on previously published studies; thus, ethical approval and patient consent were not required.

Literature search and selection criteria

PubMed, Embase, Web of Science, EBSCO, and the Cochrane Library were systematically searched from inception to September 2017, with the following key words: hyaluronic acid, and knee arthroscopy. To include additional eligible studies, the reference lists of retrieved studies and relevant reviews were also hand-searched, and this process was repeated until no further articles were identified. Conference abstracts meeting the inclusion criteria were also included.

The inclusion criteria were: study population, patients undergoing knee arthroscopy; intervention, hyaluronic acid injection; control intervention, normal saline or no injection; outcome measure, WOMAC scores; and study design, RCT. Patients receiving local anaesthetic in the control group were excluded.

Data extraction and outcome measures

The following information was extracted from the included RCTs: first author, publication year, sample size, baseline characteristics of patients, hyaluronic acid, control, study design, WOMAC scores, pain scores at 2, 6 and 12 weeks, and pain on motion. The authors would be contacted to acquire the data when necessary.

The primary outcome was the WOMAC score. Secondary outcomes included pain scores at 2, 6 and 12 weeks, and pain on motion.

Quality assessment in individual studies

The Jadad scale was used to evaluate the methodological quality of each RCT included in this meta-analysis (24). This scale consists of 3 evaluation elements: randomization (0-2 points), blinding (0-2 points), and dropouts and withdrawals (0-1 points). One point is allocated to an element if it is mentioned in the article, and another point is given if the methods of randomization and/or blinding are described appropriately and in detail. If the methods of randomization and/or blinding are inappropriate, or dropouts and withdrawals are not recorded, one point is deducted. The Jadad scale score varies from 0 to 5 points; an article with a Jadad score ≤ 2 is considered to be of low quality, while a study with a Jadad score ≥ 3 is thought to be of high quality (25). Two investigators independently assessed the quality of the included studies. Any discrepancy was resolved by consensus.
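As a concrete illustration of the scoring rules just described, a minimal sketch follows; the parameter names are mine, not the scale authors', and the encoding of "inappropriate" is an assumption.

```python
def jadad_score(randomized, randomization_appropriate,
                blinded, blinding_appropriate,
                dropouts_reported):
    """Compute a Jadad score (0-5) from the three elements described above:
    randomization (0-2), blinding (0-2), dropouts/withdrawals (0-1).
    `*_appropriate` may be True (adds a point), False (not described),
    or the string "inappropriate" (deducts a point)."""
    score = 0
    for mentioned, quality in ((randomized, randomization_appropriate),
                               (blinded, blinding_appropriate)):
        if mentioned:
            score += 1
            if quality is True:
                score += 1
            elif quality == "inappropriate":
                score -= 1
    if dropouts_reported:
        score += 1
    return max(score, 0)

# Appropriate randomization (2) + blinding mentioned only (1) + withdrawals (1):
print(jadad_score(True, True, True, False, True))  # -> 4, i.e. high quality
```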
Statistical analysis
Mean differences (MDs) with 95% confidence intervals (95% CIs) for continuous outcomes (WOMAC scores; pain scores at 2, 6 and 12 weeks) and risk ratios (RRs) with 95% CIs for dichotomous outcomes (pain on motion) were used to estimate the pooled effects. An I² value greater than 50% indicates significant heterogeneity. The random-effects model with DerSimonian and Laird weights was used in all analyses. Sensitivity analysis was performed, when necessary, to detect the influence of a single study on the overall estimate by omitting one study in turn. Owing to the limited number (< 10) of included studies, publication bias was not assessed. p < 0.05 in 2-tailed tests was considered statistically significant. All statistical analyses were performed with Review Manager Version 5.3 (The Cochrane Collaboration, Software Update, Oxford, UK).

RESULTS
Literature search, study characteristics and quality assessment
The flow chart of the selection process and detailed identification is presented in Fig. 1. A total of 678 publications were identified through the initial search of the databases. Ultimately, 6 RCTs were included in the meta-analysis (18-21, 26, 27). The baseline characteristics of the 6 eligible RCTs are summarized in Table I. The 6 studies were published between 2007 and 2012, and sample sizes ranged from 29 to 80, with a total of 310 patients. Baseline characteristics were similar between the hyaluronic acid group and the control group. One RCT reported knee arthroscopy for anterior cruciate ligament reconstruction (18), 2 RCTs reported knee arthroscopy for meniscus tear (26, 27), 2 RCTs reported arthroscopic debridement for knee osteoarthritis (19, 21), and 1 RCT reported arthroscopic knee joint lavage, alone or in combination with cartilage debridement (20). Among the 6 RCTs, 2 studies reported WOMAC scores (19, 21), 2 reported pain scores at 2 weeks (18, 20), 2 reported pain scores at 6 weeks (18, 21), 3 reported pain scores at 12 weeks (18, 21, 27), and 2 reported pain on motion (20, 27). Jadad scores of the 6 included studies varied from 3 to 5, and all 6 studies were considered to be of high quality according to the quality assessment.

Primary outcome: WOMAC scores
This outcome was analysed with a random-effects model. The pooled estimate of the 2 included RCTs suggested that, compared with the control group after knee arthroscopy, hyaluronic acid injection was associated with significantly increased WOMAC scores (mean difference = 11.43; 95% CI = 1.39 to 21.47; p = 0.03), with no heterogeneity among the studies (I² = 0%, heterogeneity p = 0.73) (Fig. 2).

Sensitivity analysis
No heterogeneity was observed among the included studies for the WOMAC scores. Thus, we did not perform a sensitivity analysis by omitting one study in turn to detect the source of heterogeneity.

DISCUSSION
Pain management allows early mobilization and rehabilitation following knee arthroscopy, and mainly includes oral analgesics, femoral nerve block, and intra-articular injections (28)(29)(30). Continuous femoral nerve block was shown to alleviate pain within 48 h, but had no influence on subsequent pain management and knee function (28, 31, 32). Intra-articular fentanyl/bupivacaine achieved comparable efficacy to femoral nerve block in the first 24 h (33). Intra-articular injection of tenoxicam could reduce analgesic consumption in the first 3-6 h (34).
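As an aside on the statistical analysis described earlier, the random-effects pooling with DerSimonian and Laird weights can be sketched as follows. This is an illustrative implementation only; the two study effects and variances in the example are made-up placeholders, not data extracted from the included trials (the review itself used Review Manager 5.3). Risk ratios would be pooled the same way on the log scale.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study-level effects (e.g. mean differences) with DerSimonian-Laird
    random-effects weights; returns pooled effect, 95% CI, tau^2 and I^2."""
    k = len(effects)
    w = [1.0 / v for v in variances]                          # fixed-effect weights
    fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fe) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = k - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                             # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]            # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0       # heterogeneity statistic
    return pooled, ci, tau2, i2

# Illustrative numbers only (two hypothetical WOMAC studies), not the trial data.
print(dersimonian_laird(effects=[10.0, 13.0], variances=[40.0, 55.0]))
```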
Exogenous hyaluronic acid has been reported to stimulate de novo synthesis of hyaluronic acid and to inhibit the release of arachidonic acid and interleukin-1α-induced prostaglandin E2 synthesis, an anti-inflammatory action relevant to pain control (35). One included RCT reported that hyaluronic acid treatment was capable of substantially alleviating pain and symptoms within 2 days after arthroscopic anterior cruciate ligament reconstruction (18). Another RCT also reported statistically significant pain reduction one week postoperatively after arthroscopic surgery (26). In our meta-analysis, hyaluronic acid significantly reduced pain on motion after knee arthroscopy, but had no influence on pain control at 2, 6 and 12 weeks after the arthroscopic surgery. These results support the efficacy of hyaluronic acid treatment for pain control one week postoperatively, when the inflammatory response after surgery is most pronounced. Hyaluronic acid has been reported to result in a more rapid recovery from arthroscopic surgery, with less pain, less effusion, and a lower intake of analgesics (26). One RCT, involving 66 patients with various degrees of chondral damage, showed that post-arthroscopic instillation of a hyaluronic acid-based synovial fluid substitute into the joint benefited long-term stabilization of the treatment outcome 2 years after surgery (20). Another multicentre, prospective, open study showed that hyaluronic acid could provide effective pain relief and improve stiffness and physical function at 4-12 weeks after arthroscopic meniscectomy in patients with knee osteoarthritis (36). The current meta-analysis also indicated that hyaluronic acid was associated with significantly improved physical function, as evidenced by the improved WOMAC scores. The incidence of postoperative swelling was reported to be significantly reduced after hyaluronic acid injection following knee arthroscopy (27).

Several study limitations should be taken into account. Firstly, our analysis was based on only 6 RCTs, all of which had relatively small sample sizes (n < 100); overestimation of the treatment effect is more likely in smaller trials than in larger samples. Secondly, the detailed methods of knee arthroscopy and the timing and volume of hyaluronate varied across the included studies, and these factors may have influenced the pooled results. Next, the duration and follow-up time of hyaluronic acid treatment varied from 2 weeks to 2 years. Finally, it remains necessary to compare the therapeutic effects of hyaluronic acid with those of femoral nerve block, intra-articular opioids and anti-inflammatory drugs.

Conclusion
Hyaluronic acid treatment showed important abilities to reduce pain on motion in the short term and to improve physical function after knee arthroscopy.

The authors have no conflicts of interest to declare.

Fig. 1. Flow diagram of study searching and selection process.
Fig. 2. Forest plot for the meta-analysis of Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) scores.
Fig. 4. Forest plot for the meta-analysis of pain on motion.
Table I. Characteristics of included studies.
2018-10-22T17:25:40.571Z
2018-11-07T00:00:00.000
{ "year": 2018, "sha1": "a8eea4ae501bb5bd575cd92eab6ab312bb1e1845", "oa_license": "CCBYNC", "oa_url": "https://www.medicaljournals.se/jrm/content_files/download.php?doi=10.2340/16501977-2366", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "cc92191f9e86a1ee74653be928fe30b943191f90", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
64684840
pes2o/s2orc
v3-fos-license
Specific aspects of evaluation of control systems similarity
The paper discusses the problem of estimating the similarity of two or more control systems. In accordance with the statements made earlier, such control systems should be characterized by strict or approximate equality of the values of the targeted indicators (criteria) of control effectiveness. Two main variants of procedures for estimating control systems similarity are proposed. The first is based on the direct evaluation of the targeted indicators of control performance; the second on the use of so-called similarity relations, which describe the conditions that hold for such systems. The procedures for evaluating control systems similarity with the help of special similarity relations are associated with considerably lower expenses. However, they have been derived for a relatively narrow class of automatic control systems and are basically special cases of similarity, which requires further research in this direction.

Introduction
One of the primary and important tasks in developing the traditional theory of similarity [1] for control systems is the problem of estimating the similarity of two or more systems. Its solution is based on the axiomatics of similarity of control systems, stated in the form of Statements, according to which such control systems should be characterized by strict or approximate equality of the values of the targeted indicators (criteria) of control effectiveness. The formal expression of this statement for the j-th and l-th control systems, relation (1), is given in [2]; see also [3,4]. We note here that preliminary studies of the dynamic properties and operating conditions of full-scale control objects and their operating physical models [5,6] have made it possible to conclude that almost all of them are subject to the influence of uncontrolled disturbances.

Research methods
Two variants of procedures for evaluating control systems similarity can be distinguished. The first is the direct use of relation (1), the practical realization of which requires reliable estimation and, in the case of a non-stationary control object, continuous or periodic tracking of the values of the target indicators of control system efficiency and their subsequent comparison with the corresponding admissible value q_jl^n appearing in relation (1). It is also possible to estimate control systems similarity by the method of correlated processes [7], by calculating and subsequently analysing the correlation moments between the values of the performance indicators of the two control systems. The second variant of the procedure for estimating control systems similarity is based on the use of so-called similarity relations [5], which describe the conditions that are valid for such systems. They represent a linear or nonlinear combination of known characteristics of the properties of the actions, objects and control parts of the systems, such as statistical and fractal characteristics of time series, and parameters of the dynamic characteristics of the channels transforming these actions in the objects and in the control algorithms. Similarity relations can be obtained, in particular, by analytical methods or by using numerical simulation and search optimization methods. The statement of the problem, the solution scheme, and the conditions for the effective application of these methods are given in [5].
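To make the first variant concrete, the sketch below compares estimated values of a target performance indicator for two control systems and declares them similar when the values agree within a tolerance, in the spirit of relation (1). The function name, the tolerance, and the sample indicator values are illustrative assumptions, not quantities taken from [2].

```python
import statistics

def similar_by_performance(q_j, q_l, tol=0.05):
    """First variant of the similarity check: two control systems are treated
    as similar when their target performance indicators, estimated from
    observed data, agree within a relative tolerance (an approximate form of
    relation (1))."""
    mean_j, mean_l = statistics.fmean(q_j), statistics.fmean(q_l)
    rel_diff = abs(mean_j - mean_l) / max(abs(mean_j), abs(mean_l), 1e-12)
    return rel_diff <= tol, rel_diff

# Illustrative indicator samples (e.g. a control-quality criterion evaluated
# over repeated runs of each system).
ok, diff = similar_by_performance([0.91, 0.88, 0.90], [0.89, 0.92, 0.90])
print(ok, round(diff, 4))
```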
Application of methods
The values obtained as a result of the transformation in relation (3) serve as inputs for the similarity assessment procedures classified below.

Classification of methods for similarity evaluation
The results of generalizing experience in the evaluation and study of control systems similarity [1,2,4,5,7] have made it possible to form a finite set of methods for estimating the similarity of control systems (Figure 1); for the first variant, the differences between these methods are determined by the possible ways of estimating the values of the performance indicators.

The first variant of the procedure for assessing control system similarity, based on the direct use of the targeted indicators of their performance through relation (1), is more universal from the point of view of practical application. It can be used to estimate the similarity of almost any control systems having different structures and functioning in different conditions, provided the values of the target indicators are evaluated on a finite time interval using full-scale data, model data, or combined full-scale and model data. However, in order to obtain adequate estimation results, it is necessary to have many data arrays containing a large volume of reliable data, the computation of which is very time-consuming. In addition, obtaining reliable data from existing control systems is associated with considerable difficulties and cannot always be realized.

The calculation (analytical) methods, in turn, require knowledge of mathematical models of the channels converting the input actions, and of the actions themselves, including uncontrolled ones, with fairly strict limitations on their structures and parameter ranges. A solution of the integral equation with a subintegral function containing an irrationality in the denominator can be obtained only in very rare cases [3], which imposes substantial limitations on the area of effective application of the calculation method, associated mainly with the structure and the range of variation of the parameters of the models of the conversion channels of the control objects, their input actions, control algorithms, etc. These restrictions are very rarely satisfied in practice. The area of application of the calculation methods can be somewhat extended in cases where the numerical solution of the integral equations is associated with a lower cost than the realization of the modelling methods for control systems.

The second variant of the procedure for estimating control systems similarity, based on special similarity relations, requires considerably less computing time and cost, and from this point of view is preferable. However, it, like the analytical methods, is essentially limited in practical application: such relations have been obtained only for a small class of control systems. In particular, they cover positioning control systems with typical laws of regulation, for which the similarity relations and the scope of their application are given in [5].

Conclusions
The procedures for estimating control systems similarity based directly on relation (1) are applicable to virtually any control systems of the same or similar structures. However, they require significant cost and time for reliable determination of the targeted indicators of their performance efficiency. The procedures for evaluating control systems similarity with the help of special similarity relations are associated with considerably lower expenses.
However, they are designed for a relatively narrow class of automatic control systems which requires further research in this direction.
2019-02-17T14:16:11.322Z
2017-09-01T00:00:00.000
{ "year": 2017, "sha1": "5a8c012943d96e4a99f6a7623b62586ca8bd0093", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/84/1/012028", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "b87d3c99e3bbbc135da2199c1abb09e3a4ee2aa2", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
153862581
pes2o/s2orc
v3-fos-license
Political Failures and Intergovernmental Competition In normative public economics, intergovernmental competition is usually viewed as harmful. Although empirical support for this position does not abound, market integration has intensified competition among developed countries. In this paper we argue that when assessing welfare effects of intergovernmental competition for various forms of political failures (the public choice critique), the outcome is ambiguous and competition can be welfare improving. Introduction What is the role of competition between governments?If competition is the fundamental force of efficient economic performance in the private sector, why should it be different for the public sector?Why cannot the same disciplining effect of competition be applied to the public sector as well?In the private sector competition will promote efficiency because firms which best satisfy consumers' preferences will survive and prosper, while others will lose customers and fail.Extending this argument to the public sector, competition among governments and jurisdictions should induce them to best serve the will of their residents.If they fail to do so, residents will vote out their incumbent or they can leave for other jurisdictions, which offer a better deal. The purpose of this paper is to show that if the normative public economics view of a harmful tax competition and a risk of a race to the bottom has some merit, it also needs to be seriously qualified.Indeed a positive role for intergovernmental competition in general and fiscal competition in particular can be found.There are two main ways.First, the role for intergovernmental competition can be compared to an auction mechanism to get resources allocated to their best possible uses.Another possibility is that there is an agency problem in government which tends to make the public sector inefficient and possibly too large.In this paper we shall concentrate on this agency problem to show that intergovernmental competition can be welfare enhancing.This is in stark contrast with EU stance on intergovernmental competition, which perceives it purely as messing up incentives with very damaging consequences for welfare. It should be stressed at the outset that the purpose is to present a "public choice" perspective on the topic of intergovernmental competition in a manner that is provocative to stimulate debate even if it is not found persuasive.The intention is to temper normative public economics analysis with some public choice perspectives.That does not mean that we claim the public choice approach to be the correct one.Normative and political approaches to public policy issues are complementary.The normative approach evaluates the consequences of competition among benevolent governments.It is mainly interested in the problem of market failures (and redistribution).It is a useful benchmark to start with, but policy implementation also requires to analyse the possibility of government failure.The "public choice" perspective is to allow for the possibility that governments may be imperfect.We will consider three forms of government imperfections: nonbenevolence, noncompetence, and noncommitment. 
The paper is organized as follows.First we present the disciplinary benefit of competition in aligning incentives of the politicians to those of the electorate.Second, we study how competition can facilitate the selection of competent and noncompetent governments.Third, we discuss how competition can usefully help the government to credibly commit to some desirable course of actions.Lastly, the main results from the fiscal competition theory are summarized and evaluated in the concluding section. Competition and Discipline Politicians may pursue different objectives.At times, they may be public-spirited and dedicate themselves fully to furthering public interest.But they may also pursue their own ideas, even if these differ from those of their constituents.Some may want to derive private gains while in office or actively seek perks of office.Some may extend clientelistic favors to their families and friends.But the most important way in which they can act against the best interests of their constituents is by choosing policies that advance their own interests or those special groups to which they are beholden. A government is accountable if voters can discern whether it is acting in their interest and sanction them appropriately; if it is not, so incumbents anticipate that they will have to render accounts for their past actions.The problem is then to confront politicians with a tradeoff between diverting rents and losing office or doing what voters want and getting reelected.In this view, elections can be seen as an accountability mechanism for controlling and sorting good from bad incumbents.By "good incumbent," we mean someone who is honest, competent and not easily bought off by special interests. The standard view of how electoral accountability works is that voters set some standard of performance to evaluate governments and they vote out the incumbent unless these criteria are fulfilled.However elections do not work well in controlling and sorting politicians.There are severe problems in monitoring and evaluating the incumbent's behavior in order to make informed decisions about whether to reelect or not.Voters face a formidable agency problem because they are inevitably poorly informed about politicians' behavior and type.Moreover, the electoral sanction (pass or fail) is such a crude instrument that it can hardly induce the politicians to do what the public wants. In this perspective, it might be reasonable to try and organize competition among politicians in order to control them.In this respect, the Brennan and Buchanan [1] view is that decentralization is an effective mechanism to control governments' expansive tendencies.The basic argument is that competition among different decentralized governments can exercise a disciplinary force and break the monopoly power of a large central government.Comparing performance in office among different incumbents would help in sorting good types from bad types as well as controlling the quality of their decisions.Hence one votes against an incumbent if his performance is bad relative to others, in order to induce each incumbent to behave in the public interest (see e.g.[2]). 
To see the logic of the argument, consider a standard political agency example (see Persson and Tabellini [3, Chapter 4] for a review of political agency models; see also Besley and Smart [4] for a related model with rent diversion and public good provision). Suppose that the circumstances under which politicians make decisions can be good (state a) or bad (state b). Governments decide to adopt policy A, which is better for their constituents in the good state a, or policy B, which is better in the bad state b. Suppose that being reelected is worth V to the incumbent, while diverting rent while in office yields r, with r < V. To induce politicians to act in their interest under this information structure, voters must set their reelection rule. If voters set the standard that the incumbent must meet in order to be reelected too high (such as committing to vote for the incumbent only if the welfare level is at least 3), then the incumbent cannot be reelected, whatever he does, if conditions turn out to be bad (state b). Consequently, the incumbent has an incentive to obtain the rent r and leave office. Alternatively, if the voters set the standard for reelection lower, say at 1, the incumbent will be able to divert rent when conditions happen to be good (state a) and still be reelected, by giving voters less than what they could obtain. Then voters are in a quandary, because whatever they decide to do, the politicians will sometimes escape from their control and divert rent.

Suppose now that the electorate can compare the outcome of its incumbent with other incumbents (in different constituencies) facing exactly the same circumstances. Then, from the observation of outcomes elsewhere, voters can potentially infer whether the prevailing conditions are good or bad and thereby get the most they can under either set of conditions. The information will be revealed if there is at least one government that chooses a different policy from that of the others. When conditions are good, vote for the incumbent if the outcome is at least 3; when conditions are bad, vote for the incumbent if the outcome is at least 1; otherwise vote the incumbent out. Hence, a government facing good conditions a knows that by choosing the appropriate policy A it will be reelected for sure and get V, which is more than the rent r it could get by choosing B and being voted out. In turn, a government facing bad conditions b knows that by choosing B it will be reelected and get V, which is better than what it would get by adopting the wrong policy A to obtain the rent r with no chance of being reelected. Therefore, by comparing the performance of their incumbent with other incumbents facing similar circumstances, voters can gain increased control over their politicians and deduce what is attributable to circumstances as opposed to government actions (Besley and Case [5] find empirical evidence for this kind of yardstick competition in tax setting across US states in gubernatorial elections).
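The reelection logic of this example can be made concrete with a small sketch. The payoff numbers below (welfare 3 in the good state, 1 in the bad state, rent r = 2 and office value V = 4) are illustrative stand-ins consistent with the narrative above, not the entries of Table 1.

```python
# Toy illustration of the yardstick reelection rule in the example above.
R_GOOD, R_BAD = 3, 1      # best attainable voter welfare in states a and b
RENT, OFFICE = 2, 4       # rent from shirking vs. value of reelection (r < V)

def payoffs(state, policy, standard):
    """Return (voter_welfare, incumbent_payoff) given the reelection standard."""
    right = (state == "a" and policy == "A") or (state == "b" and policy == "B")
    welfare = (R_GOOD if state == "a" else R_BAD) if right else 0
    reelected = welfare >= standard[state]
    return welfare, (OFFICE if reelected else 0) + (0 if right else RENT)

# With yardstick comparison, voters infer the state from outcomes elsewhere and
# set the standard to the best attainable level in that state, so the right
# policy dominates shirking in both states.
yardstick = {"a": R_GOOD, "b": R_BAD}
for state in ("a", "b"):
    for policy in ("A", "B"):
        print(state, policy, payoffs(state, policy, yardstick))
```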
Competition and Screening The original insight that tax competition leads to inefficiently low taxes and public good provision was obtained in models with benevolent decision makers (see Hindriks and Myles [6,Chapter 18] for a review of fiscal competition models).An alternative approach is to consider public officials that seek in their decision making to maximize their own welfare and not necessarily that of their constituencies.From this perspective, tax competition may help discipline nonbenevolent governments.For instance if we view governments as "leviathan" mainly concerned with maximizing the size of the public sector, then tax competition may improve welfare by limiting taxation possibilities and thereby cutting down the size of government that would be otherwise excessive.This argument suggests that the public sector should be smaller when taxes and expenditures are more decentralized.The evidence on this is, however, mixed. An analogous argument applies to governments with some degree of benevolence, possibly due to electoral concerns.When political agency problems are introduced, this inefficiency of competition among governments is no longer so clear.Intergovernmental competition makes the costs of public programs more visible, as well as their benefits in ways that make public officials accountable for their decisions.Stated briefly, competition may induce government officials to reduce waste and thus reduce the effective price of public goods (see [4]). In this section we concentrate on a different agency problem which is the competence issue.We shall show that fiscal competition can help to discipline and screen out government competence.Competence is defined as the capacity to transform tax in public good provision.The model is adapted from Hindriks and Lockwood [7,8] to allow for the fact that fiscal competition raises the marginal cost of public funds.The setup and results are also different.In Hindriks and Lockwood [7,8] the purpose is to compare centralization with decentralization in terms of changing the number of regions under the jurisdiction of the policymaker (with the scope for yardstick competition under decentralization).In this paper, the purpose is to analyse the impact of fiscal competition on welfare assuming decentralized fiscal policy.We will not consider centralization.Also in Hindriks and Lockwood [7,8], the agency problem is about benevolence whereas the agency problem we consider in this paper is about competence.This makes a difference in the results because selection and incentive effects are conflicting in benevolence models whereas they reinforce each other in competence models.To put it simply, in a benevolence political agency model, bad incentives for the incumbent make selection easier at election time. The Model. There are two time periods.In each time period, a politician makes a decisions about taxation and public good provision.Moreover, at the end of period 1, there is an election in which voters choose between the incumbent and a challenger, having observed only first-period fiscal policy.Consider the situation in which policy makers know the cost of public services better than does the taxpayer. Suppose the unit cost is either high θ H or low θ L (with θ L < θ H ). 
A politician is either "good" with probability π or "bad" with probability 1 − π. A "good" politician is always low cost, and a "bad" politician is high cost with probability 0 < q < 1. Thus the good politician is competent and the bad one is incompetent. However, the incompetent type can also benefit from favourable economic circumstances (with probability 1 − q) and produce at low cost. The gross benefit from a level G of public services is B(G), an increasing and concave function. The per-period welfare of the typical taxpayer is W = B(G) − μT, where μ ≥ 1 is the marginal cost of public funds (MCF).

The MCF is the aggregate efficiency loss caused in raising an additional dollar of tax revenue. With tax base mobility, the MCF is biased upward by the taxing jurisdiction because it does not take into account the positive fiscal externality that its taxes create for other jurisdictions. To put it simply, suppose capital taxation is used to finance public good provision. The public good supply G is determined by the budget constraint G = TK(T), where K(T) is the stock of capital as a function of the tax rate. The marginal cost of public funds is MC − TΔK(T), where MC is the marginal cost without capital mobility and ΔK(T) < 0 is the fiscally induced outflow of capital (representing a positive fiscal externality for other jurisdictions). This fiscal externality is increasing with the mobility of the tax base (i.e., the tax base elasticity). Therefore the intensification of tax competition (defined as increasing mobility of the tax base) is represented by an increase in μ, a basic implication of the tax competition theory.

Both voters and politicians have the same discount factor, 0 < δ < 1. With full information, taxpayers demand the level of public services G_θ satisfying B′(G_θ) = μθ and pay the government T_θ = θG_θ. Depending on the announced cost, taxpayers demand different amounts of public services, with G_H < G_L.

All politicians are honest: they care about the welfare of the voters and they do not want to divert rent (see Besley and Smart [4] for a similar analysis with dishonest politicians; interestingly, they find that competition is welfare improving when there is a predominance of "good" (honest) politicians, whereas in our model we show the opposite and more natural result that, when politicians differ in competence, competition improves welfare when there is a predominance of "bad" (incompetent) politicians). The lack of congruence between politicians and voters comes from the private benefit of holding office, R > 0. This benefit from office creates a potential conflict with the voters' interest to weed out bad politicians. There is also some lack of transparency in the government's tax and spending decisions, in the sense that the incumbent can "delay" the revelation of the true cost. This is made possible by borrowing freely on the international capital market at an interest rate equal to the discount rate δ < 1. In the first period, the incumbent can freely borrow b on the international market, so in the second period he must pay back b/δ. This borrowing is not observable by voters.
3.2. Equilibrium. In the first period, the politician observes the unit cost θ ∈ {θ_L, θ_H} and then chooses a level of provision conditional on cost. Voters observe the taxing and spending decisions prior to the election, make an inference about their incumbent's type based on observed performance, compare it to their prior beliefs about the type of the challenger, and reelect their incumbent if he is at least as likely to be "good". The incumbent gets the rent R if he is reelected.

Proceeding backwards, in the second period the incumbent simply sets (G_k, T_k + b) if θ = θ_k. So, given borrowing b, the second-period payoff to voters from a good incumbent is W_L − b and the second-period expected payoff from a bad incumbent is EW − b, where EW = qW_H + (1 − q)W_L. Since W_H < W_L, the bad type produces lower welfare than the good type (EW < W_L), so voters prefer competent politicians and will not reelect their incumbent if they believe he is likely to be incompetent. In the first period, the good incumbent sets (G_L, T_L). The bad incumbent sets (G_L, T_L) if cost is low; if cost is high, he can either reveal the high cost by setting (G_H, T_H) (separating) or mimic the low-cost policy (G_L, T_L) by borrowing (pooling). So, if the probability of pooling is λ, voter beliefs that the incumbent is good after observing (G_L, T_L) are Pr(good | T_L) = π / [π + (1 − π)((1 − q) + qλ)]. So, whatever λ, Pr(good | T_L) ≥ π, so pooling always implies reelection and separating always implies no reelection (given this, it is clear that good incumbents behave nonstrategically by choosing G_L, the optimal supply when cost is low: if the voters observe (G_L, T_L), then whatever strategy the "bad" incumbent follows, rational voters must conclude that the probability that the incumbent is "good" is at least π).

If the incumbent is bad, his payoff to separating when cost is high reflects that he will be replaced by a challenger who is competent with probability π; his payoff to pooling reflects that he will win the election and that the debt incurred in order to pool must be repaid. Comparing the two payoffs, the bad incumbent pools whenever the benefit of reelection outweighs the cost of pooling, which combines the welfare loss of the distortion in public good supply and S(μ) = W_L − EW > 0, the selection cost of reelecting the bad incumbent instead of a good challenger. It can be shown that both the incentive and the selection costs are increasing in μ, so there is a threshold μ• such that for μ ≤ μ• there is a pooling equilibrium and for μ > μ• there is a separating equilibrium. In the separating equilibrium, the expected welfare of voters is decreasing in μ; in the pooling equilibrium, the expected welfare of voters is also decreasing in μ. The switch from the pooling equilibrium (μ ≤ μ•) to the separating equilibrium (μ > μ•) therefore produces a discontinuous increase in welfare around μ•, as indicated in Figure 1, that is proportional to the proportion of incompetent politicians.

Proposition 1. Intensification of fiscal competition that leaves the equilibrium unchanged reduces voter welfare. However, more competition around μ• that induces a change in the political equilibrium increases voter welfare. The welfare gain from fiscal competition is higher when there is a presumption that politicians are likely to be bad.
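A small numerical sketch of the model's building blocks may help fix ideas. The benefit function B(G) = 2√G and the parameter values below are illustrative assumptions (no functional form is specified in the text); the sketch computes the full-information provision G_θ, the per-period welfare W_θ, and the expected welfare EW delivered by a bad incumbent.

```python
from math import sqrt

def optimal_provision(mu, theta):
    """Full-information benchmark: G_theta maximizes B(G) - mu*theta*G for an
    assumed benefit function B(G) = 2*sqrt(G); the first-order condition
    B'(G) = mu*theta gives G_theta = 1/(mu*theta)^2."""
    g = 1.0 / (mu * theta) ** 2
    w = 2.0 * sqrt(g) - mu * theta * g        # per-period voter welfare W_theta
    return g, w

def expected_welfare_bad(mu, theta_l, theta_h, q):
    """EW = q*W_H + (1-q)*W_L: expected welfare delivered by a 'bad' incumbent,
    who faces high cost with probability q."""
    _, w_l = optimal_provision(mu, theta_l)
    _, w_h = optimal_provision(mu, theta_h)
    return q * w_h + (1 - q) * w_l

mu, theta_l, theta_h, q = 1.2, 1.0, 1.5, 0.4   # illustrative parameter values
print(optimal_provision(mu, theta_l), expected_welfare_bad(mu, theta_l, theta_h, q))
```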
So, whether we view fiscal competition as harmful or not reflects our perception of the quality of governments: the unconstrained actions of good governments are beneficial, but they can be very costly when governments are bad. Intensifying competition is most likely to be welfare improving for voters when there is a predominance of bad politicians.

Competition and Coherence
There are also circumstances where intergovernmental competition may be welfare enhancing even when governments are well-meaning and competent. This is the case when governments have imperfect commitment. We consider two examples.

The first example is the case where countries seek to give a competitive advantage to their domestic firms by offering wasteful subsidies. In equilibrium all countries will do this, so each country's subsidy cancels out with the subsidies of the others. Since they cancel, none gains any advantage and all countries would be better off giving no subsidy. This is the Prisoners' Dilemma once again. Tax competition may help solve this inefficient outcome by allowing firms to locate wherever they choose and by preventing governments from discriminating between domestic and foreign firms operating within a country. The mobility of firms forces governments to recognize that their subsidy will not only give a competitive advantage to their domestic firms but will also attract firms from other countries. Because the government cannot discriminate among firms operating within its borders, it will have to pay the subsidy to both domestic and foreign firms, thereby eliminating the competitive advantage. Therefore mobility eliminates the potential gains from the subsidy and raises its cost by extending its payment to foreign firms. Obviously this argument relies on the political desire to support local industry. The motive is not so much to protect local jobs, since foreign firms can also create local jobs; it is rather to distort comparative advantage so as to favour local production against imports. Tax competition can therefore improve welfare by reducing the incentive for countries to resort to wasteful subsidies to protect their own industries. Notice that the nondiscrimination requirement plays a crucial role in making tax competition welfare improving: if discrimination were possible, then governments could continue to give wasteful subsidies to their domestic firms only (see Janeba [9] for more details on the conditions needed to obtain this result. The argument of Janeba is based on the Brander-Spencer result that countries use wasteful subsidies in an attempt to shift profits. The home and foreign firms choose the location of production activities before engaging in Cournot competition. Each government taxes only the income earned from production within its borders, at a single rate. Each government maximizes the sum of its firm's before-tax profits and the taxes obtained from the other country's firm (tax export)).
The second example is the use of tax competition as a commitment device.In the tax competition model, governments independently announce tax rates and then the owners of capital choose where to invest.A commitment problem arises here because the governments are able to revise their tax rates after investment decisions are made.If there were a single government and investment decision were irreversible, then this government would have an incentive to tax away all profits.The capital owner would anticipate this incentive when making its initial investment decision and choose not to invest capital in such a country. Tax competition may help to solve this commitment problem.The reason is that intergovernmental competition for capital would deter each government from taxing away profits within its borders because it would induce expost reallocation of capital between countries in response to difference in tax rates.Tax competition is a useful commitment device as it induces governments to forego their incentive to tax investment in an effort to attract further investment or to maintain the existing investment level. Empirical Evidence It is natural for economists to think that competition among jurisdictions should stimulate public decision makers to act more efficiently and limit their discretion to pursue objectives that are not congruent with the interest of their constituency.Test of this hypothesis led to substantial empirical research investigating whether inter-governmental competition through fiscal decentralization affects public expenditures.The evidence as reviewed in Oates [10] supports in general the conclusion that increased competition tends to restrict government spending.But the fact that spending falls with more competition does not mean that resources are more efficiently allocated as competition increases.The problem is that it is hard to come up with measures of the quality of locally provided public services.However, there is one notable exception which is education where standardized test scores and postgraduating earnings provide performance measures that are easily comparable across districts.Following this strategy, Hoxby [11] finds that greater competition among school districts has a significant effect both in improving educational performances and reducing expenditures per student.Besley and Case [5] develop and test a political model of yardstick competition in which voters are poorly informed about the true cost of public good provision.They use data on state taxes and gubernatorial election outcomes in the US.The theoretical idea is that to see how much of a tax increase is due to the economic environment or to the quality of their local government, voters can use the performance in others jurisdictions as a "yardstick" to obtain an assessment of the relative performance of their own government.The empirical evidence supports the prediction that yardstick competition does indeed influence local tax setting.From that perspective intergovernmental competition is good to discipline politicians and limit wasteful public spending. A substantial body of empirical studies has emerged testing for interdependence among jurisdictions in tax and expenditure choices.One of the first and very influential works is by Case et al. 
[12] who test a model in which state's expenditure may generate spillovers to nearby states.The great novelty of this work is to allow for spatially correlated shocks as well as spillovers.Using data from a group of states, strong evidence of fiscal interdependence emerges and the effects arising from interdependence are large.A dollar increase in spending in one state induces other states to increase their own spending by seventy cents.Brueckner and Saavedra [13] test for the presence of strategic competition among local governments using data of 70 cities in the Boston metropolitan area.Taking capital as the mobile factor and population as fixed, local jurisdictions choose property tax rates taking into account the mobility of capital in response to tax differentials.Property taxes are the only important local revenue.The authors use spatial econometric methods to relate the property tax rate in one community to its own characteristics and to the tax rates in competing communities.They find that tax rate in one locality is positively and significantly related to tax rates in contiguous localities.This means that the tax interdependence generates upward sloping reaction functions.Same conclusion has been obtained with similar methodology by Heyndels and Vuchelen [14] in their study of propertytax mimicking among Belgian municipalities.Turning to welfare migration, Saavedra [15] uses spatial econometric estimates of cross sections welfare benefits (AFDC) for the year 1985, 1990, and 1995 of all states in the US.She finds strong evidence that a given state's welfare benefit choice is affected by benefit levels in nearby states for each year.Moreover the findings show significant and positive spatial interdependence, suggesting that a given state would increase its benefit level as nearby-state benefits rise. 
Conclusion: Competition versus Harmonization The role of competition may be thought as a device to secure better fiscal performance, or at least to detect fiscal inefficiency.If market competition by private firms provides households with what they want at least cost, why intergovernmental competition cannot lead to better governmental activities?Poorly performing governments will lose out and better performing ones will be rewarded.Though appealing, the analogy can be misleading and the competitive model is not directly transferable to fiscal competition among governments.Once there is more than one jurisdiction, the possibility is opened for a range of fiscal externalities to emerge.Such externalities can be positive, as with tax competition and lead to tax rates that are too low.Competition among governments to render high quality services may give way to competition for under cutting tax rates to attract mobile factors always from other jurisdictions.Given capital mobility, any attempt by local government to impose a net tax on capital will drive out capital until its net return is raised to that available elsewhere.The revenue gain from higher tax rate would be more than offset by an income loss to workers due to the reduction in the locally employed capital stock.Fiscal harmonization across jurisdiction would be unanimously preferred.Empirical studies are essential to compare the costs and benefits of inter-governmental competition.Evidence of the presence of fiscal interaction between jurisdictions is not compelling evidence of harmful tax competition.Tax interaction can also be due to political effect where the electoral concern induces local governments to mimic tax setting in other jurisdictions.In such case competition can be an effective instrument to discipline and control officials. We can conclude with the question raised at the beginning of this paper on the analogy between market competition and government competition.The main lesson from the fiscal competition theory is that intergovernmental competition limits the set of actions and policies available to each government.There is no doubt that such constraints that are imposed on the authority of governments do, indeed, constraint or limit actions, and, in so doing, both "good" and "bad" actions may be forestalled.So, whether we view such competition as harmful or not reflects our perception of the quality of governments.Unconstrained actions of "good" incumbents are good, but it can be very costly when governments can either abuse power, make wrong decisions, or adopt incoherent policy. Table 1 : Political accountability and voter welfare.
2019-05-15T14:34:09.580Z
2012-08-22T00:00:00.000
{ "year": 2012, "sha1": "8c37b9a969f639b2ab9d3fee5720c4eb5284908c", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/archive/2012/409135.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "8c37b9a969f639b2ab9d3fee5720c4eb5284908c", "s2fieldsofstudy": [ "Political Science", "Economics" ], "extfieldsofstudy": [ "Economics" ] }
266371033
pes2o/s2orc
v3-fos-license
An application of fuzzy bipolar weighted correlation coefficient in decision-making problem
Diagnosing and finding a disease in the medical sciences is a complex procedure. The basic steps involved start with signs, symptoms, and tests. This study is based on the diagnosis of a skin disorder. The identification of a disease is made on the basis of symptoms that sometimes show bipolarity. To address this bipolarity, bipolar fuzzy sets are used, as they cover the positive as well as the negative aspects of a specific symptom. This idea is combined with that of soft sets, which gives more precise results. We propose a new technique in which a correlation coefficient is used to measure bipolar fuzzy soft sets, and apply it to diagnosis. BFSSs deal most effectively with dual and fuzzy information. The correlation coefficient and the weighted correlation coefficient of BFSSs are suggested in this research. Based on these techniques, a decision-making method is suggested under a bipolar fuzzy environment to resolve ambiguous and unclear information. The implementation and effectiveness of the proposed and existing strategies have been checked by numerical computation.

Introduction
Decision making plays an important role in every aspect of life. In this study, we combine the decision-making concept with a weighted fuzzy bipolar correlation coefficient. Correlation plays an important role in disciplines as diverse as statistics, engineering, and the social sciences: it is used to assess the interdependence of two variables based on their reciprocal relationship, and it is an important concept not only in statistics but also in probability theory. Although probabilistic techniques can be utilized to tackle a wide range of real-world engineering issues, the probability strategy has some limitations; for example, estimating the probability of a process requires a significant amount of random data. On the other hand, large complex systems have many ambiguities, making it challenging to obtain precise probabilities of occurrence. As a result, outcomes based on probability theory do not always provide meaningful information for experts, due to a lack of quantitative information/data, and such results are not always accessible to the general public. Probabilistic approaches are therefore frequently insufficient to resolve such intrinsic data uncertainties. Many studies around the world have addressed and suggested various techniques for solving problems with uncertainty. To initiate this line of work, Zadeh (1965), in [1], devised the concept of a fuzzy set to handle problems involving ambiguity and uncertainty. Yu (1993), in [2], introduced the correlation of fuzzy numbers. Chiang & Lin (1999), in [3], studied the fuzzy correlation of imprecise information by adapting strategies from classical statistics; as a result, they introduced the concept of a flexible correlation coefficient for fuzzy sets. It has been observed that many new theories have been developed, but they have limitations due to an insufficient parameterization tool, so the decision-maker(s) cannot make an appropriate decision under uncertain information. The concept of fuzzy set theory was then refined by Molodtsov (1999), who in [4] introduced soft set (SS) theory, which attaches ratings to parameters, to overcome these limitations. In [5,6,7], Chen, C.-T. (2000), Chen, T.-Y. & Tsao, C.-Y.
(2008) and Maji et al. (2002) developed the group TOPSIS concept, fuzzy soft sets (FSSs) and intuitionistic fuzzy soft sets (IFSSs) as extensions of this theory, by combining the existing FS and IFS theoretical approaches. But experience shows that, in some circumstances, we must thoroughly examine both the positive and negative aspects of a problem, which cannot be addressed with fuzzy sets. To address these issues, W.-R. Zhang (1998), in [8], originally presented bipolar fuzzy sets as a fuzzy set extension. The concept of bipolarity is fundamental in separating positive and negative information: the positive information represents the level of satisfaction, while the negative information shows the level of dissatisfaction. In BFSS theory, expert preferences are expressed over a set of parameters. For instance, if an individual needs to buy a mobile phone, then according to BFS theory the information corresponding to a single parameter is captured in two classifications, one being the satisfaction degree and the other the degree of dissatisfaction, whereas in a BFSS deeper insight into the mobile specification is considered, through parameters such as 'price', 'camera', 'RAM', 'screen size', etc. The works in [9,10,11,12,13] investigated some applications of bipolar fuzzy sets. Despite all of these possibilities of bipolar fuzzy sets, it is still important to incorporate all of the data into a valid decision-making model and analyse it in an organized manner, to maintain an acceptable knowledge representation framework. As a result, in [14], Alghamdi et al. (2018) proposed bipolar fuzzy multi-criteria decision-making models that evaluate options over several criteria using bipolar fuzzy values. Moreover, Shimaiza et al. (2019), see [15,16], explored some concepts on multi-criteria decision making and bipolar fuzzy sets. Furthermore, bipolarity and fuzziness are two distinct but overlapping concepts used to describe various characteristics of the human mind; while dealing with ambiguity, a decision-maker concentrates on linguistic imprecision. Decision-making is a procedure of selecting or classifying an alternative from a set of available options; the most important factors are the source of information, the collection of alternatives, and the preference values on which the decision is based. The Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is a well-known approach to multi-criteria decision-making that was first proposed by Hwang & Yoon (1981), see [17]; the conventional TOPSIS was later expanded to fuzzy TOPSIS to address decision-making problems with fuzzy information, in which each criterion's weight and each alternative's rating are described in linguistic terms. Since then, the extended fuzzy TOPSIS technique has expanded its scope to many applications, including logistics, human resources, and management. Li & Nan (2011), in [18], proposed the TOPSIS approach in an intuitionistic fuzzy framework, which was also addressed by Joshi & Kumar (2014), see [19].
S. P. Wan et al. (2015), in [20], and Shuping Wan et al. (2016), in [21], introduced a novel method using interval-valued Atanassov intuitionistic fuzzy preference relations for group decision making. Dong & Wan (2016), [22], applied the TOPSIS technique to virtual enterprise partner selection. Further, Dey et al. (2016), [23], proposed an approach for handling decision-making problems in a bipolar neutrosophic environment. After this foundation, numerous scholars have applied the TOPSIS method to decision-making in fuzzy, intuitionistic fuzzy, interval-valued fuzzy, and hypersoft set environments. Shuping Wan & Dong (2020), [24], proposed a method for Atanassov's interval-valued intuitionistic fuzzy MAGDM with incomplete attribute weight information. For the extended TOPSIS method based on the correlation coefficient, the reader is referred to Garg & Arora (2020), see [25], Lin et al. (2019), in [26], and R. M. Zulqarnain et al. (2020) and Rana Muhammad Zulqarnain et al. (2021), see [27,28].

Although the approaches described above are quite helpful in handling many decision-making difficulties, they have certain drawbacks:

1. The weights of the decision makers (DMs) in the decision-making technique are frequently taken to be the same across diverse qualities, which is quite illogical and impractical. In a real-world decision situation, every DM is good at only one subject; as a result, allocating various weights to different features for each decision-maker is more natural and reasonable.
2. The multi-criteria decision-making problem is solved using the BF-TOPSIS approach. The data structure is unique in that no preference ordering is taken into account while selecting alternatives.
3. The distance measure is employed in the TOPSIS procedure to obtain optimal answers. However, the TOPSIS approach based on distance cannot always determine the optimal solution, so it is necessary to improve the BF-TOPSIS method by using other measures such as the correlation coefficient.

We suggest a new approach based on the correlation coefficient to overcome these flaws in a bipolar fuzzy environment. We mention here that Chiang, D.-A., & Lin, N. P. (1999), in [3], described the correlation coefficient with respect to fuzzy sets; here we use the bipolar concept for the correlation coefficient. Our contribution can be summarized as follows:

1. Based on positive and negative membership degrees, we develop a correlation coefficient and a weighted correlation coefficient for bipolar fuzzy soft sets and then examine their properties.
2. Different weights are allocated to each decision-maker concerning different attributes or criteria, which is natural in real-world decision-making.
3. Bipolar fuzzy soft sets, obtained by merging bipolar fuzzy sets with soft sets, are used to produce an order of preferences that leads to more exact outcomes.
4. The proposed correlation coefficient is used to modify the TOPSIS method for bipolar fuzzy soft sets.
The rest of the research is structured as follows. In section one, basic definitions such as fuzzy sets, soft sets, fuzzy soft sets, bipolar fuzzy sets, and bipolar fuzzy soft sets are recalled; these are used to construct the structure of the further work. In section two, the informational energies and the correlation coefficient of bipolar fuzzy soft sets are suggested, and the correlation coefficient and the weighted correlation coefficient of bipolar fuzzy soft sets are developed. In the third section, the properties are proved by utilizing the correlation coefficient and informational energies, and a modified technique is suggested under a bipolar fuzzy environment to resolve ambiguous and unclear problems. A diagnosis example is given in the fourth section to check the applicability of the proposed technique. Finally, a comparative analysis is organized.

Preliminaries
In this section we recall some basic definitions: fuzzy sets (FSs), soft sets (SSs), fuzzy soft sets (FSSs), bipolar fuzzy sets (BFSs), and bipolar fuzzy soft sets (BFSSs). These definitions are used to construct the structure of the further work and for better understanding.

Definition 3.1: A fuzzy set $\chi$ in a universe of information $U$ can be described as a set of ordered pairs and represented mathematically as $\chi = \{(y, \mu_\chi(y)) \mid y \in U\}$, where $\mu_\chi(y)$ is the degree of membership of $y$, taking values in the range from 0 to 1, i.e., $\mu_\chi(y) \in [0,1]$.

Definition 3.2: Let $U$ be the universal set and $\varepsilon$ the set of attributes concerning $U$. Let $P(U)$ be the power set of $U$ and $A \subseteq \varepsilon$; then a pair $(F, A)$ is called a soft set over $U$, where $F$ is a mapping from $A$ to $P(U)$, i.e., $F: A \to P(U)$, with $F(e) \in P(U)$ for each $e \in A$.

Definition 3.3: Let $A, B \subseteq \varepsilon$, where $\varepsilon$ is a set of parameters, and let $(F,A)$ and $(G,B)$ be two soft sets over $U$. Then the basic operations over them are indicated as:
1. $(F,A) \subseteq (G,B)$ if $A \subseteq B$ and $F(e) \subseteq G(e)$ for all $e \in A$.
2. $(F,A) = (G,B)$ if $(F,A) \subseteq (G,B)$ and $(G,B) \subseteq (F,A)$.
3. The complement $(F,A)^{c}$ is given by the complement mapping $F^{c}: A \to K^{U}$, where $K^{U}$ is the set of all subsets of $U$.

Definition 3.4: A bipolar fuzzy set $A$ in a universe $U$ is an object having the form $A = \{(u, \mu^{+}_{A}(u), \mu^{-}_{A}(u)) \mid u \in U\}$, where $\mu^{+}_{A}: U \to [0,1]$ and $\mu^{-}_{A}: U \to [-1,0]$, so that $\mu^{+}_{A}(u)$ denotes the positive information (satisfaction degree) and $\mu^{-}_{A}(u)$ denotes the negative information (dissatisfaction degree).

Definition 3.5: A mapping $F: A \to BF^{U}$ is called a bipolar fuzzy soft set, defined by $F(e_j) = \{(u_i, \mu^{+}_{j}(u_i), \mu^{-}_{j}(u_i)) \mid u_i \in U\}$, where $BF^{U}$ is the collection of all bipolar fuzzy sets over $U$ and $\mu^{+}_{j}$ and $\mu^{-}_{j}$ are the satisfaction and dissatisfaction degrees, respectively. For simplicity, we denote the pair $F_{u_i}(e_j) = (\mu^{+}_{j}(u_i), \mu^{-}_{j}(u_i))$ and call it a bipolar fuzzy soft number.
Motivation
Dual aspects of bipolar thinking, on positive and negative sides, are used in a wide variety of decision-making. For example, in decision and coordination the two sides are frequently effects and side effects, like and dislike, cooperation and competition. In such cases the decision analyst needs the notion of bipolar fuzzy sets. Bipolar fuzzy sets (BFSs) are a further modification of fuzzy sets (FSs), obtained by adding a negative membership function. To choose the optimal option, various decision approaches are utilized, one of which is the Technique for Order Preference by Similarity to the Ideal Solution (TOPSIS); the traditional TOPSIS has been extended to fuzzy TOPSIS and then to bipolar fuzzy TOPSIS (BF-TOPSIS). The said method is applied here to solve a medical diagnosis problem. Our main goal is to offer a new correlation coefficient for BFSS data and to build the TOPSIS method for BFSSs using the proposed correlation coefficient. We present a new BFSS correlation coefficient and investigate various aspects of it in order to quantify the dependency between BFSSs. We create an algorithm to solve MADM problems using the extended TOPSIS approach, and check the validity of the proposed technique with a numerical illustration. In the standard method, similarity and distance measures are used to find the closeness coefficient; in our suggested method, the correlation coefficient is employed to calculate the closeness coefficient.

In this section, we suggest some correlation coefficients under the bipolar fuzzy environment, which are defined as follows.

Definition 3.1: Let $(G,\varepsilon) = \{(u_i, \mu^{+}_{G_j}(u_i), \mu^{-}_{G_j}(u_i)) \mid u_i \in U\}$ and $(H,\varepsilon) = \{(u_i, \mu^{+}_{H_j}(u_i), \mu^{-}_{H_j}(u_i)) \mid u_i \in U\}$ be two BFSSs defined over a set of parameters $\varepsilon = \{e_1, e_2, \ldots, e_m\}$. Then the informational energies of the two BFSSs $(G,\varepsilon)$ and $(H,\varepsilon)$ are expressed as
$$I_{BFSS}(G,\varepsilon) = \sum_{j=1}^{m}\sum_{i=1}^{n}\Big[\big(\mu^{+}_{G_j}(u_i)\big)^2 + \big(\mu^{-}_{G_j}(u_i)\big)^2\Big], \qquad (3.1)$$
$$I_{BFSS}(H,\varepsilon) = \sum_{j=1}^{m}\sum_{i=1}^{n}\Big[\big(\mu^{+}_{H_j}(u_i)\big)^2 + \big(\mu^{-}_{H_j}(u_i)\big)^2\Big]. \qquad (3.2)$$
The correlation of the two BFSSs $(G,\varepsilon)$ and $(H,\varepsilon)$ can be expressed as
$$C_{BFSS}\big((G,\varepsilon),(H,\varepsilon)\big) = \sum_{j=1}^{m}\sum_{i=1}^{n}\Big[\mu^{+}_{G_j}(u_i)\,\mu^{+}_{H_j}(u_i) + \mu^{-}_{G_j}(u_i)\,\mu^{-}_{H_j}(u_i)\Big]. \qquad (3.3)$$
In Eq (3.1) the expressions $\mu^{+}_{G_j}(u_i)$ and $\mu^{-}_{G_j}(u_i)$ represent the positive and negative membership functions of the BFSS $(G,\varepsilon)$, whereas in Eq (3.2) $\mu^{+}_{H_j}(u_i)$ and $\mu^{-}_{H_j}(u_i)$ are the positive and negative membership functions of the BFSS $(H,\varepsilon)$. The correlation between the bipolar fuzzy soft sets $(G,\varepsilon)$ and $(H,\varepsilon)$ is expressed in Eq (3.3).

The following proposition establishes the relationship between the correlation of bipolar fuzzy soft sets and their informational energies. Let $(G,\varepsilon)$ and $(H,\varepsilon)$ be two BFSSs defined over the parameters $\varepsilon$ and let $C_{BFSS}((G,\varepsilon),(H,\varepsilon))$ be the correlation between them; then $C_{BFSS}((G,\varepsilon),(G,\varepsilon)) = I_{BFSS}(G,\varepsilon)$, $C_{BFSS}((G,\varepsilon),(H,\varepsilon)) = C_{BFSS}((H,\varepsilon),(G,\varepsilon))$, and $\big(C_{BFSS}((G,\varepsilon),(H,\varepsilon))\big)^2 \le I_{BFSS}(G,\varepsilon)\, I_{BFSS}(H,\varepsilon)$. Proof: Expanding the double summation in Eq (3.3) over $i = 1, 2, \ldots, n$ and then over the parameters
Summing the whole expanded expression and applying the Cauchy-Schwarz inequality, we obtain

$C_{BFSS}((G, \varepsilon), (H, \varepsilon)) \leq \sqrt{I_{BFSS}(G, \varepsilon)} \cdot \sqrt{I_{BFSS}(H, \varepsilon)},$

which establishes the stated relationship between the correlation and the informational energies. Using this relationship, the correlation coefficients of two BFSSs $(G, \varepsilon)$ and $(H, \varepsilon)$ are defined as

$\rho_{1}((G, \varepsilon), (H, \varepsilon)) = \dfrac{C_{BFSS}((G, \varepsilon), (H, \varepsilon))}{\sqrt{I_{BFSS}(G, \varepsilon)} \cdot \sqrt{I_{BFSS}(H, \varepsilon)}} \quad (3.4)$

$\rho_{2}((G, \varepsilon), (H, \varepsilon)) = \dfrac{C_{BFSS}((G, \varepsilon), (H, \varepsilon))}{\max\{I_{BFSS}(G, \varepsilon),\, I_{BFSS}(H, \varepsilon)\}} \quad (3.5)$

Proposition 3: The correlation coefficient $\rho_{2}((G, \varepsilon), (H, \varepsilon))$ between two BFSSs $(G, \varepsilon)$ and $(H, \varepsilon)$ expressed in Eq (3.5) satisfies the following properties:

(Property 1): $0 \leq \rho_{2}((G, \varepsilon), (H, \varepsilon)) \leq 1$.

Proof: Since $(G, \varepsilon)$ and $(H, \varepsilon)$ are BFSSs, the inequality $\rho_{2}((G, \varepsilon), (H, \varepsilon)) \geq 0$ is obvious, and here we only have to prove $\rho_{2}((G, \varepsilon), (H, \varepsilon)) \leq 1$. By applying the Cauchy-Schwarz inequality,

$\sum_{i=1}^{n} a_i b_i \leq \sqrt{\sum_{i=1}^{n} a_i^{2}} \cdot \sqrt{\sum_{i=1}^{n} b_i^{2}},$

the correlation satisfies $C_{BFSS}((G, \varepsilon), (H, \varepsilon)) \leq \sqrt{I_{BFSS}(G, \varepsilon) \cdot I_{BFSS}(H, \varepsilon)} \leq \max\{I_{BFSS}(G, \varepsilon), I_{BFSS}(H, \varepsilon)\}$, and hence by Eq (3.5) we have $0 \leq \rho_{2}((G, \varepsilon), (H, \varepsilon)) \leq 1$. The proof of the remaining properties is similar to Proposition 2.
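A small numerical sketch of Eqs (3.1)-(3.5) is given below, assuming the array layout described earlier (rows indexed by parameters, columns by objects); the two BFSSs and their membership grades are hypothetical, and the code simply checks that the computed coefficients fall in [0, 1] as stated in Proposition 3.

```python
import numpy as np

# Hypothetical BFSSs (G, eps) and (H, eps): rows = parameters e_j, columns = objects u_i.
G_plus  = np.array([[0.7, 0.4, 0.9], [0.3, 0.8, 0.5]])
G_minus = np.array([[-0.2, -0.5, -0.1], [-0.6, -0.2, -0.4]])
H_plus  = np.array([[0.6, 0.5, 0.8], [0.4, 0.7, 0.6]])
H_minus = np.array([[-0.3, -0.4, -0.2], [-0.5, -0.3, -0.3]])

def informational_energy(mu_plus, mu_minus):
    # Eq (3.1)/(3.2): sum over parameters and objects of the squared grades.
    return np.sum(mu_plus**2 + mu_minus**2)

def correlation(gp, gm, hp, hm):
    # Eq (3.3): products of positive grades plus products of negative grades.
    return np.sum(gp*hp + gm*hm)

I_G = informational_energy(G_plus, G_minus)
I_H = informational_energy(H_plus, H_minus)
C   = correlation(G_plus, G_minus, H_plus, H_minus)

rho1 = C / np.sqrt(I_G * I_H)    # Eq (3.4)
rho2 = C / max(I_G, I_H)         # Eq (3.5)
print(rho1, rho2)                # both lie in [0, 1]
```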
However, in everyday practical applications, the consideration of weights is essential. The weights that the decision-maker assigns to each object in the universe of discourse determine every decision's outcome. As a result, it is vital to consider the weights before making a decision. The distinctive sets can take diverse weights, and in this way the weight vectors for the experts $u_i$ and the parameters $e_j$ are described as follows. Consider $\omega = (\omega_1, \omega_2, \omega_3, \ldots, \omega_n)$, $i = 1, 2, 3, \ldots, n$, where $\omega$ represents the weight vector for the experts $u_i$ such that $\omega_i \geq 0$ and $\sum_{i=1}^{n} \omega_i = 1$, and $\Omega = (\Omega_1, \ldots, \Omega_m)$ represents the weight vector of the parameters $e_j$ such that $\Omega_j \geq 0$ and $\sum_{j=1}^{m} \Omega_j = 1$. Additionally, the correlation coefficients $\rho_1$ and $\rho_2$ are extended to weighted correlation coefficients for BFSSs, which are as follows.

Definition 3.4: For two BFSSs $(G, \varepsilon)$ and $(H, \varepsilon)$, the weighted correlation coefficient is written as

$\rho_{3}((G, \varepsilon), (H, \varepsilon)) = \dfrac{C_{WBFSS}((G, \varepsilon), (H, \varepsilon))}{\sqrt{I_{WBFSS}(G, \varepsilon)} \cdot \sqrt{I_{WBFSS}(H, \varepsilon)}}$

where the weighted correlation $C_{WBFSS}$ and the weighted informational energies $I_{WBFSS}$ are obtained from Eqs (3.1)-(3.3) by weighting each term with the expert weight $\omega_i$ and the parameter weight $\Omega_j$. If we assume $\omega_i = 1$ and $\Omega_j = 1$ in Eq (3.7), then the correlation coefficient $\rho_3((G, \varepsilon), (H, \varepsilon))$ reduces to $\rho_1((G, \varepsilon), (H, \varepsilon))$ defined in Eq (3.4).

Proposition 4: The correlation coefficient $\rho_3((G, \varepsilon), (H, \varepsilon))$ for two BFSSs $(G, \varepsilon)$ and $(H, \varepsilon)$ fulfils the following properties:

1. $0 \leq \rho_3((G, \varepsilon), (H, \varepsilon)) \leq 1$.

The proof is similar to Proposition 3.

If $\mu^{+(z)}_{ij} = 1$, the professional $i$ shows the maximum satisfaction degree to criterion (parameter) $j$, and if $\mu^{-(z)}_{ij} = -1$, then the professional $i$ shows the maximum dissatisfaction degree to criterion (parameter) $j$.

Step 3: Construct the weighted decision matrix

Establish the weighted decision matrix $\bar{g}^{(z)} = (\bar{g}^{(z)}_{ij})_{n \times m}$, where each entry of the bipolar fuzzy decision matrix is weighted by $\omega_i$, the weight of the $i$-th professional, and by $\Omega_j$, the weight of the $j$-th parameter, using the formula mentioned in Eq (3.9); $\bar{g}^{(z)}$ is known as the weighted decision matrix.

Step 4: Compute the correlation matrix

The correlation between each value $\bar{g}^{(z)}_{ij}$ of the weighted decision matrix and the positive ideal $r^{+} = (1, 0)$ is obtained with the proposed correlation coefficient. Using these formulas, we get a correlation coefficient matrix represented by $c^{(z)} = (c^{(z)}_{ij})_{n \times m}$, $z = 1, 2, 3, \ldots, k$, where $c^{(z)}_{ij}$ is the correlation coefficient of each value of the weighted decision matrix with the positive ideal $r^{+}$.
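To make the weighting step concrete, the sketch below extends the earlier functions with expert and parameter weights, assuming that the weighting simply multiplies each term of the sums in Eqs (3.1)-(3.3) by ω_i and Ω_j; the matrices and weight values are hypothetical.

```python
import numpy as np

# Rows = parameters e_j, columns = experts/objects u_i (hypothetical values).
G_plus  = np.array([[0.7, 0.4, 0.9], [0.3, 0.8, 0.5]])
G_minus = np.array([[-0.2, -0.5, -0.1], [-0.6, -0.2, -0.4]])
H_plus  = np.array([[0.6, 0.5, 0.8], [0.4, 0.7, 0.6]])
H_minus = np.array([[-0.3, -0.4, -0.2], [-0.5, -0.3, -0.3]])

def rho3(gp, gm, hp, hm, w_expert, w_param):
    """Weighted correlation coefficient of Definition 3.4 (weights scale each term)."""
    W = np.outer(w_param, w_expert)        # (parameters x experts) weight grid
    c   = np.sum(W * (gp*hp + gm*hm))      # weighted correlation
    i_g = np.sum(W * (gp**2 + gm**2))      # weighted informational energy of G
    i_h = np.sum(W * (hp**2 + hm**2))      # weighted informational energy of H
    return c / np.sqrt(i_g * i_h)

# Hypothetical weights for three experts and two parameters, each summing to 1.
w_expert = np.array([0.5, 0.4, 0.1])
w_param  = np.array([0.6, 0.4])
print(rho3(G_plus, G_minus, H_plus, H_minus, w_expert, w_param))
# Setting all weights to 1 recovers rho_1 of Eq (3.4).
```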
Step 5: Computation of the positive ideal alternative (PIA) and the negative ideal alternative (NIA)

From the correlation coefficient matrix, we find the indices $h_{ij}$ and $g_{ij}$ for each professional $u_i$ and parameter $e_j$, which are $h_{ij} = \arg\max_{z} \{c^{(z)}_{ij}\}$ and $g_{ij} = \arg\min_{z} \{c^{(z)}_{ij}\}$; based on these indices, we find the PIA ($L^{+}$) and the NIA ($L^{-}$).

Step 6: Correlation coefficient with the PIA

By using either of the proposed correlation coefficients $\rho_1$ and $\rho_2$, we compute the correlation coefficient between the weighted decision matrix $\bar{g}^{(z)}$, $z = 1, 2, 3, \ldots, k$, of each alternative $\gamma^{(z)}$ and the PIA ($L^{+}$).

Step 7: Correlation coefficient with the NIA

Find the correlation coefficient between the weighted decision matrix $\bar{g}^{(z)}$ ($z = 1, 2, \ldots, k$) and the NIA ($L^{-}$) by using the same proposed correlation coefficients.

Step 8: Computation of the closeness coefficient

The closeness coefficient for each alternative $\gamma^{(z)}$, $z = 1, 2, \ldots, k$, is obtained as

$R^{(z)} = \dfrac{Z(\bar{g}^{(z)}, L^{-})}{Z(\bar{g}^{(z)}, L^{+}) + Z(\bar{g}^{(z)}, L^{-})}$

where $Z(\bar{g}^{(z)}, L^{+}) = 1 - q^{(z)}$ and $Z(\bar{g}^{(z)}, L^{-}) = 1 - P^{(z)}$, with $q^{(z)}$ and $P^{(z)}$ the correlation coefficients of $\bar{g}^{(z)}$ with the PIA and the NIA, respectively.

Step 9: Ranking the preference order

By arranging the values of $R^{(z)}$ in descending order, we rank the alternatives and find the best alternative.

In the section below, we describe the application of the proposed technique by giving an example.

Diagnosis of a specific skin disorder

Medical diagnosis is a procedure in which the occurrence of a disease is found based on the patient's symptoms. For a proper diagnosis, laboratory tests as well as a physical examination are required. The initial diagnosis starts with signs and symptoms, and later tests. Many health care units evaluate the patient's health through a domain expert based on signs, symptoms, and investigations. The diagnosis is made on the basis of the intensity of these signs and symptoms. Assume that the skincare experts have to select the specific skin disorder by evaluating the symptoms related to four different diseases: A(1) = Measles, A(2) = Chicken Pox, A(3) = Latex allergy, A(4) = Lupus. A group of three decision-makers (professionals) with weight vector (0.5, 0.4, 0.1) decided to evaluate these four diseases (alternatives) on the basis of five symptoms: S1 = Pain or itching, S2 = Red and inflamed skin, S3 = Fever or flu-like symptoms, S4 = Pimples or blisters, S5 = Loss of appetite. The symptoms are considered as criteria having weights (0.30, 0.15, 0.25, 0.20, 0.1). These weights are assigned by the professionals (expert doctors) after specifying the intensity of the signs or symptoms for a particular patient. This intensity is rated as mild, moderate, severe, or very severe. The framework for evaluating the diagnosis strategy is provided in Fig 2. Here the objective is to find out the specific disease on the basis of the criteria (symptoms) mentioned by the decision-makers. Tables 1-4 show the bipolar fuzzy decision matrices, where each entry corresponds to a decision-maker and a particular symptom and represents the positive and negative membership values. The positive membership degree shows the possibility of the alternative under a specific characteristic, and the negative membership degree shows the impossibility of the alternative under a specific characteristic. Suppose the value of the positive membership degree ($\mu^{+}_{ij}$) is close to 1; in that case, the alternative represents the maximum possibility of occurrence of the criterion. If the value of the negative membership degree $\mu^{-}_{ij}$ is close to -1, then the alternative represents the maximum impossibility of occurrence of the criterion.
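The sketch below strings Steps 3-9 together for a toy problem, assuming that each decision-maker's evaluation is a matrix of bipolar fuzzy soft numbers, that the correlation of each entry with the ideals uses a ρ1-type normalization (an assumption, since Eq (3.10) is not reproduced here), and that the closeness coefficient follows the Z-relations given in Step 8; all matrices, weights, and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (hypothetical values): k alternatives, n experts, m criteria.
k, n, m = 4, 3, 5
w_expert = np.array([0.5, 0.4, 0.1])                  # expert weights, sum to 1
w_param  = np.array([0.30, 0.15, 0.25, 0.20, 0.10])   # criteria weights

# Bipolar fuzzy decision matrices: mu_plus in [0,1], mu_minus in [-1,0].
mu_plus  = rng.uniform(0.0, 1.0, size=(k, n, m))
mu_minus = rng.uniform(-1.0, 0.0, size=(k, n, m))

# Step 3: weighted decision matrices (entries scaled by expert and criteria weights).
W = np.outer(w_expert, w_param)                       # shape (n, m)
gp, gm = mu_plus * W, mu_minus * W

def corr_coeff(ap, am, bp, bm):
    # rho_1-type correlation coefficient between two collections of bipolar
    # fuzzy numbers (assumed form of the per-entry / whole-matrix correlation).
    c   = np.sum(ap*bp + am*bm)
    i_a = np.sum(ap**2 + am**2)
    i_b = np.sum(bp**2 + bm**2)
    return c / np.sqrt(i_a * i_b)

# Step 4: correlation of every weighted entry with the positive ideal r+ = (1, 0).
c_matrix = np.array([[[corr_coeff(gp[z, i, j], gm[z, i, j], 1.0, 0.0)
                       for j in range(m)] for i in range(n)] for z in range(k)])

# Step 5: PIA (L+) and NIA (L-) picked entry-wise from the best/worst alternative.
h = np.argmax(c_matrix, axis=0)
g = np.argmin(c_matrix, axis=0)
Lp_p = np.take_along_axis(gp, h[None], axis=0)[0]; Lp_m = np.take_along_axis(gm, h[None], axis=0)[0]
Ln_p = np.take_along_axis(gp, g[None], axis=0)[0]; Ln_m = np.take_along_axis(gm, g[None], axis=0)[0]

# Steps 6-8: correlation with PIA/NIA and closeness coefficient for each alternative.
R = np.empty(k)
for z in range(k):
    q = corr_coeff(gp[z], gm[z], Lp_p, Lp_m)          # correlation with PIA
    P = corr_coeff(gp[z], gm[z], Ln_p, Ln_m)          # correlation with NIA
    Z_plus, Z_minus = 1.0 - q, 1.0 - P
    R[z] = Z_minus / (Z_plus + Z_minus)

# Step 9: rank alternatives by decreasing closeness coefficient.
print("ranking (best first):", np.argsort(-R) + 1)
```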
The above Tables 5-8 represent the weighted bipolar fuzzy decision matrices. These matrices are obtained by multiplying the bipolar fuzzy decision matrices by the weight vector for the i-th professional and the weight vector for the parameters. Step 3: Compute the correlation coefficients between each alternative γ(z), (z = 1, 2, 3, 4), and the positive ideal r+ by utilizing Eq (3.9). The correlation coefficient matrices ψ(z) shown above contain, in each entry, the correlation coefficient computed for each value of the weighted decision matrix and the positive ideal r+ by applying Eq (3.9).

The association between each alternative and the negative ideal is shown above.

In the above step, by arranging the closeness coefficients in descending order, we conclude that the first closeness coefficient (R(1)) has the highest value, so alternative one (A(1)) is the best alternative. As a result, the disease measles is diagnosed as the alternative that best describes the symptoms mentioned.

Discussion and comparative study

The following section discusses the effectiveness, adaptability, and advantages of the suggested technique and algorithm. We also put together a quick comparison of the suggested and existing methodologies.

BF-TOPSIS is used to select the best alternative

To justify the predominance of our methodology, this section comprises a comparative investigation of the proposed approach with the prevailing methodology (Akram et al., 2020). The obtained results are described as follows. To implement the BF-TOPSIS approach, we utilize the information presented in Tables 4.5 to 4.8. Based on these tables, we compute the weighted bipolar fuzzy decision matrix. After finding the BFPIS and BFNIS, the results are as follows.

The methodology of Zadeh (1965) deals with exact information about the alternatives, although it cannot deal with falsity objects or many sub-attributes of the alternatives. The approach of H. M. Zhang et al. (2007) [29] is unable to present the dual features of the problem and many sub-attributes of the alternatives. It has also been mentioned that the methodologies of Garg & Arora (2020) [25] and Zulqarnain et al. (2021) [28] are unable to deal with dual-aspect information. Our proposed technique may readily overcome these challenges and give more effective outcomes for MADM difficulties. Moreover, our proposed method is a sophisticated strategy that can deal with alternatives carrying multiple sub-attribute information. Table 9, as mentioned above, shows a comparison, and Figure 9 presents the comparison of bipolar fuzzy TOPSIS and the proposed technique. The methodology we developed addresses the dual issues of alternatives with sub-attributes and their inaccuracy. As compared to existing methodologies, the technique we developed is more efficient and can deliver successful results for decision-makers using an array of perspectives.
Conclusion

This research employs BFSSs to deal with unsatisfactory and incompatible data by assuming possibility and impossibility degrees over the set of parameters. The basic notions of the correlation coefficient and the weighted correlation coefficient for BFSSs, with their properties, are formulated in this research. A new algorithm has been introduced on the basis of the correlation coefficient, taking into account the set of attributes and the experts. The suggested method could be helpful in the future for medical diagnosis with dual or bipolar behavior. The correlation matrix has been formulated by using correlation indices. The PIA and NIA are also determined. To locate the hierarchy of the alternatives, we defined the closeness coefficient for the proposed technique. Finally, a numerical illustration of the diagnosis has been described. The following Table 10 presents the comparison of the existing and proposed techniques as described in Table 9.

Table 10. Comparison of the existing and proposed techniques.

Existing method: The existing method neglects the preference information of decision-makers; hence, it does not provide precise results.
Proposed technique: Using soft sets in our proposed technique, we consider the preference information, leading to a more compressed and precise result.

Existing method: The PIA and NIA are computed by taking the extreme values based on the given decision matrix.
Proposed technique: The computational procedure for the determination of the ideal solutions is different; the PIA and NIA are determined from the given alternative ratings based on the maximum correlation coefficient, which leads to less loss of information during the process.

Existing method: This method considers only the degree of similarity between the observations.
Proposed technique: The correlation measures take into account both the degree of similarity and the degree of discrimination between the observations, which helps avoid decisions based only on negative reasons.

Existing method: The utilization of the Euclidean distance does not take into account the relationship between attributes.
Proposed technique: Our proposed method is based on the correlation coefficient; therefore, it effectively captures the relationship between attributes and alternatives.

Existing method: It is insufficient to only consider the evaluation information from individual decision-makers when making the decision results.
A multi-scale analysis of moisture supply associated with precipitation on Isla del Coco, Costa Rica

Wrapped by the Pacific waters and the mist of shipwreck and pirate stories, one of the rainiest Eastern Pacific islands protects a biodiversity treasure: Isla del Coco. This study presents the analysis of moisture sources linked with contributions to precipitation in the area. The diurnal cycle of precipitation on the island was reviewed from GPS station data previously evaluated using available meteorological data from field campaigns held on the island in 2011 and 2012. Near-surface salinity patterns were also analyzed along with sea surface temperature, evaporation, Outgoing Longwave Radiation (OLR), as well as latent and sensible heat fluxes. Moisture contributions to precipitation on the island are supplied by evaporative sources, and moisture recycling is important. Regional precipitation is a continuous supply of moisture for the atmosphere, whereas transport from evaporative sources is seasonally constrained. The analysis of the diurnal cycle of moisture supply suggests that contributions from available moisture linked with precipitation recycling show a slightly delayed response to deep convection in the region. The diurnal cycle of contributions to precipitation from evaporative moisture sources, based on the modeling component of the work, is consistent with the diurnal cycle of precipitation. The trajectory analysis highlights the role of the low level winds, the Intertropical Convergence Zone (ITCZ) position and the stability conditions in modulating the supply of moisture. The moisture contributions from the sources present different sensitivities to El Niño-Southern Oscillation (ENSO). Contributions from precipitation recycling showed a large variability linked to ENSO, as increasing contributions were determined to be related with El Niño during boreal summer and autumn months. The variability of the contributions from a North-east evaporative source is modulated by the response of the Caribbean Low Level Jet (CLLJ; Amador, 1998; Amador, 2008) to ENSO. The South-western evaporative source showed sensitivity to El Niño, as transport was found to decrease (increase) from November to March (May to July), while the response to La Niña was small. Good agreement between the ENSO response of the fields and the known dynamics of the region was found. Rev. Biol. Trop. 64 (Suppl. 1): S87-S103. Epub 2016 February 01.
Wrapped by the Pacific waters and the mist of shipwreck and pirate stories, one of the rainiest Eastern Pacific islands protects a biodiversity treasure (Cortés, 2008; 2012). Isla del Coco, located in Costa Rica, with a terrestrial extension of 24 km², is located at 5°32' N - 87°04' W, with its highest peak at 575.5 m.a.s.l. (Montoya, 2007). The first known maps of the island date back to the 16th century, records of exploration visits date back to the 17th century, and research documents start in the 19th century (Cortés, 2008). Descriptions of organisms found on the island and in surrounding waters account for invaluable inventories of vegetation as well as marine, terrestrial and aerial animals since the late 19th century. The island was declared a UNESCO World Heritage Site in 1997 and, since 1998, it is a wetland of international importance under the RAMSAR Convention. As Isla del Coco is the only Eastern Tropical Pacific (ETP) island featuring a tropical rainforest, understanding the interactions that drive the island's precipitation is fundamental to improve our knowledge of the island's characteristics and dynamics. Furthermore, such information is key for the preservation of the island and its biodiversity on land, in the water and in the air. Early descriptions of the climate of the island were provided in the late 19th century by Pittier (1898) and pioneer quantitative information was later given by Protti (1964). Modern studies based on meteorological records (Alfaro, 2008) show that the weather and climate of Isla del Coco feature intense precipitation, which ranges between 5000 and 7000 mm per year. Current systems, air circulation and the presence of the Intertropical Convergence Zone (ITCZ), among other factors, point to the island as a remarkable location to study climate phenomena such as El Niño-Southern Oscillation (ENSO) and climate change (e.g. Quirós-Badilla, & Alfaro, 2009; Maldonado, & Alfaro, 2012). Unfortunately, long term meteorological records do not exist, data are scarce and their quality is not always ensured. Available records include data from a rain gauge operated by the National Meteorological Institute since 1979 (Fernández, 1984). However, the data record contains numerous gaps, as the operation of the station was not continuous in time (Quirós-Badilla, & Alfaro, 2009).

This work combines in situ observations, reanalysis products and Lagrangian modeling to explore the relationship between observed precipitation and moisture transport at a regional scale, ranging from the diurnal cycle to interannual variability. Data from a GPS COCONet met station are used, after verification of their accuracy for precipitation measurements, in the absence of continuous data records in the most recent years. The study aims to determine whether the dominant wind flow plays a role in the diurnal cycle of precipitation, as reported by Alfaro (2008), and how the response of sea surface conditions to ENSO may drive precipitation changes on the island. The links amongst moisture transport, sea surface temperature (SST), evaporation and salinity are then analyzed at different time scales.
MATERIALS AND METHODS

Precipitation data include records from the Costa Rican National Meteorological Institute (IMN, in Spanish) weather station coded as 200002. This station is located in Chatham Bay (5°32'51" N - 87°02'42" W, 151 m.a.s.l.). Hourly data for the period from June 2011 to June 2012 were used. Hourly data were also computed from 1 and 5 min records of a COCONet GPS automatic weather station (Protti, González, Freymuller, & Doelger, 2012) for the June 2011 to December 2012 period. The COCONet GPS station is coded as CocosIsl_CRI2011 and is located in Wafer Bay (5°32'40" N - 87°03'20" W, 11 m.a.s.l.). The GPS station is equipped with a Vaisala WXT510/520 meteorological package that records rain and hail precipitation by impact. The performance of the GPS station measurements is evaluated using observations from expeditions during July 2-8, 2011 and March 15-21, 2012, recorded by Davis weather stations at Wafer Bay (Cortés et al., 2011; Cortés et al., 2012). A map of the island location relative to Costa Rica is provided in Figure 1, in which the location of the observation sites is shown. The map also depicts colored topography contours so that elevation differences can also be noticed, for further reference on the characteristics of the island and the location of the observation sites.

Sea surface temperature from the OISST (NOAA Optimum Interpolation Sea Surface Temperature Analysis) product (Reynolds et al., 2007) was retrieved for the period 1980-2012. The OISST product is generated using an optimum interpolation of Pathfinder AVHRR (Advanced Very High Resolution Radiometer) data. The OISST dataset has a 0.25° grid resolution and its coverage is global. Salinity at 5 m depth from the Simple Ocean Data Assimilation (SODA) dataset (Carton, & Giese, 2008) for the 1980-2012 period was retrieved; the average resolution of this dataset is 0.5°×0.5°. Input information used to generate the dataset includes available hydrographic profile data, moored temperature and salinity time series, surface temperature as well as salinity observations of various types, ocean station data and infrared satellite data. Monthly evaporation, sensible heat and latent heat flux data on a regular 1° grid of horizontal resolution from the Objectively Analyzed Air-sea Fluxes (OAFlux) Project (Yu, Jin, & Weller, 2008) were used. OAFlux data are constructed by blending satellite retrievals (e.g. SSM/I, AMSR-E and QuikSCAT) and NWP reanalyses (ERA-40, NCEP1 and NCEP2). Outgoing Longwave Radiation (OLR) monthly mean data from the AVHRR instrument, available on a 2.5°×2.5° grid from the Climate Diagnostics Center site, were used (Chelliah, & Arkin, 1992).
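A minimal sketch of the station comparison mentioned above (GPS met station versus the Davis stations deployed during the expeditions) is given below, assuming the records are available as plain arrays; the synthetic series stand in for the actual observations and only illustrate the hourly aggregation and correlation used for the evaluation.

```python
import numpy as np

# Hypothetical records over one 24-hour window: 5-min temperature from the GPS
# met station and hourly temperature from the Davis station (synthetic values).
rng = np.random.default_rng(2)
hours = 24
gps_5min     = 26 + 2*np.sin(np.linspace(0, 2*np.pi, hours*12)) + rng.normal(0, 0.2, hours*12)
davis_hourly = 26 + 2*np.sin(np.linspace(0, 2*np.pi, hours))    + rng.normal(0, 0.2, hours)

# Aggregate the 5-min GPS records to hourly means (12 samples per hour),
# mirroring how the GPS observations were averaged to match the expedition data.
gps_hourly = gps_5min.reshape(hours, 12).mean(axis=1)

# Pearson correlation between the two hourly series (as in the scatter plots of Fig. 2).
r = np.corrcoef(gps_hourly, davis_hourly)[0, 1]
print(f"hourly correlation: {r:.2f}")
```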
Using a numerical water vapor tracer (WvT) method, the source-receptor relationship was analyzed to determine the sources of moisture for precipitation on Isla del Coco and its vicinity. The method tracks the air masses that circulated over the island up to ten days back in time using the information from backward trajectories. The Lagrangian backward trajectories were generated for a global domain using the Lagrangian particle dispersion model FLEXPART (Stohl, Forster, Frank, Seibert, & Wotawa, 2005) and ERA-Interim data (Dee et al., 2011). Reanalysis data every 6 hr (00, 06, 12 and 18) and analyses for the intermediate 3 hr time steps (03, 09, 15 and 21) were used as FLEXPART input. A Lagrangian approximation of the Eulerian budget equation was used to estimate the difference between evaporation and precipitation integrated over an atmospheric column to identify the sources of moisture (see details in Stohl & James, 2004). Once the moisture sources were identified, the transport of moisture was estimated and time series were computed from hourly to monthly time scales. A clustering algorithm based on Dorling, Davies & Pierce (1992) was applied to reduce the number of trajectories and show explicitly the mean structure of the moist flux that contributes to precipitation on the island. Composites of SST, surface fluxes, evaporation and salinity were computed for the five strongest ENSO events between 1980 and 2012 based on the Multivariate ENSO Index (MEI) and Niño 3.4 indices. Table 1 shows the detailed information on the events selected for the compositing.

Table 1. ENSO event years used for the compositing: 1983, 1987, 1992, 1998, 2010; 1983, 1987, 1992, 1993, 1998; 1983, 1987, 1992, 1993, 1997; 1987, 1991, 1992, 1997, 2002.

GPS automatic station measurements performance test

The deployment of the COCONet met station at Wafer Bay is seen as a good opportunity to obtain complementary information to that provided by the IMN station. Considering the high frequency of the observations, the data are useful for the analysis of rain shower duration and intensity as provided by Alfaro & Hidalgo (2016). However, the WXT520 is indicated to be more suitable for short-term deployments, as the instrument is not intended for research performance purposes. As the objective of this work is to use the GPS met station precipitation information for a longer time span (18 months), an evaluation of the performance of the GPS met station is required. A comparison between the GPS met station data and data from the weather stations deployed during the expeditions was performed.

The overall results suggest that the information provided by the sensors of the GPS met station can be considered as reasonably accurate for the temperature, precipitation and wind speed variables. A good agreement between the GPS met and Davis automatic station measurements was found during the expeditions in 2011 and 2012. Figure 2 shows the corresponding scatter plots for temperature, relative humidity and precipitation. Temperature observations (Fig. 2a) present a large correlation and show that during the expeditions temperature ranged between 23°C and 30°C, so that the diurnal thermal amplitude on the island is considered to be large. Although the comparison of relative humidity (Fig. 2b)
shows more widespread results, the observed differences may respond to the emplacement of the stations, despite both being located in the same bay. As observed in Figure 1, the GPS met station is located more inland compared to the site of the expedition observations; hence, the observations from the expedition have a stronger influence of marine conditions. It is also important to mention that the expedition observations appear to be capped for values larger than 93 %, and this may also influence the results. A more appropriate comparison would be the use of specific humidity; however, there was not enough information available to estimate the specific humidity and use this variable instead of relative humidity. A longer time period of observations is still needed to determine whether the emplacement of the stations is enough to explain this result or if there might be sensitivity issues with the humidity sensors of the GPS met station. Precipitation measurements between the two stations are coherent (Fig. 2c); notice that the range of precipitation corresponds to the 2011 expedition and that observations were used at an hourly rate. Information from the GPS met station shows reasonable skill in reproducing the diurnal cycle, mainly of temperature and precipitation. Moreover, based on historical precipitation data, the GPS met station performance is also accurate in capturing intense rainfall events (not shown). The comparison for wind is not presented, as the location of the stations in the bay is such that wind direction measurements are not comparable. However, it is worth mentioning that wind speed measurements were consistent in terms of magnitude. To be more conclusive in terms of the data quality from the GPS station, control over a longer time span and the use of a higher quality humidity sensor are required for more detailed ground tests.

The annual cycle of moisture transport linked with precipitation on Isla del Coco

Alfaro (2008) suggested that the diurnal cycle of precipitation on Isla del Coco was strongly forced by the wind field. The overall effect of wind on the island's precipitation can be understood in terms of local and remote components. Such components include the land-sea breeze, topographic effects and the radiative forcing of precipitation by local differential heating. For precipitation, large scale forcing on the winds is an important constraint, as its role in modulating moisture transport and availability is well known in the region (Durán-Quesada, Gimeno, Amador, & Nieto, 2010; Durán-Quesada, 2012). To determine the importance of the wind driven moisture transport linked to precipitation on the island, the moisture sources for local precipitation were identified. The spatial scale of the transport and the locations where moisture uptake is larger were determined from the analysis of Lagrangian trajectories.

Air masses that supply moisture for precipitation in the vicinity of Isla del Coco have a regional origin, as shown in Figure 3. The results suggest that moisture contributions to the island come from evaporation over the ETP and the westernmost edge of the Caribbean Sea (red shading in Fig. 3).
Green and blue shading in Figure 3 show the regions in which precipitation exceeds evaporation; these areas are observed over the easternmost Tropical Pacific. Even when they do not act as moisture sources themselves, they play an important role in increasing the moist content of the atmosphere. If this precipitation re-evaporates, is transpired or recycled (depending on the underlying surface), the moisture supply for precipitation increases and it can be considered as a net moisture contribution to precipitation. The moisture sources present a well defined annual cycle; between November and January, small contributions from the inner Caribbean and South-West of the island were detected. Precipitation largely exceeds evaporation in the easternmost tropical Pacific due to the presence of the ITCZ (Fig. 3a). Most of the atmospheric moisture available in the region is linked to large scale rain producing systems. As shown by the averaged trajectories in Figure 4a, the moisture uptake is limited to the island vicinity. The structure of the moisture transport depicts an enhancement of the easterly low level flow. As the easterly flow is intensified by the secondary peak of the CLLJ, evaporation in the inner Caribbean and to the North-East of the island (Fig. 3b) becomes the main moisture supplier during February. During boreal winter months, the CLLJ is the main moisture conveyor for Isla del Coco. The dynamics of this evaporative moisture source is very complex, as it is related with the large scale circulation forcing and its interaction with the region, exhibiting also a strong air-sea coupling. Note that the 10-day integrated (E-P) maximum shown in Figure 3b is located over a region of warmer SST and of a deeper 20°C isotherm (see Fig. 4b of Xie, Xu, Kessler, & Nonaka, 2005). It is important to remark that, even when the air masses might have their origin in the Caribbean and the major moisture uptake takes place in the ETPac, the contributions from the Caribbean are not negligible. As the CLLJ decreases in intensity, the Caribbean influence retreats and the evaporative source of moisture once located to the east of the island vanishes. Precipitation over the ETPac is now led by the activity of the ITCZ and the development of convection over the Panama Bight region. The intensification of precipitation increases moisture availability in the vicinity of Isla del Coco (Fig. 3c). Note that during these months the ITCZ meridional migration (following the summer hemisphere) is northward; therefore, precipitation is enhanced over the ETP and moisture supply from evaporative sources is not detected. A few months later, the CLLJ reaches its summer maximum but, different from February, the regional conditions do not favor moisture transport. Enhanced precipitation may supply moisture to the atmosphere surrounding the island during the following months, with contributions showing a peak between June and September (see green and blue shaded areas in Fig. 3d). The results show that after May, an evaporative moisture source starts its development southwest of the island. This evaporative source of moisture becomes strongly active between June and September (see red shaded area in Fig. 3d)
until it starts to decrease after September. The presence of this strong evaporative source responds to the combination of increasing evaporation in the ETP and the strengthening of the southwesterly low level flow. SST becomes warmer (sensible heat increases), evaporation is enhanced and the release of latent heat increases. The latter increases the moisture content of the atmosphere. Low level winds transport the moist air to the vicinity of the island, where it can contribute to precipitation. As noticed from Figure 4c, the southerly flow transports the air masses towards Isla del Coco, intense uptake occurs South-west of the island and the moisture supply from this source is intensified. Note that during August the moisture content in the ETPac is very large, as convective processes and evaporation are intense. Figure 4 highlights the role of regional low level winds for the transport of moisture; for reference, the average level of the trajectories shown is below 750 hPa. To summarize, moisture transport from the two evaporative sources of moisture was estimated and monthly averaged. Black solid lines in Figure 5 show the annual cycle of moisture contributions. Moisture transport due to easterly winds (Fig. 5a) peaks during boreal winter; the decrease in moisture contributions from this source is abrupt, and a slight increase is observed during the summer months when the easterly flow intensifies again, in good agreement with the annual cycle of the CLLJ. Meanwhile, moisture transport from the identified South-west evaporative source increases from April to September (Fig. 5b). Although this source is active for a longer period, the intensity of its contributions is smaller than that of the easterly transport. The activity of both evaporative sources is strongly seasonally constrained. The annual cycles provided in Figure 5 show that Isla del Coco has a continuous moisture supply all year round. The ITCZ and the development of convective systems account for precipitation on the island, but it is the moisture transport which sustains the moisture availability required for rain producing systems locally and hence helps to provide the conditions on the island for the existence of a rain forest.

Diurnal cycle of moisture supply: effect on precipitation on Isla del Coco

The diurnal cycle of precipitation on Isla del Coco is driven by the influence of low level winds, the ITCZ position and the stability conditions near the island (Alfaro, 2008). Within this volume, Alfaro & Hidalgo (2016) point out local differences in the diurnal cycle of precipitation at Chatham and Wafer bays based on available observations (see their Fig. 1). According to their results, unlike Wafer Bay, a peak of afternoon summer rainfall is reported at Chatham. At Wafer Bay, precipitation peaks in the late morning from January to March, in the afternoon from April to May and in the early morning from June to September, while December tends to be a drier month.

Here we aim to answer two questions: whether the transport of moisture has a diurnal cycle and, if so, how it modulates the diurnal cycle of precipitation. Moisture contributions from the evaporative sources were computed at an hourly time scale; in addition, Lagrangian precipitation estimates were used to approximate the recycling of ETPac precipitation on Isla del Coco.
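The hourly bookkeeping described above can be sketched as follows, assuming that for every particle arriving over the island box we know its arrival month, local hour, and the ten-day integrated net fresh-water flux it experienced (e − p ≈ m·dq/dt, following the Stohl and James, 2004 budget idea); the arrays are synthetic stand-ins, not actual FLEXPART output.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins for the trajectory diagnostics: each "arrival" is a particle
# reaching the island box, tagged with its arrival month, local hour, and the
# 10-day integrated (e - p) it experienced (mm/day).
n_arrivals = 50000
month = rng.integers(1, 13, n_arrivals)          # arrival month (1-12)
hour  = rng.integers(0, 24, n_arrivals)          # local arrival hour (0-23)
e_minus_p = rng.normal(0.5, 2.0, n_arrivals)     # 10-day integrated e - p per particle

# Contributions to precipitation: the positive (uptake) part of e - p is counted
# as supply from evaporative sources, while the negative part is used here as a
# proxy for moisture made available by upstream precipitation (recycling).
uptake    = np.where(e_minus_p > 0,  e_minus_p, 0.0)
recycling = np.where(e_minus_p < 0, -e_minus_p, 0.0)

def diurnal_cycle(values, month, hour):
    """Mean contribution for each month and 3-hourly bin, as plotted in Fig. 6."""
    out = np.zeros((12, 8))
    for m in range(1, 13):
        for b in range(8):
            sel = (month == m) & (hour // 3 == b)
            out[m - 1, b] = values[sel].mean() if sel.any() else 0.0
    return out

supply_cycle    = diurnal_cycle(uptake, month, hour)
recycling_cycle = diurnal_cycle(recycling, month, hour)
print(supply_cycle.shape, recycling_cycle.shape)  # (12, 8): month x 3-hour bins
```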
The 1980-2012 monthly means of the diurnal cycle of moisture contributions to Isla del Coco precipitation are shown in Figure 6. The recycling of moisture (Fig. 6a) peaks in the early morning and late afternoon between November and April. This result is coherent with the diurnal cycle of precipitation estimated from observations, encouraging the capability of the model to capture the moisture variations at such scales. For this period, moisture transport from the eastern source is enabled and, as shown in Figure 6b, transport from this source peaks in the same window. During the peak of moisture transport from the eastern source, precipitation occurs in the late morning hours.

We can infer that precipitation on the island is diurnally driven by long range moisture transport from the easternmost Tropical Pacific and the inner Caribbean. Between June and August, precipitation recycling and moisture transport from the south-western source (Fig. 6c) peak in the morning and early afternoon hours. We highlight the strong relationship between the transport of moisture and the timing of precipitation. Moisture exports from the remote source located to the north-east of the island have a marked annual cycle that peaks during February (Fig. 6b). As previously indicated, the CLLJ is the conveyor for regional moisture transport from both evaporative sources. Cook & Vizy (2010) demonstrated that the meridional wind is more intense at 10:00 (local time) for the southerly flow. The intensified wind flow from the south during the late morning might be reflected in the minimum of transport observed, as the island is located south of the winds veering. Transport from the south-western source is maximum between August and November (Fig. 6c). The wind field during these months is featured by the increase of a south-westerly flow. This flow is known to be related to the development of the Chocó jet (Poveda, & Mesa, 2000) and has also been identified as a moisture conveyor for Central America (Durán-Quesada et al., 2010; Durán-Quesada, 2012). Moisture transport due to the Chocó jet is slightly larger during the morning. The diurnal cycle of the Chocó jet responds, among other mechanisms, to the development of mesoscale convective systems over the offshore Pacific coastline (Poveda, Waylen, & Pulwarty, 2006). The forcing of the diurnal cycle of the Chocó jet can be inferred from known results that show that convective systems are more likely to develop at 0600 and 1500 local time (Mapes, Warner, Xu, & Negri, 2003). Figure 6c shows that, in fact, transport from this source is larger in the same time window. The diurnal cycle of the moisture supply from the south-western source is modulated by the Chocó jet forcing due to the development of convective systems. The results obtained with the Lagrangian modelling approach used in this study are coherent with the near-land regional timing of convection in the eastern Pacific. Bain, Magnusdottir, Smyth & Stern (2010) showed that, despite an east-west shifting, convective activity in the ITCZ peaks in the early morning and afternoon. A similar timing was found by Mapes et al. (2003) for mesoscale convective systems developing over western Colombia and the Panama Bight.
Moisture transport response to ENSO

Isla del Coco is located in a region where the responses of SST, OLR and surface fluxes to ENSO shift (Quirós-Badilla, & Alfaro, 2009). This makes the island an ideal location to study the effect of ENSO on the regional distribution of precipitation. This part of the study focuses on the analysis of the regional response of SST, evaporation, OLR, salinity and surface fluxes to ENSO and the associated changes in the moisture transport to the island from the evaporative sources, as well as the contributions from precipitation recycling. In general terms, precipitation recycling and moisture transport show a contrasting response to ENSO phases, as shown in Figure 7 by the composited annual cycles for El Niño (solid grey line) and La Niña (dotted grey line). Precipitation recycling shows a larger response to ENSO during boreal summer, as the recycling increases (decreases) for El Niño (La Niña), as shown in Figure 7a. This result is in agreement with the intensification of precipitation in the vicinity of the island during El Niño events. Contributions from the eastern evaporative source (Fig. 7b) present a strong sensitivity to the seasonal response of the wind field to ENSO, that is, an opposite response to the ENSO phases during winter and summer months. For February-March, transport increases (decreases) for La Niña (El Niño), while for June-August transport is more intense (weaker) during El Niño (La Niña) events. The response of this source to ENSO is completely led by the dependence on the low level wind flow. Note that the observed response of the moisture transport to ENSO forcing is actually the fingerprint of the CLLJ response to ENSO. The CLLJ is known to be stronger (weaker) in summer for El Niño (La Niña), while the opposite is observed in winter. The difference between the responses of the western moisture source to the ENSO phases is larger between May and September (Fig. 7c). From May to July, moisture transport from this source is enhanced (decreased) for El Niño (La Niña), while after July the observed response is the opposite. As the western source reaches its maximum intensity after July, this result may imply that, under El Niño conditions, the moisture supply weakens. Therefore, a decrease in precipitation on the island for these months is suggested, as the moisture supply is reduced.

To evaluate how the observed precipitation variability relates to the modeling results for moisture supply, the results from Quirós-Badilla & Alfaro (2009) are considered, to alleviate the poor long term data availability on the island. Quirós-Badilla & Alfaro (2009) found that El Niño events are related to above normal precipitation and the opposite was determined for La Niña events, with the detected signal being stronger for October. The results obtained in this work show that moisture supply is increased under El Niño conditions mostly during the summer months, consistent with the enhanced precipitation for warm ENSO. However, a decrease in moisture transport from the western source, as well as in moisture recycling, was detected during August, and a deficit in the overall moisture contributions from the sources was found for August under El Niño. Monthly composited differences of TRMM precipitation during El Niño and La Niña events were computed in order to evaluate the regional scale precipitation response.
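The compositing used throughout this section can be sketched as below, assuming a monthly time series (1980-2012) of a moisture contribution, or any gridded field collapsed to a point, and lists of event years standing in for Table 1; the data and year lists are placeholders.

```python
import numpy as np

# Placeholder monthly series (1980-2012) of a moisture contribution (mm/day).
years = np.arange(1980, 2013)
rng = np.random.default_rng(4)
data = rng.gamma(2.0, 1.5, size=(len(years), 12))    # rows = years, cols = months

# Event years selected for the compositing (placeholders standing for Table 1).
el_nino_years = [1983, 1987, 1992, 1998, 2010]
la_nina_years = [1989, 1999, 2000, 2008, 2011]

def composite(data, years, event_years):
    """Mean annual cycle over the selected event years."""
    mask = np.isin(years, event_years)
    return data[mask].mean(axis=0)

clim      = data.mean(axis=0)                        # climatological annual cycle
nino_comp = composite(data, years, el_nino_years)    # El Nino composite (cf. Fig. 7)
nina_comp = composite(data, years, la_nina_years)    # La Nina composite (cf. Fig. 7)

# Composite difference (warm minus cold ENSO), analogous to Fig. 8 for gridded fields.
print(np.round(nino_comp - nina_comp, 2))
```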
Based on TRMM data, El Niño is associated with a decrease in precipitation over the ETPac north of the equator during August. Therefore, the decrease in moisture supply is consistent with the observed precipitation reduction in the ETPac. This contrasting result with Quirós-Badilla & Alfaro (2009) may reflect continuity and quality issues with the observational data used in their study, as well as the dominance of local conditions on the island that are not reproducible by the trajectories and not captured by TRMM due to its horizontal resolution. From January to early March, the transport of moisture and precipitation recycling decrease during El Niño, as does daily rainfall according to the TRMM composites.

SST, OLR, latent and sensible heat fluxes, evaporation and salinity are considered to evaluate links that may provide further information about the ENSO forcing on moisture supply processes. Considering the strong signal detected by Quirós-Badilla & Alfaro (2009) and the precipitation recycling peak found in this study for October, further fields are analyzed. The SST distribution (Fig. 8a) during El Niño features the associated warming in the tropical Pacific. Increased SSTs accelerate winds due to thermal gradients; notice the Caribbean-Pacific SST gradient, despite the CLLJ being at its minimum for this month. Via such a mechanism, the winds that transport moisture from the north-eastern source can be modulated and the contributions from the source change, as shown in Figure 7c. OLR differences (Fig. 8b) show the enhancement of convection, as depicted by negative OLR values during El Niño.

The air-sea coupling in the vicinity of Isla del Coco is of importance, as it defines not only regional weather and climate but also biological processes. The distribution of salinity is driven, in general terms, by the difference between evaporation and precipitation as well as by the freshwater discharge of rivers (for further details on the salinity distribution in the ETPac please refer to Amador et al., 2016 in this volume). Isla del Coco lies under the ITCZ and near the Pacific coast of Colombia, where strong convective systems seed heavy rainfall in a region in which the river discharge into the ETPac is very large. Evaporation in the region is forced by large SSTs but is also wind driven (see in Fig. 8c the evaporation maxima west of the Tehuantepec and Papagayo gaps for January). During January, under El Niño conditions, the intensification of precipitation largely exceeds evaporation. As a result, salinity decreases in the waters that surround the island, as shown in Figure 8d. However, the suppression of deep convection west of Colombia and the consequent reduction of precipitation over some locations of South America are associated with an increase in the salinity in the coastal regions of Panama and Colombia, as shown in Figure 8d by the red shading. As observed from that figure, this generates a strong salinity gradient east of the island that may induce a circulation due to density differences. This salinity gradient must be studied in terms of the impacts of ENSO on productivity in the waters surrounding the island.
CONCLUSIONS

The scarcity of long-term meteorological observations on Isla del Coco can be alleviated by considering the use of a GPS met station deployed for seismic purposes, even when the instruments on the GPS station currently operating on the island are not intended for research performance purposes. Based on comparisons with automatic meteorological station data, the quality of the information is considered fairly good for research applications in the absence of other high quality meteorological observations. It is worth mentioning that a longer time span analysis is still required to be conclusive in terms of assuring the quality of the GPS station meteorological data, as the evaluation performed only included short periods of available observations from field campaigns on the island.

The results of the WvT method applied to study moisture transport and contributions to precipitation on Isla del Coco allowed us to determine the island's main moisture suppliers. Moisture sources were identified to be linked with contributions from evaporation from both the Eastern Tropical Pacific and the Caribbean Sea, as well as with the recycling of moisture. The clustering analysis of backward trajectories reproduced the transport as dominated by regional low level winds and constrained by the seasonal migration of the ITCZ. Furthermore, deep convection related to mesoscale convective systems over the Panama Bight was found to act as a modulator of precipitation-linked moisture contributions.

The annual cycle of moisture supply was determined to be controlled by the regional transport of moisture, and its diurnal cycle by the development of mesoscale convective systems. Based on the results, we can conclude that precipitation on the island is seasonally forced by regional winds but diurnally driven by large scale convective processes. Observational data are needed to properly understand the diurnal wind regime on the island beyond the known local timing of the land-sea breeze. As the CLLJ strengthens during the first trimester of the year, northeasterly moisture transport increases and precipitation is led by the low level winds and local atmospheric stability. When moisture transport retreats, precipitation is controlled by large-scale convection until the south-westerly flow and evaporation enhance the transport of moisture South-west of the island.

Transport of moisture exhibits a well defined diurnal cycle: precipitation recycling peaks in the early morning and afternoon, in good agreement with large scale precipitation maxima. However, moisture availability is not necessarily linked with rain, so further research on the local stability conditions suitable to enhance precipitation is needed. A relevant aspect to consider is that when the evaporative sources are active, the release of latent heat is linked to a more unstable atmosphere, and this may play an important role in determining whether convection develops or not. From the analysis of the diurnal cycle of the moisture transport, it was found that transport related to the CLLJ does not seem to show a very marked diurnal cycle, whereas transport related to the Chocó jet is slightly delayed with respect to the known diurnal behavior of the development of convective systems offshore the tropical South American Pacific coastline.
Moisture supply is sensitive to ENSO; in general terms, contributions increase (decrease) during El Niño (La Niña) events. However, the response of the moisture supply to ENSO requires a month-by-month analysis, as the response changes through the year. The north-east evaporative source shows a significant response to the seasonally dependent ENSO response of the low level wind flow. Transport from this source is enhanced (decreased) for La Niña (El Niño) during February-March, while it presents the opposite behavior for June-August and October-November. The response of the south-western evaporative source to La Niña is very small. In contrast, for El Niño, transport is largely increased between June and July and decreased during August.

The evaluation of the ENSO sensitivity of precipitation, SST, evaporation, OLR and salinity shows how increases in SST may force winds via the enhancement of the Caribbean-Pacific SST gradient. This is reflected in the modulation of the moisture transport from the north-eastern source. Convection linked to warm ENSO is enhanced, as seen from the OLR data. The relationship of the difference between evaporation and precipitation with the spatial distribution of the sources of moisture was also analyzed considering near-surface salinity. It was found that, despite evaporation being largely forced by the SST distribution, wind driven evaporation is important when the low level flow is enhanced by the topographic gaps in the Central American Isthmus. Therefore, the response of evaporation and its influence on salinity under ENSO depend on how this variability mode modulates regional low level winds. The variability of the salinity distribution, showing increasing salinity under El Niño, was found to be related to the suppression of deep convection west of Colombia.

Beyond the analyses presented in this work, the results could motivate further studies to assess new emerging scientific questions. To understand in detail the diurnal cycle of precipitation on Isla del Coco and its relationship with the interaction between moisture transport and the geographical features of the island, it is important to conduct specific studies in which the vegetation on the island is considered. Moreover, the problem of the diurnal cycle of moisture transport requires either higher resolution wind and humidity information or long-term in situ stations arranged along a transect of the island that includes elevation observations. To account for more information on how the ENSO mode affects the island and its ecosystems, considering the vertical variations in the distribution of salinity is fundamental, as strong salinity gradients forced by ENSO may lead to vertical circulations with potential consequences for the productivity of the island and its surrounding biodiversity.

ACKNOWLEDGMENTS

This work was supported by projects A9-180, A9-902, 805-A9-532 (CSUCA-ASDI), B3-600, B3-413 (CI), B0-065, B4-227, B5-601, B5-295 and 808-B5-298. Thanks to José L. Vargas and Alberto Salazar for all the logistic support during the expeditions, as well as to the scientists, rangers and crew participants that collaborated with setting up the meteorological stations. The authors also acknowledge the comments and suggestions by Hugo Hidalgo and the anonymous reviewers, which improved the quality of the manuscript.
RESUMEN

Surrounded by the Pacific waters and stories of shipwrecks and pirates, one of the rainiest islands of the Eastern Pacific protects a biodiversity treasure: Isla del Coco. This study presents an analysis of the moisture sources related to the contributions to precipitation in the region. The diurnal cycle of precipitation on the island was reviewed using data from a meteorological station of the COCONet GPS array, previously evaluated with data from automatic weather stations deployed during observation campaigns carried out between 2011 and 2012. Near-surface salinity patterns were analyzed together with sea surface temperature, evaporation, outgoing longwave radiation, and sensible and latent heat fluxes. The moisture contribution to precipitation on the island is provided by evaporative sources, and moisture recycling is also a relevant process. Regional rainfall is a continuous supplier of moisture, whereas moisture transport has a very well defined seasonal behavior. The analysis of the diurnal cycle of the moisture supply suggests that the moisture contributions related to precipitation recycling show a slightly lagged response to deep convection in the region. The diurnal cycle of the contributions from the evaporative sources, based on the modeling results, is consistent with the diurnal cycle of precipitation on the island. The trajectory analysis highlights the role of the low level winds, the position of the Intertropical Convergence Zone and the stability conditions in modulating the moisture supply. The contributions from the evaporative sources show sensitivity to El Niño-Southern Oscillation (ENSO). The contributions from moisture recycling show strong variability related to ENSO, with an increase of the moisture supply during warm ENSO events in the boreal summer and autumn months. The variability of the contributions from the North-east evaporative source is modulated by the response of the Caribbean Low Level Jet to ENSO. The South-west evaporative source is sensitive to El Niño: moisture transport decreases (increases) between November and March (May to July), while the response to La Niña was determined to be weak. A good relationship between the response of the analyzed fields to ENSO and the regional dynamics was found.

Fig. 1. Location of Isla del Coco, shown with a zoom of the island in which the locations of the GPS and IMN stations, as well as the location where the Davis automatic weather station was deployed at Wafer Bay during the expeditions, are indicated. Shaded contours show details of the topography of the island.

Fig. 2. Comparison of observations from the GPS station sensors and the expedition deployments, both at Wafer Bay, during July 2-8, 2011 and March 15-21, 2012. GPS station observations were averaged (or accumulated) over hourly periods to match the resolution of the expedition observations for (a) temperature; (b) relative humidity and (c) precipitation.

Fig. 3.
Climatology of the ten-day integrated (E-P) as computed from the Lagrangian backward trajectories for air masses precipitating inside the dotted-line box for (a) December; (b) February; (c) April and (d) August. Positive values show the evaporative sources of moisture, hence the moisture uptake locations. Negative values show regions where precipitation recycling might be a source of moisture availability for precipitation on Isla del Coco. The location of Isla del Coco is shown by the blue dot and the area for analysis is the black-line square. Ten days was used as it is the average residence time of water vapor.

Fig. 4. Six-day climatological clusters (1980-2012) of Lagrangian backward trajectories for air masses precipitating on Isla del Coco and surrounding waters (the island is shown by the blue dot and the area for trajectory arrival is shown by the black square). Colors show the specific humidity in kg/kg as obtained from the modeling results. The figures clearly show that the maximum moisture uptake occurs in the waters surrounding the island.

Fig. 5. Monthly composites of the moisture transport from the evaporative moisture sources to precipitation on Isla del Coco. The figures show the marked annual cycle of the moisture transport from the evaporative sources identified as suppliers for Isla del Coco precipitation.

Fig. 6. Diurnal cycle of moisture transport (monthly averaged) from the sources of moisture to the vicinity of Isla del Coco (dotted-line box in Fig. 2) using three-hourly means of the 10-day integrated net fresh water flux computed with the Lagrangian backward trajectories, units in mm/day. (a) Evaporative source of moisture located to the North-East of the island; (b) evaporative source of moisture located to the south-west of the island.

Fig. 7. Composites of the moisture supply from the moisture sources to precipitation on Isla del Coco (black solid line for the average contributions, solid grey line for contributions under El Niño and dotted grey line for contributions under La Niña). Units in mm/day: (a) contributions from precipitation recycling, and moisture transport from (b) the evaporative source of moisture located to the North-East of the island and (c) the evaporative source of moisture located to the south-west of the island.

Fig. 8. Composite differences between warm and cold ENSO for (a) SST in October; (b) OLR in October; (c) evaporation in January and (d) salinity in January.
Analysis of the ratio between the plasticity of clay and the expansion capacity by changes in humidity and temperature

Clay is a cohesive material that varies in volume due to changes in humidity and temperature. Its behavior is studied through physical and geotechnical characterization of the material. The experimental analysis of the expansiveness of clays is related to plasticity, which depends on the expansive minerals the clay contains. The objective is to analyze the relationship between the plasticity index and the expansion capacity due to changes in humidity and temperature; various types of clay from San José de Cúcuta, Colombia, were studied. The liquid limit and plastic limit were analyzed, from which the plasticity index was determined. Free expansion tests in test tubes and calcination at 1000 °C were carried out to determine the volumetric change due to humidity and temperature, respectively. The clays with a plasticity index of 10%-20% presented expansion by humidity of 5%-10% and by calcination of 0%-10%, which indicates low expansion, while the clays with a plasticity index of 20%-40% presented expansion by moisture of 20%-50% and by calcination of 25%-50%, which indicates moderately high expansion. The results show that there is a relationship between the plasticity index and the expansion capacity due to changes in humidity and temperature.

Introduction

Clays are unstable materials that present variation in their volume due to wetting and drying processes, or due to calcination at temperatures above 1000 °C. The behavior of clays is uncertain, and it can only be known when the material is physically and geotechnically characterized. The study of clays is widely used in the field of civil engineering because the presence of this material makes building construction difficult due to its volumetric instability. However, there is a lack of studies that correlate the physical and geotechnical properties of clays with both the expansive capacity caused by humidity variations during wetting and drying cycles and the expansion caused by calcination at temperatures above 1000 °C. The clays with the highest expansive capacity occur in areas with warm and tropical climates [1]. This is due to low rainfall and evapotranspiration processes, contributing to post-deposition patterns of the soil [2,3]. The volumetric change caused by wetting and drying cycles arises from the electronegativity imbalance that originally occurs in the solid-state clay fabric; this imbalance is compensated when exchange cations in water surround the clay sheet, generating a double diffusion layer (hydration caused by the surface of the clay crystal adsorbing water molecules) [1]. On the other hand, the volumetric changes due to calcination of clays occur when the material is exposed to relatively high temperatures. This process is due to the fact that the clay, which is basically silica, has properties of pyroplasticity and viscoplasticity; that is, when the temperature increases, the clay reacts and becomes a viscous mass in which the reaction of the internal gases produces closed internal porosity, generating expansion of the specimens [4]. Various studies have addressed the expansiveness of clays due to changes in humidity. Pérez et al. (2021) studied the volumetric change of montmorillonite clay under wetting and drying cycles [1]. Sridharan (2014) studied the clay mineralogy and physicochemical mechanisms that govern the behavior of fine soils [5]. Ruge et al.
(2019) performed a mineralogical, microstructural and porosimetric analysis of clay soils to determine the mineral composition that controls the potential swelling of clay soils [6]. On the other hand, the study of the expansivity of clays by calcination at temperatures higher than 1000 °C is related to the production of lightweight aggregates. Studies have been carried out on the expansion of clays for the production of lightweight aggregates, starting from an exhaustive review and ending with the evaluation of the expansion capacity of clays with the addition of wastewater [7]. Sánchez et al. (2019) studied the influence of the variation of the calcination time on the expansion of the clays [8]. Likewise, Gang Lee (2016) studied the swelling mechanism of clay used to make lightweight aggregate according to the size of the particles [9], and Loutou and Hajjaji (2017) studied lightweight aggregates based on clay residues through thermal transformations and their physical-mechanical properties [10]. The aforementioned studies are based on the work carried out by Riley (1951), who established a theoretical and experimental correlation between the expansion capacity of clays by calcination and their chemical composition, presented by means of a ternary diagram for predicting expansiveness [11]. Considering the presence of expansive minerals that affect the behavior of clays when they are subjected to changes in humidity and temperature, the present research work aims to study the relationship between the plasticity index, determined by the Atterberg limits (liquid limit and plastic limit), and the expansion capacity due to changes in humidity and temperature of 10 clays from the city of San José de Cúcuta, Colombia.

Methodology

In this section, the method used to determine the plasticity index of the study clays is presented, as well as the procedure carried out to measure the expansion due to humidity and temperature changes of the clay materials under study.

Determination of the plasticity index of the clays studied

The plastic limit and liquid limit of the 10 clay samples were studied; the plastic limit is known as the point at which a material goes from a semi-solid state to a plastic state, while the liquid limit is the point at which the clay sample passes from a plastic state to a liquid state [12]. The plastic limit is determined on the mass of clay that passes through the No. 40 sieve, that is, the clay fraction with particles smaller than 0.42 mm. The clay is kneaded into cylinders of 8 g in mass and 3.18 mm in diameter. The sample is rolled between the palm of the hand and a smooth surface that will not absorb moisture until it begins to fracture. On the other hand, the liquid limit is determined with the standard Casagrande test, in which two sections of clay are subjected to a number of impacts in a cup known as the Casagrande pan until the two sections touch [13]. The plasticity index depends on the liquid limit and the plastic limit: it is determined by subtracting the value of the plastic limit from the liquid limit. The plasticity index is expressed in Equation (1). A low plasticity index indicates that a small increase in the moisture content of the soil transforms it from a semi-solid to a liquid condition, that is, it is very sensitive to changes in humidity, whereas a high plasticity index indicates that for a soil to pass from a semi-solid state to a liquid state, a large amount of water must be added [13].
PI = LL − PL (1)

where PI is the plasticity index, LL is the liquid limit, and PL is the plastic limit.

Free expansion test in specimen

The free expansion index is defined as the increase in volume that a soil undergoes, without external restrictions, when it is submerged in water [14]. To determine the free expansion index, a 5 g portion of clay is taken and poured into a cylinder of 100 ml capacity, which is then filled with distilled water, and another 5 g of clay is poured into a second cylinder that is completed to 100 ml with kerosene. Kerosene is used because it is a non-polar fluid that does not generate volumetric changes in the clay. The samples are allowed to settle for a period of 24 hours to reach volumetric equilibrium; after 24 hours, a reading is taken of the volume reached in the test tubes. Equation (2) shows the expression used to calculate the free expansion index:

FEI = ((V_d − V_k) / V_k) × 100 (2)

where FEI is the free expansion index, V_d is the volume read from the cylinder with clay and distilled water, and V_k is the volume read from the test tube with clay and kerosene.

Calcination test

According to Sánchez (2019) [8], there is no relationship between the expansion capacity and the exposure time of clay pellets to temperatures above 1000 °C. Therefore, for the clay calcination test, a static heat treatment is applied at a temperature of 1100 °C with a burning time of 5 minutes. The burning curve starts from room temperature and rises to 600 °C in 2 minutes, in order to release the gases that could cause the mixture to break down; these gases are produced by the combustion of carbon compounds. The clays are then held for 3 minutes at a temperature of 1100 °C, where the iron oxide (Fe2O3) reacts, causing the release of oxygen [8]. Equation (3) shows the expression used to calculate the calcination expansion index:

CEI = ((V_c − V_0) / V_0) × 100 (3)

where CEI is the calcination expansion index, V_c is the volume read after the calcination process, and V_0 is the volume read before the calcination process.

Results

The prediction of the expansiveness of clay is complex and requires the analysis of several variables; the plasticity index has proven to be one of the most important characteristics for the characterization of a highly active clay material, that is, one whose volume varies due to external changes. In addition, the physical analysis of the liquid limit and plastic limit can be correlated both with the expansivity by calcination and with that caused by changes in the moisture of the clays. In the following sub-sections, the results obtained in the laboratory are presented, and the relationship between the physical-geotechnical properties of the clays and the expansion capacity they exhibit due to changes in humidity and temperature is indicated. The plasticity index defines the plastic field of materials and represents the percentage of moisture that the clays must have to remain in a plastic state. This value allows determining the settlement parameters of a clay material and its potential expansiveness. Table 1 shows the chemical compounds of silica (SiO2), alumina (Al2O3), iron oxide (Fe2O3), and other fluxing oxides (Rx). In addition, it presents the liquid limit and the plastic limit together with the calculated plasticity index of the 10 clay samples studied. The clays studied are mainly composed of SiO2 (56%-70%), Al2O3 (17%-25%), Fe2O3 (4%-12%), and other fluxing oxides (4%-9%). A short computational illustration of Equations (1)-(3) is given below.
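The following Python sketch implements Equations (1)-(3) as written above so that the index calculations can be reproduced; the function names and the sample readings are illustrative assumptions only and do not come from the laboratory data of this study.

def plasticity_index(liquid_limit, plastic_limit):
    """Equation (1): PI = LL - PL, all values in percent."""
    return liquid_limit - plastic_limit

def free_expansion_index(vol_water_ml, vol_kerosene_ml):
    """Equation (2): relative volume increase of clay in distilled water
    with respect to the same mass of clay in kerosene, in percent."""
    return (vol_water_ml - vol_kerosene_ml) / vol_kerosene_ml * 100.0

def calcination_expansion_index(vol_after_ml, vol_before_ml):
    """Equation (3): relative volume increase after firing at 1100 degC, in percent."""
    return (vol_after_ml - vol_before_ml) / vol_before_ml * 100.0

# Hypothetical readings for one clay sample
pi = plasticity_index(liquid_limit=45.0, plastic_limit=22.0)                 # -> 23.0 %
fei = free_expansion_index(vol_water_ml=14.0, vol_kerosene_ml=10.0)          # -> 40.0 %
cei = calcination_expansion_index(vol_after_ml=13.0, vol_before_ml=10.0)     # -> 30.0 %
print(f"PI = {pi:.1f} %, FEI = {fei:.1f} %, CEI = {cei:.1f} %")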
Chemically, clay exhibits volume changes due to calcination when it has high contents of silica and alumina, compounds that generate the material's viscoplasticity. In addition, the iron oxide must be eliminated to avoid the instability of the material, so the material must be raised to a temperature of 600 °C to calcine this compound. However, it has not been shown that there is a correlation between the chemical properties and the geotechnical properties of clays. It is evident that clay samples 1, 2, and 7 have the lowest plasticity index, which indicates that these samples require only a small addition of water to go from a semi-solid to a liquid condition. Depending on the plasticity index and the liquid limit, it can be determined whether the plasticity of the clay is high, medium, or low. Figure 1 shows that all the materials studied are clays; samples 1 and 7 are the clays with low plasticity, with a liquid limit of approximately 28% and a plasticity index of approximately 15%. The samples designated as 2, 4, 5, 9, and 10 are clays classified with medium plasticity, with a liquid limit of 30%-50% and a plasticity index of 16%-28%. Finally, the samples named 3, 6, and 8 are classified as clays with high plasticity, presenting a liquid limit of 50%-58% and a plasticity index of 33%-40%.

Figure 1. Determination of the plasticity level of the studied clays (low, medium, or high) according to the relationship between the liquid limit and the plasticity index.

Using Equation (2), the expansion capacity under wetting and drying cycles of the studied clays was determined. Figure 2 shows an increasing trend that relates the plasticity index and the free expansion index. A constant growth rate is evident; that is, as the plasticity index of the clay increases, the expansion index due to changes in humidity increases 1.6 times. This shows a correlation between plasticity and expansion due to wetting and drying cycles of the clays. The calculation of the expansion index by calcination of the clay was carried out using Equation (3). Figure 3 shows the increasing trend of the expansivity of the clays due to temperature changes. Clays exposed to calcination at temperatures above 1000 °C show expansion due to the production of internal pores resulting from the properties of pyroplasticity and viscoplasticity. Figure 3 shows a constant increasing trend of the calcination expansion index with respect to the plasticity index. This indicates that the clay with greater plasticity will have a greater increase in volume due to sudden changes in temperature. The results in Figure 2 and Figure 3 show that there is a directly proportional correlation between the plasticity of clay materials and their expansion capacity due to changes in humidity and temperature. The relationship between changes in humidity and the plasticity of the clay can be predicted because a clay with greater plasticity has a greater capacity to absorb water, which causes the clay to increase in volume. This can be identified in areas where there are constructions founded on clay terraces with high plasticity indices, since in these areas, during rainy seasons, the constructions tend to be damaged because over-pressures are generated by the expansion of the clay soils. However, the swelling of the clay due to temperature changes in relation to the plasticity index is somewhat more complex to analyze; a simple way of quantifying the trends reported in Figures 2 and 3 is sketched below.
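One possible way to quantify the increasing trend between the plasticity index and the expansion indices is an ordinary least-squares fit, as in the sketch below; the data points are entirely hypothetical placeholders, since the measured values appear only in Figures 2 and 3 of the paper.

import numpy as np

# Hypothetical (PI, FEI) pairs for ten samples, in percent
pi = np.array([14, 15, 35, 22, 25, 38, 16, 33, 27, 20], dtype=float)
fei = np.array([8, 9, 45, 21, 26, 50, 10, 42, 30, 18], dtype=float)

slope, intercept = np.polyfit(pi, fei, deg=1)   # linear trend FEI ~ slope*PI + intercept
r = np.corrcoef(pi, fei)[0, 1]                  # strength of the linear relationship
print(f"FEI ~ {slope:.2f} * PI + {intercept:.2f}  (r = {r:.2f})")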
So far, the studies carried out on the expansion of clays by calcination have focused on the analysis of the chemical composition. This research therefore shows that expansion by calcination correlates not only with the chemical composition, but also with the geotechnical properties. A high plasticity index indicates a great water absorption capacity, which may be related to the pyroplastic instability of the clay when it is calcined; in addition, the high content of compounds such as silica and alumina provides the viscoplastic instability that is required for the clay to expand as the calcination temperature increases. That said, it is advisable to analyze clays from other parts of the world and identify whether the same correlation presented in this work is observed. Generally, clay that presents high expansion rates due to wetting and drying cycles is extracted and discarded because this type of soil damages the construction of civil works. However, based on what has been demonstrated in the present research work, it can be concluded that clays that present problems for the foundations of buildings can be treated and used for the manufacture of other types of materials, such as lightweight aggregates for hydroponic crops, gardening, lightweight concrete, and thermal and acoustic insulators, among others. These are uses that can be given to the product of the calcination of clay with high plasticity indices.

Figure 3. Ratio between the plasticity index and the expandability by calcination measured by means of a static heat treatment at temperatures above 1000 °C.

Conclusions

With the results obtained, it was determined that there is a relationship between the plasticity index of the clays and the variation in volume due to changes in humidity and temperature. Clays with a plasticity index lower than 20% presented an expansion index due to humidity changes and calcination of less than 10%. The clays with a plasticity index of 20%-30% presented an expansion index due to humidity changes of 20%-30% and by calcination of 25%-35%. Finally, the clays with a plasticity index of 30%-40% presented an expansion index due to humidity changes and calcination of 35%-50%, which shows a constant growth trend. The clays that present the highest expansion index due to humidity changes are the same ones that present the highest expansion due to calcination. This indicates that the clays that destabilize the soils on which constructions are founded, and in turn affect the buildings, are the same clays that can be used for the production of other construction materials, such as lightweight aggregates for hydroponic crops, gardening, lightweight concrete, and thermal and acoustic insulators, among others.
MRD-Based Therapeutic Decisions in Genetically Defined Subsets of Adolescents and Young Adult Philadelphia-Negative ALL Simple Summary In acute lymphoblastic leukemia (ALL), once a complete remission is achieved following induction chemotherapy, the study of submicroscopic minimal residual disease (MRD) represents a highly sensitive tool to assess the efficacy of early chemotherapy courses and predict outcome. Because of the significant therapeutic progress occurred in adolescent and young adult (AYA) ALL, the importance of MRD in this peculiar age setting has grown considerably, to refine individual prognostic scores within different genetic subsets and support specific risk and MRD-oriented programs. The evidence coming from the most recent MRD-based studies and the new therapeutic directions for AYA ALL are critically reviewed according to ALL subset and risk category. Abstract In many clinical studies published over the past 20 years, adolescents and young adults (AYA) with Philadelphia chromosome negative acute lymphoblastic leukemia (Ph− ALL) were considered as a rather homogeneous clinico-prognostic group of patients suitable to receive intensive pediatric-like regimens with an improved outcome compared with the use of traditional adult ALL protocols. The AYA group was defined in most studies by an age range of 18–40 years, with some exceptions (up to 45 years). The experience collected in pediatric ALL with the study of post-induction minimal residual disease (MRD) was rapidly duplicated in AYA ALL, making MRD a widely accepted key factor for risk stratification and risk-oriented therapy with or without allogeneic stem cell transplantation and experimental new drugs for patients with MRD detectable after highly intensive chemotherapy. This combined strategy has resulted in long-term survival rates of AYA patients of 60–80%. The present review examines the evidence for MRD-guided therapies in AYA’s Ph− ALL, provides a critical appraisal of current treatment pitfalls and illustrates the ways of achieving further therapeutic improvement according to the massive knowledge recently generated in the field of ALL biology and MRD/risk/subset-specific therapy Introduction Acute lymphoblastic leukemia (ALL) is the most common type of cancer in children, peaking at 7.6:100.000/year at 1-4 years of age and decreasing in adolescents and young adults (AYA), to an average incidence of 1.8 and 0.8 between 15 and 39 years of age [1,2]. Therapeutic progress has been outstanding in childhood ALL over the past decades, now reaching an 80-90% chance of cure [3,4]. Although therapeutic results are progressively less favorable with the increase of patient age, the case of AYA patients is unique. AYAs have often been treated by adult ALL specialists with adult ALL protocols, a setting in which the favorable prognostic influx of a younger patient age has been consistently recognized [5]. Moreover, AYA patients share several diagnostic and clinical features in common with children and exhibit a higher tolerance to modern highly effective pediatric regimens, which makes them eligible to receive these pediatric-type treatments. The latter concept was explored in several clinical trials that eventually confirmed and provided an explanation for the therapeutic superiority of modern pediatric regimens compared to standard adult protocols in AYA ALL [6]. 
In ALL, beyond treatment itself that plays by definition a primary prognostic role, among all recognizable risk factors, a key point concerns the course of minimal residual disease (MRD) during the early consolidation phase in patients who respond well and achieve a complete hematologic remission (CR) following induction chemotherapy [7,8]. MRD is a fundamental, independent prognostic parameter that reflects the dynamics of chemo-sensitivity of the residual submicroscopic ALL cell burden to a given chemotherapy program in individual patients, whatever the underlying disease subset, ALL genetics and clinical risk profile. For these reasons, the collection of MRD data has become an essential component of modern treatment strategies for ALL at all ages, first to optimize the definition of risk class and provide an individual prognosis and, second, of equal or even greater importance, to align treatment type and intensity with the MRD-related risk of systemic relapse [9,10]. In the present review, we examine the evidence for the use of MRD analysis in the upfront management of AYA ALL. Because until now MRD-oriented trials were almost exclusively performed in Philadelphia chromosome-negative (Ph−) ALL, we exclude from our survey the subset of Ph+ ALL, which is also significantly less frequent in AYAs compared to older patients. We wish to underline that, once successfully treated, AYA patients, including children, are those who may experience the greatest benefit in terms of future life years and quality of their life. Disclosing the potential for cure or on the contrary the risk of failure through an MRD analysis integrated with other significant risk parameters is therefore of paramount importance in AYAs, to support and optimize risk-related treatment choices and results. ALL in Adolescents and Young Adult Patients Quite frequently, in the past, AYA patients with Ph− ALL have been included in adult ALL trials enrolling patients within a broad age range (from 15-18 to 55-65 years). While it may be difficult to extrapolate exact AYA data and results from these trials, they remain an important source of information regarding AYA ALL since the median patient age in most adult ALL studies is around 35-40 years and therefore some 50% of study patients fall within the AYA category. Instead, selected age ranges were rather heterogeneous in dedicated AYA trials, variably extending up to between 25 and 45 years, which makes it similarly difficult, as well as somewhat artificial, to set exact and universal age boundaries for AYA ALL. In addition, in some large pediatric trials, the upper patient age was extended to include all teens and younger adults, up to 25-30 years [11][12][13] or even 45 years [14], sometimes without clearly separating outcome results of children from AYAs [12,13]. In all these studies, the diagnostic characteristics of AYA ALL, genetics in first place, were intermediate between those observed in children and older adults. Overall, once the challenge of improving the outcome of AYA patients was correctly perceived and the first innovative trials demonstrated the advantages of using modern pediatric rather than historical adult programs, the new approach was rapidly adopted by virtually all cooperative adult ALL Study Groups worldwide [9,[15][16][17] and is highly recommended at present. All these topics are considered in the following sections. 
AYA Patient Identification: Age Ranges

While AYAs can represent roughly one half of all study patients in adult ALL trials (median patient age 35-40 years), there is no formal consensus on the patient age range in AYA-dedicated trials, which ranges from 15-18 to 35-45 years. Here, we refer for simplicity to AYA ALL as 18-40 years of age, with exceptions that are reported. It is worth recognizing that even patients aged 40-45 to 50-55 years were rather successfully treated with the same pediatric-based regimens used in children and AYAs.

Ph− ALL: Diagnostic Subsets Defined by Immunophenotype and Cytogenetics/Genetics in AYAs

Whereas the incidence of the two main immunophenotypic ALL subsets does not vary significantly across age groups, with B-cell precursor (BCP) ALL representing the majority of new cases and T-ALL no more than 20-25%, the distribution of different genetic subsets shows a definite age-related pattern, with a lower incidence of the favorable ones in AYAs compared to children and a correspondingly higher representation of unfavorable subsets (Table 1) [18-21], such as the Ph-like ALL variant, which is a prognostically adverse entity. Genetically, Ph-like ALL resembles in many ways Ph+ ALL but lacks its diagnostic hallmark (Ph chromosome and BCR-ABL1 gene rearrangement) and carries instead other genetic aberrancies (frequently CRLF2+ and abnormal tyrosine kinase and JAK/STAT pathway activation). Of note, independent European and North American studies in large patient series disclosed an incidence of 25-30% for Ph-like ALL in the 20-39 years range, which was higher than that observed in younger and older patients, respectively [22]. In addition, copy number alterations (CNA), i.e., gene deletions or inactivations affecting several molecular pathways regulating cell proliferation and the apoptosis response, may occur and bear prognostic relevance. Single or multiple CNA were detected in association with major genetic abnormalities. This exerted a pejorative prognostic effect, particularly with IKZF1 (Ikaros) and CDKN2A/2B deletions and others [23], such as in IKZF1 plus BCP ALL, in which IKZF1 deletions co-occurred with CDKN2A/2B, PAX5 or PAR1 deletions, conferring the worst outcome [24]. The analysis of CNA was recently incorporated in some of the most advanced prospective risk models (see Section 2.3). Due to its relative rarity, the age distribution of genetic/cytogenetic abnormalities and their prognostic significance are much less well known in AYA T-ALL. By inference from data collected in T-ALL in general, a novel four-gene prognostic classifier may reflect poor-risk genetics (unmutated NOTCH1/FBXW7 and/or RAS and PTEN abnormalities) [25,26], whereas for many other genetic abnormalities detectable in T-ALL no firm prognostic significance has been established. Approximately 15% of T-ALL cases display an early thymic precursor (ETP) immunophenotype with absent/weak CD5 expression, cross-lineage expression of immature myeloid markers and a typical, dysregulated gene expression profile (JAK/STAT, FLT-3, BCL-2, etc.). ETP ALL is considered a potential high-risk ALL entity [27-29].

Treatment: Traditional Adult vs. Modern Pediatric Regimens

Although it is not the main focus of the present review, the importance of an optimal treatment regimen for AYA ALL must be correctly understood. Some excellent reviews have recently been dedicated to this topic [6,16,17].
A summary of the evidence collected in comparative and non-comparative trials assessing the feasibility and efficacy of "pediatrictype" chemotherapy in AYA and adult ALL is presented in Table 2. The trials herein considered were selected on the basis of patient number (minimum of 50) and timing for data analysis and outcome reporting (minimum of 3 years) and are presented according to increasing patient age groups from younger AYA only to adult trials including AYA patients. Overall, the available information generated by the use of "pediatric" treatments in AYA ALL can be summarized as follows: 1. Intensive chemotherapy regimens inspired to modern pediatric schedules and treatment principles are superior to historical adult-type programs, as demonstrated with very few exceptions by comparative analyses among successive Phase 2 trials and many large non-comparative trials [6,16,17]. Whenever available, retrospective comparisons with historical datasets (not shown in the table and available in study references) confirm an average improvement of outcome measures of 15-25%. 2. In these modern AYA or AYA-containing adult ALL studies, the projected survival rates at 5 years (range 3-7 years), assuming "cure" for most patients who remain disease-free at ≥5 years, is 50% and greater (overall survival, OS), with age-related variations and OS rates of 60-70% and occasionally higher in younger age groups. 3. Unlike OS, which reflects the cumulative survival effect of both first line and salvage therapies, relapse-free and event-free survival (RFS and EFS) depict the curative potential of upfront therapy only, in CR patients and all study patients, respectively. These figures range 55-70% (RFS) and 40-74% (EFS), once again with significantly better results in younger age groups. 4. The overall chemotherapy intensity is increased in pediatric-based regimens, with regard to vincristine, corticosteroids, antimetabolites (cytarabine, methotrexate and 6-mercaptopurine), L-asparaginase and, more recently, Pegylated-asparaginase (Peg-ASP). Consequently, drug-related toxicity may be higher, requiring higher clinical skills for the management and prevention of toxic side effects. 5. The improved pediatric-like protocol may consist of an unmodified or modified pediatric schedule, in the latter case adapting some treatment elements to an increasing patient age with attending risks of treatment toxicity. The issue of Peg-ASP dosing and toxicity is highly critical in patients at older age [52,53]. 6. Contrary to younger patients, it appears difficult to demonstrate an advantage by pediatric-type regimens in patients older than 55 years [44,47,51]. Non-AYA patients become progressively less tolerant to intensive treatment and display a higher incidence and severity of toxic side effects. 7. The patients who achieve CR, namely about 90% of all patients (≥95% in younger age groups), are usually risk-stratified to assess the individual risk class and decide about the application of risk-specific treatments that range, for high-risk (HR) patients, from chemotherapy intensification to allogeneic hematopoietic cell transplantation (HCT) and/or experimental new agents. Most patients at standard-risk (SR) or intermediaterisk can achieve cure with a full chemotherapy regimen including maintenance as standard of care, without HCT, this lowering the incidence of non-lethal and lethal toxicities (10-15% average mortality from HCT). 8. 
In the risk stratification process, by analogy with pediatric trials, the analysis of post-induction MRD is crucial since it has been demonstrated to be the most powerful predictive factor for marrow relapse in multivariable analysis from several studies [7][8][9][10]. Therefore, MRD is currently used together with other risk factors for the definition of risk groups and individual risk profiles. With regard to MRD-based risk stratification, the choice of MRD study method and time-point(s) as well as the layout of the final risk model have been quite variable across trials. In some studies, MRD was used alone or in combination with a minimal set of variables (i.e., very high-risk genetics such as KMT2A rearrangements), while in others it was part of complex risk models involving multiple risk factors. These differences are reported in a review article on the risk stratification criteria adopted by 11 European adult ALL Study Groups to orientate an allogeneic HCT decision (Table 3) [54]. This survey documents a consistent use of MRD (11/11, 100%) together with the frequent addition of selected adverse genetic/cytogenetic abnormalities (8/11, 73%) to support a diagnosis of HR ALL with an indication for allogeneic HCT. The increasing precision and complexity of MRD-based risk stratifications is further illustrated in another risk model from an ongoing International pediatric and AYA project (age range 1-45 years), i.e., the ALLTogether study, in which the very large enrolment basis and thorough diagnostic work-up allow very accurately refining the patient risk class for risk-oriented treatments ( Figure 1) [55]. Quite interestingly, in this study conceived by a childhood ALL Consortium, the diagnosis of T-ALL and a patient age >16 years qualified for an intermediate-high risk classification even with undetectable post-induction MRD and/or lack of other risk factors, to reflect the long-lasting notion of a worse outcome expected for patients aged ≥15 years in the pediatric setting. Unlike this, age or T-ALL diagnosis do not represent a clear risk classifier in most adult/AYA trials, although some differences are reported in favor of teens and AYA < 5 years ( Table 3). The ALLTogether trial design highlights the place of a mixed MRD/genetic prospective risk classification that influences the chemotherapy intensity from SR to HR groups and guides the application of an allogeneic HCT or the randomized evaluation of additional risk/subset-specific therapeutic elements in intermediate/highrisk patients (CAR-T cells, tyrosine kinase inhibitors and immunotherapy), many of whom express high MRD levels, as well as treatment de-escalation in SR patients. Combined risk stratification systems merging MRD and genetics were previously assessed in both childhood and adult ALL, enhancing the accuracy of risk stratification [23,26,56,57]. The UKALL-14 adult study introduced a novel prognostic index (PI UKALL ) integrating the WBC count and patient age as continuous variables with cytogenetics/genetics and two post-induction MRD time-points. The PI UKALL prognostic classification was a powerful predictor of outcome following chemotherapy and HCT in SR and HR patients, respectively [57]. The PI UKALL was successfully tested in a very large retrospective COG series of more than 21,000 children and adolescents [58]. In the era of precision medicine, risk assessment is an evolving process that depends equally on the optimization of MRD analysis and its interaction with ALL biology. 
MRD detection can be performed by flow cytometry or by a molecular approach.

Figure 1. The ALLTogether MRD-based, mixed risk stratification system for risk-oriented therapy in patients with Ph− ALL aged 1-45 years. The master trial consisted of chemotherapy (chemo) of increasing intensity for SR, IR and HR patients, respectively, along with other risk-specific randomized or non-randomized interventions as indicated. TKI denotes tyrosine kinase inhibitor (imatinib) for cases with ABL-class fusions, and CAR-T and BCP denote chimeric antigen receptor T-cell therapy and B-cell precursor ALL, respectively.

Multiparameter Flow Cytometry

Multiparameter flow cytometry (MFC) is based on a panel of monoclonal antibodies that bind specific cell surface markers useful for distinguishing normal cells from leukemic cells, identifying the leukemia-associated aberrant immunophenotype (LAIP). This method was standardized and is constantly refined by the EuroFlow Working Group within the ESLHO Consortium (European Scientific Foundation for Laboratory Hemato-Oncology), which also includes the EuroClonality and EuroMRD European laboratory groups working on different aspects of leukemia and lymphoma characterization and monitoring [59]. Flow cytometry is faster than the molecular approach but less sensitive (sensitivity of 10−4, i.e., the ability to detect one leukemic cell among 10,000 normal cells, or 0.01%). More recent studies using highly sophisticated eight-color MFC reported an increased sensitivity for MRD detection in B-precursor ALL, up to between 10−5 and 10−6 [60].

PCR for Fusion Genes and Transcripts

One third of ALL patients present chromosomal translocation-derived fusion genes, including BCR-ABL1, TCF3-PBX1, KMT2A-AFF1 or KMT2A with other partner genes, and ETV6-RUNX1 [61]. These chromosomal abnormalities are highly stable over time and are therefore good markers for MRD detection and monitoring during treatment. MRD quantification is obtained by qRT-PCR (quantitative Reverse Transcription Polymerase Chain Reaction), which compares the fusion gene levels detected in a follow-up sample to a standard curve of plasmids containing the chimeric transcript at fixed concentrations [62].

PCR for Ig and TCR Gene Rearrangements

If patients test negative for chromosomal translocations at diagnosis, the molecular gold standard for MRD monitoring is the amplification of T-cell receptor (TcR) and immunoglobulin (Ig) gene rearrangements by real-time quantitative PCR (RQ-PCR). Unlike the chimeric transcripts, Ig/TcR rearrangements are not directly involved in leukemia pathogenesis, but they mirror physiological events that occur during the ontogenesis of B and T lymphocytes. During lymphocyte differentiation, random insertions/deletions of nucleotides at the junctional sites of the V (Variable)-D (Diversity)-J (Joining) gene segments result in a diversity of antigen receptors; in the case of neoplastic evolution, all the tumor cells will express the same receptor sequence. Therefore, these insertions/deletions are specific to each patient and represent a fingerprint of the leukemia. This approach, according to the BIOMED-2 protocol [63], includes the identification at diagnosis of the V-D-J regions of Ig and TcR gene rearrangements by PCR, followed by heteroduplex analysis to distinguish between polyclonal and clonal, leukemia-specific rearrangements. Nucleotide sequences of the identified clonal rearrangements are obtained by the Sanger method.
Allele Specific Oligonucleotides (ASO primer) ( Figure 2) are then designed on these leukemia-specific sequences and used in combination with family-specific primers and fluorescent probes to identify and quantify leukemia cells by Quantitative PCR (ASO-qPCR). The fluorescence emission detected by the instrument in a sample is directly proportional to the amount of target DNA present. Leukemia quantification in a follow-up sample is derived by comparing the measured fluorescence to serial dilutions of diagnostic material in a pool of mononuclear cells derived from eight healthy donors (standard curve). The MRD result is expressed as the logarithmic reduction compared to diagnosis. To ensure the monitoring of possible multiple leukemia clones, at least two ASO primers should be developed for each patient with a desirable sensitivity of 10 −4 or 10 −5 (i.e., the ability to detect one leukemic cell among 10,000 or 100,000 normal cells, which is 0.01% and 0.001%, respectively). This method has been developed and standardized by the EuroMRD/EuroClonality Consortium that, during the last 20 years, established precise rules to define both sensitivity of the assay and positivity or negativity of follow-up samples [64]. However, this method has some limitations including time consuming and complex procedures that only specialized laboratories can properly perform. It also requires a good quantity of diagnostic material to be used for standard curve in each follow-up evaluation. In addition, about 5-10% of patients do not have a leukemia specific probe either because no Ig/TCR rearrangements are detected at diagnosis or because the unique VDJ portion is very short or the designed ASO primer is not sufficiently specific and sensitive. Next Generation Sequencing Next-generation sequencing (NGS) is a new method of high-throughput DNA sequencing allowing to overcome some limits of the standard molecular approach. The EuroClonality-NGS working group developed an amplicon-based protocol for Ig/TcR marker identification in ALL. This assay employs a two-step PCR: in the first step, the most common Ig/TcR gene rearrangements are amplified in multiplex reactions with family specific primers (eight reactions), while, in the second step, short forward and reverse sequences (adaptors) are added to uniquely identify single patients and to allow the subse-quent reaction phases. Then, a pool of amplification PCR products is created and sequenced by the Illumina or Ion Torrent platform [65][66][67]. An alternative, capture-based approach has also been described [68,69]. In this method gene rearrangements are not amplified by PCR but hybridized with probes that recognize all V, D and J of all known Ig/TCR regions, including IG Lambda and TcR Alpha that are usually difficult to amplify. This approach can also detect rarely used V/D/J portion for which no primers have been designed in PCR-based methods. Sequences generated by NGS approaches can be analyzed to identify leukemia specific markers by bioinformatics tools, including the ARResT/Interrogate web tool [70] (created by Euroclonality-NGS Consortium) and Vidijil (free tool) [71]. NGS-based clonal identification is routinely used in most MRD reference laboratories because it allows a faster clonal marker identification starting from a small quantity of diagnostic material. It also increases the ability to discriminate rearrangements difficult to resolve by Sanger sequencing. 
NGS has been reported to be successful also in MRD monitoring [72] and has been described to be at least as sensitive as standard RQ-PCR and to provide MRD quantification without the need of diagnostic material in each assay (standard curve). Furthermore, it allows the certain discrimination between residual leukemia cells and normal lymphocytes that can have similar but not identical rearrangements. However, this approach is still lacking standardization within collaborative groups to be properly applied in treatment protocols. The EuroMRD-NGS group is working on this aspect also introducing an internal quantification reference to allow comparability of results in different MRD laboratories. Furthermore, the available NGS assays for MRD are more expensive and the time to result is longer than ASO-Q-PCR when applied on a routine basis in reference laboratories working on a high number of samples within clinical trial. Digital Droplet PCR A promising, effective, low cost, new third-generation PCR for absolute MRD quantification is named digital droplet PCR (ddPCR) [73]. This technique can be applied in leukemia cases in which diagnostic material was sufficient to identify clonal rearrangements but not for preparation of standard curve for MRD quantification, thus limiting the possibility of patients monitoring over time. The analysis is performed by partitioning the follow-up sample amplification into many independent PCR, by inclusion of reagents (ASO fluorescent assay) and DNA into small droplets created by emulsion. The partitioning is a random event following the Poisson distribution. Therefore, with the production of high number of droplets, there is the probability to have either zero or a single molecule of target rearrangements. After amplification, there will be fluorescent droplets (i.e., with PCR product inside) or no fluorescence. The analysis software for the ddPCR will count the number of fluorescent events and will express the result as number of copies of template per microliter of reaction (copies/µL), taking into account the final volume of reaction. This will result in an absolute MRD value. MRD Study Results for Risk Stratification MRD study results from representative clinical trials in AYA patients or AYA plus adults when the two groups were treated together are shown in Table 4. Depending on the response definition adopted by each study, MRD cut-off values separating SR from HR patients in a risk-oriented studies ranged from totally negative using highly sensitive molecular probes to <10 −3 (<0.1%), whereas MRD detection timepoints were set from as early as Day 21-24/EOI (end of induction) to treatment Weeks 16-22. The MRD-based risk stratification was quite often unrelated to the initial risk profile (SR or HR) and motivated the choice of an allogeneic HCT in case of MRD persistence. In some of these trials, however, the MRD analysis was available for a limited proportion of patients, which is one half or less of all CR patients, and/or was not clearly or not always meant to guide a risk-oriented approach with HCT, left to the discretion of treating physicians or indicated for very high-risk conditions such as t (4;11)+ ALL, etc. Despite these discrepancies that may affect to some extent trial result interpretation and inter-trial comparability, MRD was universally recognized as a major determinant of outcome supporting risk-oriented treatment decisions. 
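Purely as an illustration of the arithmetic behind two of the quantification schemes described in this section, the standard-curve readout used by qRT-PCR/ASO-qPCR and the Poisson correction used by digital droplet PCR, the sketch below works through both with made-up numbers. The dilution points, Ct values, counts and nominal droplet volume are assumptions, and real assays follow EuroMRD rules for sensitivity, positivity and quality control that are not reproduced here.

import math
import numpy as np

# (a) Standard-curve readout (qRT-PCR / ASO-qPCR): hypothetical dilution series
# of diagnostic material (known leukemic fractions) and the measured Ct values.
std_fraction = np.array([1e-1, 1e-2, 1e-3, 1e-4, 1e-5])
std_ct = np.array([24.0, 27.4, 30.8, 34.1, 37.5])

slope, intercept = np.polyfit(np.log10(std_fraction), std_ct, deg=1)

def fraction_from_ct(ct):
    """Back-calculate the leukemic fraction of a follow-up sample from its Ct."""
    return 10 ** ((ct - intercept) / slope)

followup_fraction = fraction_from_ct(36.2)
log_reduction = -np.log10(followup_fraction)   # MRD expressed as log reduction vs. diagnosis
print(f"MRD ~ {followup_fraction:.1e} ({log_reduction:.1f} log reduction)")

# (b) Poisson correction (ddPCR): because templates are distributed randomly across
# droplets, the mean copies per droplet is lambda = -ln(1 - p), where p is the
# fraction of fluorescent droplets; dividing by the droplet volume gives copies/uL.
def ddpcr_copies_per_ul(positive, total, droplet_volume_nl=0.85):
    lam = -math.log(1.0 - positive / total)
    return lam / (droplet_volume_nl * 1e-3)

target = ddpcr_copies_per_ul(positive=120, total=15000)       # rearrangement-specific assay
reference = ddpcr_copies_per_ul(positive=9000, total=15000)   # reference-gene assay
print(f"ddPCR MRD estimate ~ {target / reference:.2e}")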
Terminology of MRD Response for Clinical Purposes According to technical terminology, an MRD "negative" status (MRD neg ) should be intended as an undetectable/unmeasurable MRD using highly sensitive molecular probes (sensitivity 10 −4 to 10 −5 ) or comparable or only slightly less sensitive MFC techniques. In the clinical setting, however, a broader definition of favorable MRD response has been applied until now, from a molecular MRD < 0.1%/<10 −3 evaluated at an early time-point to a combination of different MRD reads at different time-points, namely < 0.1-0.01%/< 10 −3/−4 early on to <0.01%/< 10 −4 and MRD neg afterwards. Consequently, uniform MRD terminology for clinical use is presently lacking for adult/AYA ALL trials, although suggestions were provided by expert panels for an MRD ≥ 0.01%/≥ 10 −4 at about 12 treatment weeks (i.e., following at least three intensive chemotherapy courses) to represent true high MRD associated with high risk of recurrence (MRD pos ). Moreover, within the MRD pos group, increasing MRD levels from 10 −4 to 10 −1 are predictive of an increasingly worse survival [76]. On the contrary, an MRD < 0.01%/< 10 −4 has been consistently associated with the best clinical outcome independently of treatment type and ALL subset, with minor prognostic differences among MRD neg and MRD < 0.01%/10 −4 patient groups. The 0.01%/10 −4 threshold may therefore be considered the current benchmark for operational definitions in clinical trials, while the distinction of a MRD neg status should be maintained to identify the best MRD response and prognostic subset. MRD-Related Outcomes As shown in Table 4, whichever the risk stratification, the risk-oriented approach and the MRD study method, the patients who displayed persistent MRD following CR induction and/or early consolidation chemotherapy fared significantly worse than MRD responders. However, with few exceptions, MRD was not assessable in about one fourth of CR patients, due to either an insufficient diagnostic and/or follow-up sampling or a failure to generate a molecular probe, while, in this regard, the search for a case-specific LAIP was less troublesome, with success rates >95% when it was applied systematically [35,40,49]. Performing an adequate ALL cell sampling may become a highly critical issues, because, in some studies, only 50% or less of the patients underwent MRD analysis [26,37], or this same figure was slightly above 50% in others [41,44]. These low rates of MRD analysis were more frequent in adult trials and using molecular MRD assays. In the end, in MRD evaluable patients, the average rate of a favorable MRD response after induction-consolidation was about 60-70%, with a trend to higher figures in AYAs. About 35-50% of all patients achieved a significant MRD reduction early on (collectively at end of induction, EOI) and fared better than others in many reports, mirroring the comparable, much larger experience in childhood ALL. With regard to MRD-associated outcomes, the data in the table document the substantial prognostic advantage associate with a negative or favorable MRD course as defined in each study. Overall survival probabilities for MRD responders were in the range of 60-70% and greater, with variations by age group and other, including an early or late MRD response. A pertinent example concerning EOI MRD is illustrated by a sub-analysis of the NILG 10/07 trial [50] in 61 AYA patients aged 18-40 years. 
While the relapse risk was 24% at 5 years in the 42 MRD neg patients (69%), those with an EOI/Week 4 MRD < 0.01%/10 −4 confirmed at Week 10 experienced the lowest relapse incidence (15%) with an excellent CR duration (85% at 5 years) (Figure 3). Because treatment failure was most commonly caused by an ALL relapse, which in turn is strongly predicted by MRD pos , two main issues deserve to be further elucidated: an "unexpected" recurrence in MRD neg patients and the exact role of an allogeneic HCT salvage (and other therapies) in MRD pos patients. Risk of Relapse in MRD Responders Virtually all MRD-based trials and studies reported a fraction of treatment failures in MRD responsive patients, from 10% to 30-35%, depending on associated risk factors (patient age, genetics, ALL subset). This MRD neg group at increased risk of relapse constitutes a diagnostic and therapeutic challenge. The more likely explanation for this occurrence could lie in a combination of technical issues and disease biology. Because the sensitivity of the best available techniques currently available for large-scale clinical application does not exceed 0.001%/10 −5 , some MRD neg patients could still harbor undetectable residual ALL cells able to induce a subsequent relapse. In addition, genetically adverse ALL subsets do worse even if MRD neg , as already demonstrated by some studies [23,26,56,57]. This suggests that persistence of unmeasurable MRD may be more frequent in some ALL subsets than others, leading to practical considerations for the design of improved risk-oriented strategies, as discussed below. Allogeneic HCT for MRD Positive States Thus far, an allogeneic HCT has been the preferred therapeutic option for MRD pos patients in risk-oriented trials, to avert the high risk of recurrence expected with the use of chemotherapy only. This indication is shared by the American Society for Transplantation and Cellular Therapy and by all European Study Groups including AYA patients, as stated in an EBMT position paper [54,77]. Another option made available to younger AYA patients in some trials has been further chemotherapy intensification, however with final results below those obtained in MRD neg patients and/or difficult to interpret due to small patient groups. Currently, the MRD status should to be integrated with ALL genetics and other HR clinical characteristics (such as in Ph-like ALL and ETP-ALL) in the decision-making model driving patients to allogeneic HCT. Consequently, the role of HCT will have to be reassessed combined these new risk definitions, as well as pre-transplantation therapy with novel chemo-immunotherapy combinatory regimens aiming to induce a MRD neg pre-HCT condition (Section 5). Allogeneic HCT Results in MRD pos AYA Ph− ALL HCT results obtained in HCT-eligible AYA patients identified through an MRD-based risk stratification and/or other HR criteria are reported in Table 5. Most of these trials reported a significant improvement of EFS/OS with HCT compared to no HCT in these patients, in both AYA and non-AYA groups. Moreover, these trials were not specifically designed to compare allogeneic HCT to other therapeutic interventions in MRD pos patients. 
Several other retrospective studies and meta-analyses, quite heterogeneous as far as HR definition, population age and MRD evaluation, highlighted the advantage of HCT over intensive-pediatric based chemotherapy [8,[78][79][80][81][82], even if HCT results were quite variable and to some extent suboptimal because of the occurrence of HCT-related death and posttransplantation relapse in MRD pos patients (long-term OS: 45-70% with vs. ≤25% without). Pre-Transplantation MRD Status In MRD pos patients, a key time-point for MRD evaluation is just before an allogeneic HCT. MRD positivity at transplantation is the most powerful predictor of relapse and poor outcome, as evidenced by a meta-analysis [84] evaluating 21 retrospective or prospective studies published 1998-2016, all including AYA patients. The pooled results evidenced a higher relapse risk (hazard ratio 3.26; p < 0.05) and lower RFS (hazard ratio 2.53; p < 0.05) in patients with positive pre-transplant MRD in comparison to those with negative MRD. Further studies published 2018-2020 in different transplant settings (related and unrelated donor or haploidentical HCT) and mixed age populations with large percentages of AYA, confirmed the unfavorable prognostic role of pre-HCT MRD pos , with a relapse incidence of 32-73% vs. 19.7-24% in pre-HCT MRD neg subsets [85][86][87][88][89][90][91]. Thus, achieving an MRD neg status before HCT in MRD pos patients should improve the overall transplantation outcome. The role of new targeted immunotherapy in this setting is discussed in the following section. Blinatumomab, the first bispecific T-cell engager (BiTE), has been approved for relapsed/refractory (R/R) ALL. In this setting, blinatumomab proved more effective and better tolerated than conventional chemotherapy and able to induce molecular remissions [92]. For its favorable safety profile and mechanism of action, blinatumomab represents the ideal treatment of MRD, and, in fact, it has been extensively tested in this setting both in first or later CR. In two subsequent Phase II clinical studies conducted in MRD pos adult ALL patients in hematologic CR but with a high MRD level (≥10 −3 ), a single cycle of blinatumomab induced a major MRD response in about 80% of the patients [93,94]. Based on these results, the United States Food and Drug Administration (FDA) and European Medicines Agency (EMA) both approved blinatumomab as the first drug registered for the treatment of MRD. New Therapeutic Options for MRD Positive ALL Patients treated for MRD positivity in first CR and achieving MRD negativity most frequently are referred to HCT as further consolidation. However, for the time being, there is no evidence that OS is better in patients who did or did not undergo transplantation. On the contrary, the outcome of patients receiving blinatumomab in second or later CR and did not proceed to HCT, proved to be inferior [94]. A real-world effectiveness and safety study of blinatumomab in R/R Ph− B-ALL patients has been recently conducted in Europe. In total, 118 patients were included with a median age of 45 years; 22% had previous HCT. Within two blinatumomab cycles, 74% of patients achieved CR or CR with incomplete/partial hematologic recovery: among 44 evaluable patients, 45.5% had a complete MRD response. The majority (78%) of responders proceeded to alloHCT. The estimates for RFS and OS at 24 months were 50% and 58%, respectively [95]. The impact of blinatumomab on MRD has been recently addressed in high-risk firstrelapse childhood BCP ALL. 
In this study, patients were randomized to receive one cycle of blinatumomab (15 µg/m 2 /d for four weeks, continuous intravenous infusion) or chemotherapy as third consolidation before allogeneic HCT. After a median of 22.4 months of follow-up, the incidence of events in the blinatumomab vs. consolidation chemotherapy groups was 31% vs. 57% (log-rank p < 0.001). MRD remission by PCR was observed in 90% of patients in the blinatumomab group and in 54% in the consolidation chemotherapy group [93]. In a second randomized study, the effect of post-reinduction therapy consolidation with blinatumomab vs. chemotherapy was evaluated on DFS in children and AYAs with first relapse of B ALL. The 2-year DFS rate was 54% for the blinatumomab group vs. 39% for the chemotherapy group. This difference was considered not statistically significant (one-sided p = 0.03). The 2-year OS rate was 71% for the blinatumomab group vs. 58% for the chemotherapy group. This difference was statistically significant (one-sided p = 0.02). After the first cycle of randomized therapy, the MRD neg rate was 75% for the blinatumomab group vs. 32% for the chemotherapy group (p < 0.001). Finally, for the blinatumomab group, 70% proceeded to allogeneic HCT, compared with 43% for the chemotherapy group (p < 0.001) [96]. Inotuzumab Ozogamicin Inotuzumab ozogamicin (InO), a humanized anti-CD22 monoclonal antibody conjugated to the cytotoxic antibiotic calicheamicin has shown strong single agent activity in R/R BCP ALL patients [97]. In this study, patients who received InO versus standard chemotherapy achieved greater remission and MRD neg rates as well as an improved OS. Compared with MRD pos , MRD neg status with CR or CR with incomplete hematologic recovery was associated with significantly improved OS and RFS, respectively. Median OS was 14.1 versus 7.2 months, in the MRD neg versus MRD pos groups [98]. InO is being studied as frontline consolidation drug in AYA Ph− ALL in a North American Phase 3 trial (NCT03150693, C10403 chemotherapy backbone +/− InO) and for MRD pos IR patients in the new European ALLTogether children and AYA project [55]. Chimeric Antigen Receptor-Modified T-Cell Therapy In clinical trials for R/R B-ALL, chimeric antigen receptor-modified T-cells (CAR-T) targeting CD19 (Tisagenlecleucel, CTL019) produced CR rates exceeding 80% to 90% and became the first CAR T-cell therapy approved by FDA in August 2017. The single-arm, multicenter, global registration trial (ELIANA) conducted across 25 centers demonstrated a CR rate of 81% in 75 patients with R/R B-ALL treated with tisagenlecleucel, with undetectable MRD achieved in 100% of responders. At 12 months, RFS and OS were 59% and 76%, respectively. Additional clinical trials have been conducted with other CAR-T cell products. While the rate of CR has been largely confirmed [99][100][101][102][103][104] in adult patients, the duration of response has been significantly less impressive. A significant clinical benefit was observed particularly in responding patients who also achieved MRD neg status. A subsequent allogeneic HCT has also been proposed as an effective strategy to avoid an early relapse after CAR-T cell therapy [105]. Based on these results, it is likely that even for CAR-T cells the greatest benefit could come from an earlier use in the setting of MRD. 
To validate this hypothesis, in 2019, the COG cooperative group decided to launch the AALL1721/Cassiopeia study, a Phase 2 single-arm trial of tisagenlecleucel in children and young adults in CR1 with high-risk criteria and persistent MRD (by flow cytometry) at the end of chemotherapy. The primary end point of this study is 5-year DFS. Investigational Agents for T-ALL T-ALL represents about 20% of ALL cases and, compared to B-precursor ALL, offers fewer opportunities for the exploitation of MRD-targeting new agents, partly because such agents cannot discriminate between normal or regenerating T-cells and residual T-lymphoblasts and could therefore cause an extreme T-cell suppression (fratricide) leading to lethal infections. At present, the clinical experience with immunotherapy and other targeting agents against T-ALL is limited and confined to relapsed/refractory disease, which in T-ALL is an extremely difficult therapeutic setting after the failure of current highly intensive first-line treatments. The only exception to that is nelarabine, which, after proving relatively effective as salvage therapy, has been evaluated in untreated T-ALL, either frontline or following detection of MRD. Nelarabine for MRD pos T-ALL In a large randomized COG trial [106], the use of nelarabine improved the 4-year RFS vs. the no nelarabine arm (88% vs. 82%; p = 0.029). In this 4-arm study conducted in children and AYA aged 1-30 years, the best outcome was obtained in patients simultaneously randomized to nelarabine and Capizzi-style methotrexate (vs. no nelarabine and/or high-dose methotrexate), with a projected RFS of 91%. While other variably successful nelarabine-based trials have been performed [107] or are near to completion in AYA/adult T-ALL [17,108], a German study was specifically focused on MRD pos T-ALL. In this experience, 6 out of 12 MRD pos T-ALL patients (50%) achieved MRD negativity following nelarabine, some were transferred to HCT and only two relapsed [74]. All these data prompt further evaluation of nelarabine in MRD pos AYA T-ALL, particularly in adverse subsets such as ETP-ALL, which is associated with a higher risk of MRD persistence [29,109]. Immunotherapy for MRD pos T-ALL Some new immunotherapeutics could be profitably used against MRD in AYA T-ALL. The most promising agent for large-scale application is the monoclonal antibody daratumumab, directed against the CD38 antigen, which is largely represented on T-lymphoblasts. Trials with daratumumab are ongoing, also in combination with chemotherapy, and will be informative about the induction of MRD negativity in clinically responsive patients [108]. In this regard, daratumumab effectively eradicated MRD in some small T-ALL series [110][111][112]. In a preclinical model, an anti-IL-7Rα monoclonal antibody was confirmed effective [113]. By analogy with B-lineage ALL, CAR-T cell therapy is also under investigation in T-ALL, although at a far earlier stage of development [17,108]. These first studies were mainly performed in patients with resistant or relapsing disease and involved anti-CD5, -CD7 and -T-cell receptor β CAR-T constructs. A new anti-CD7 CAR-T product generated by CRISPR/Cas9 gene editing technology (UCART7) demonstrated resistance to fratricide and was devoid of any alloreactive/graft-versus-host potential [114]. A comparable CAR-T product from China (TruUCAR GC027) induced MRD remission in four out of five AYA patients aged 19-38 years [115]. 
Molecular Profiling for Precision Medicine A massive amount of data regarding ALL biology and the development of resistance to standard anti-ALL therapy has been generated in recent years [116][117][118][119][120][121]. These studies paved the way to experimental therapeutic attempts targeting altered molecular pathways [122][123][124][125]. Although the clinical experience is thus far limited and involves mainly the late stages of disease, this is a rapidly expanding area with some notable examples that may be considered for the management of MRD pos states in AYAs. 1. Most relevant is Ph-like ALL, which is rather frequent in AYA patients and carries a higher risk of MRD persistence. Some of the associated gene abnormalities recognizable in this poor risk entity (ABL-class fusions, CLRF2 deregulation and JAK/STAT and IL7R pathway alterations, among others) are actionable by TK inhibitors, JAK inhibitors (ruxolitinib) and other similar drugs. Trials in children, AYA and adults have been incepted worldwide, sometimes with promising preliminary results [126][127][128][129]. The data from these studies may elucidate which of these new drugs or drug combinations with either chemotherapy, immunotherapy and/or other targeting agents could optimize the outcome of the distinct genetically defined Ph-like ALL subsets. 2. Among the BCP ALL subsets to consider for targeted therapy is t (v;11) + ALL or ALL carrying KMT2A gene rearrangements, most frequently t (4;11) + ALL. This entity stands out for its clinical aggressiveness. In this subset, the available molecular studies point to a therapeutic use of BCL-2 inhibitors (venetoclax and navitoclax) and DOT-L1 and histone-deacetylase inhibitors; however, the relative rarity of this ALL syndrome precludes an extensive clinical evaluation of these drugs outside large collaborative clinical trials. 3. There are many more candidates for targeted therapy of BCP ALL subsets or ALL in general, as extensively reviewed [122]. Most of these drugs are under investigation in early clinical trials, and it is too early to define exactly their place and/or anticipate their approval for use as standard agents for front-line therapy and/or MRD pos conditions. Worthy of mentioning are the proteasome inhibitors, again the BCL-2 inhibitors and the activators of P53-mediated apoptosis, given the frequent dysregulation of these molecular mechanisms. Likewise, the analysis of bone marrow immune cell contexture led to identify a poor risk ALL subset with PD1 + TIM3 + CD4 + bone marrow T-cells > 0.1% that might be targeted by PD1 checkpoint inhibitors [130]. 4. Many of the new drugs potentially active in BCP ALL could be exploited in T-ALL as well, namely inhibitors of the antiapoptotic BCL-2 family members and inhibitors of JAK/STAT, PI3K/Akt/mTOR, MAPK and Notch-1, the latter being a typical T-ALL target. While experience with Notch-1 inhibitors has been rather disappointing so far, BCL-2 inhibitors navitoclax and venetoclax induced a CR in six of 16 patients with refractory T-ALL, achieving undetectable MRD in four [129]. ETP ALL may be sensitive to the JAK-2 inhibitor ruxolitinib. Drug Sensitivity Profiling for Precision Medicine An effective, functional drug screening is now obtainable through ex vivo models that employ extensive drug libraries detecting expected (based on prior molecular screening) or unexpected drug vulnerabilities, together with the evaluation of drug activity in patientderived xenografts (PDX) (reviewed in [122]). 
These innovative studies may reveal and/or confirm highly promising single-agent or combinatory approaches with new targeting agents in specific ALL entities and/or individual patients. Effective drug combinations were identified for high-risk BCP ALL including Ph-like ALL (BCL-2 and MCL-1 inhibitors; TK inhibitors dasatinib and ponatinib) [131,132], T-ALL and ETP-ALL (ruxolitinib and dexamethasone; dasatinib; and venetoclax and bortezomib) [133,134]. This represents an exciting new area for treatment optimization of MRD pos states. Conclusions The recent international experience in AYA Ph− ALL confirms that approximately 65% or more of these patients may achieve cure. This represents an outstanding therapeutic achievement, not very far from the 85-90% cure rate documented in children and highly encouraging given the different prognostic patterns and the increasing treatment complexity of AYA compared to childhood ALL. In this field, MRD has emerged as a strong, dominant risk factor, necessary information by which we can modulate treatment intensity up to the level of allogeneic HCT or alternatively choose new treatment modalities (novel immunotherapeutics and new experimental agents) and design innovative risk-and MRD-oriented trials to improve even further the outcome of specific ALL entities. More than 40 years ago, David Pinkel, a pioneer of ALL therapy, stated that "historically, when therapists have found themselves stymied in improving the prognosis of a disorder, new understanding of its biology has provided the key to further progress" [135]. Today, the study of MRD, which can be regarded as a behavioral marker of ALL biology with function of therapeutic target across all disease and patient subsets, continues to offer new chances of improving the outcome of AYA patients with ALL. Institutional Review Board Statement: Ethical review and approval were waived for this study because it is not applicable to a review article. Data Availability Statement: No new data were created or analyzed in this study. Data sharing is not applicable to this article. Conflicts of Interest: The authors declare no conflict of interest.
2021-05-01T06:17:21.875Z
2021-04-27T00:00:00.000
{ "year": 2021, "sha1": "a56efba6bcaf80ef74d27021b149a4b01c23eb02", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6694/13/9/2108/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2402144c42dc25c9df81f20fa4e236aa5483e454", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257059877
pes2o/s2orc
v3-fos-license
Analysis of Diabetic Foot Deformation and Plantar Pressure Distribution of Women at Different Walking Speeds Highlights Foot measurements show an insignificant increase at a more rapid walking speed. Faster walking speed results in higher mean peak plantar pressure in the forefoot and heel areas, along with a lower pressure time integral in all foot regions. Suitable offloading devices are recommended for people with diabetes during exercise at higher walking speeds. An insole with a different structure and material for each specific area contributes to plantar pressure offloading. Abstract Official guidelines state that suitable physical activity is recommended for patients with diabetes mellitus. However, since walking at a rapid pace could be associated with increased plantar pressure and potential foot pain, the footwear condition is particularly important for optimal foot protection in order to reduce the risk of tissue injury and ulceration of diabetic patients. This study aims to analyze foot deformation and plantar pressure distribution at three different walking speeds (slow, normal, and fast walking) in dynamic situations. The dynamic foot shape of 19 female diabetic patients at three walking speeds is obtained by using a novel 4D foot scanning system. Their plantar pressure distributions at the three walking speeds are also measured by using the Pedar in-shoe system. The pressure changes in the toes, metatarsal heads, medial and lateral midfoot, and heel areas are systematically investigated. Although a faster walking speed shows slightly larger foot measurements than the two other walking speeds, the difference is insignificant. The foot measurement changes at the forefoot and heel areas, such as the toe angles and heel width, are found to increase more readily than the measurements at the midfoot. The mean peak plantar pressure shows a significant increase at a faster walking speed with the exception of the midfoot, especially at the forefoot and heel areas. However, the pressure time integral decreases for all of the foot regions with an increase in walking speed. Suitable offloading devices are essential for diabetic patients, particularly during brisk walking. Design features such as medial arch support, wide toe box, and suitable insole material for specific area of the foot (such as polyurethane for forefoot area and ethylene-vinyl acetate for heel area) are essential for diabetic insole/footwear to provide optimal fit and offloading. The findings contribute to enhancing the understanding of foot shape deformation and plantar pressure changes during dynamic situations, thus facilitating the design of footwear/insoles with optimal fit, wear comfort, and foot protection for diabetic patients. Introduction Diabetes mellitus (DM) is a major health problem globally. The prevalence of diabetes is high, and the number of diabetic patients is still on the rise [1,2]. 
Diabetic foot ulcers (DFUs) are one of the most common and severe complications of diabetes, which may lead to lower extremity amputation without timely treatment [2,3], and have an immense mental and economic burden on patients [4,5]. The medical treatment of DFUs is expensive and there is still a high risk of recurrence after a DFU has healed [6]. Therefore, early prevention is better than treatment by using various strategies, such as public education [7,8], blood sugar monitoring [8], using offloading devices (footwear/insole) [9], regular foot assessment [10], and physical activity [11], etc. Physical activity is inversely associated with the risk of Type 2 diabetes [12,13], and poor physical fitness is regarded as a risk factor for the new-onset of diabetes [14]. Amongst the various types of physical activity, walking is preferred by many people including diabetic patients as their daily activity because it is relatively safe with few side effects [15]. Walking has also been proven effective for weight loss and glucose control for people with diabetes [16,17]. Lakhdar et al. [15] concluded that brisk walking has beneficial effects on anthropometric and biochemical parameters, physical performance, and glycemic control for diabetic patients. However, a fast walking speed can increase the plantar pressure at the forefoot and rearfoot areas [18,19], thereby increasing the risk of DFUs. The pronation of the foot or its side-to-side movement causes the foot to roll a bit inward with each step, with the big and second toes working to push off the foot while the other toes stabilize the foot. The heel strikes the ground, and the arch of the foot flattens and cushions the shock. Then, the foot rolls outward with toe-off, followed by rising and stiffening of the arch as the foot rolls outward and upward. However, a more rapid walking speed may force the foot to a pronated position that increases the plantar pressure at the forefoot [20,21]. Therefore, one of the objectives of this study is to analyze the plantar pressure changes of diabetics at different walking speeds, in order to design appropriate offloading insoles/footwear to protect their feet from high levels of pressure during daily physical exercise. Insole geometry is an important factor in the redistribution of plantar pressure [22]. The shoe/insole must align with the shape of the foot with different stances for an appropriate fit [23]; otherwise, an abnormal in-shoe pressure would result. Therefore, accurate and reliable foot anthropometric measurements can provide valuable information for shoe last/insole geometry designs. Previous studies have concluded that the geometry of the foot changes with different loads in both standing and walking conditions [24][25][26]. Xiong et al. [24] and Zhang et al. [27] stated that the foot increases both in length and width, and reduces in height with increases in weight bearing. The shape of the foot also becomes wider at the forefoot and heel during roll-over compared to a static stance [23]. On the basis of these findings, a wider toe box and insole with arch support are recommended to provide a better fit and prevent foot injury caused by a mismatch of the different foot dimensions. The heel cup of a foot orthosis also can add to the thickness of the heel fat pad, which can further reduce the peak plantar pressure by increasing the contact area during standing and walking [28,29]. In addition, the heel cup can also limit the deformation of plantar soft tissues [29]. 
However, these previous findings have mainly focused on foot deformation during static conditions or one walking speed only. To date, studies on foot deformation at different walking speeds have been few and far in between. Footwear research has investigated the biomechanical changes of the lower limbs at different walking speeds [30][31][32] and found that an increased speed of gait results in greater ground reaction forces (GRFs). The shape of the foot tends to easily deform when walking at different speeds with changes in loading. The shape of the foot therefore provides important information for shoe last modelling which is closely associated with footwear design and fit [33]. Footwear is the protective measure of the foot during motion, and studies such as [34] have concluded that foot shape must be taken into consideration in footwear design so as to prevent negative effects on foot morphology. However, the shoe last construction currently is mainly based on individual practical knowledge which relies on trial-and-error [23]. Therefore, it is essential to investigate the deformation of the foot geometry in dynamic situations at different walking speeds to gain more knowledge towards the design of shoe lasts. Another objective of this study is to investigate the effect of the walking speed on changes in the geometry of the diabetic foot during walking so that offloading devices with optimal fit can be precisely designed and prescribed. The objectives of this study are as follows: (1) to analyze the effect of walking speed on plantar pressure distribution in dynamic situations; (2) to investigate the effect of walking speed on the deformation of the foot geometry. Participants A total of 19 female subjects who ranged from 57 to 75 years old (mean: 66, SD: 5) participated in the study. They self-reported Type 1 or 2 diabetes mellitus (DM) in the early stages (based on diagnosis of a clinical physician). The inclusion criteria [35,36] were those with no history of ulcers or neurological disorders (except neuropathy) and who had the ability to walk a length of 20 m repeatedly without any walking aid. Those who showed the presence of active foot ulcers and severe foot deformities that would affect gait were excluded [37]. All of the participants provided written informed consent after they were given an introduction of the experimental procedures and requirements. The experiment protocols were approved by the Human Subjects Ethics Sub-Committee of the University (Reference Number: HSEARS20200128001). Before conducting the experiment, a preliminary short survey was conducted on a face-to-face basis with participants to collect first-hand information about the physical activity habits to maintain health in their daily life, as well as features and problems with regard to wearing footwear during their physical activities. The responses showed that most of them preferred walking in their daily activity, and footwear discomfort was found in different foot areas. Experiment Protocols All of the subjects were required to walk on the walkway (4.5 m in length) of a foot scanning system (3dMD LLC, Atlanta, GA, USA) at 3 different walking speeds. The normal walking speed was defined as the natural pace of each subject, and the fast and slow speeds of walking were 20% faster and 20% slower than the natural walking speed, respectively [38]. Two timing gates (Brower Timing System, Draper, UT, USA) were placed at each end of the walkway to determine the duration of each trial. 
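To make the three speed conditions concrete, the short Python sketch below derives a subject's natural walking speed from the timing-gate readings over the 4.5 m walkway, sets the slow and fast targets at ±20% of that pace, and applies the 5% acceptance window used to keep or reject trials in the protocol described next. It is only an illustration of the arithmetic; the function names, gate times, and data layout are assumptions, not part of the study.

```python
# Minimal sketch (illustrative only): target walking speeds and trial acceptance.

WALKWAY_LENGTH_M = 4.5   # length of the scanned walkway
SPEED_TOLERANCE = 0.05   # trials deviating more than 5% from target are rejected


def trial_speed(duration_s: float) -> float:
    """Average walking speed (m/s) for one pass between the two timing gates."""
    return WALKWAY_LENGTH_M / duration_s


def target_speeds(normal_speed: float) -> dict:
    """Slow and fast conditions are 20% slower/faster than the natural pace."""
    return {"slow": 0.8 * normal_speed, "normal": normal_speed, "fast": 1.2 * normal_speed}


def accept_trial(measured_speed: float, target: float) -> bool:
    """Keep a trial only if the measured speed is within 5% of its target."""
    return abs(measured_speed - target) / target <= SPEED_TOLERANCE


# Hypothetical example: natural pace estimated from ten familiarisation passes
gate_durations_s = [3.6, 3.5, 3.7, 3.6, 3.5, 3.6, 3.8, 3.6, 3.5, 3.6]
normal = sum(trial_speed(t) for t in gate_durations_s) / len(gate_durations_s)
targets = target_speeds(normal)
print(targets, accept_trial(trial_speed(3.0), targets["fast"]))
```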
The experiment was divided into 3 sections: (1) First, the subjects walked on the walkway 10 times at their natural pace to determine their normal speed and calculate the slow and fast walking speeds. Scanning and plantar pressure measurement trials that exceeded 5% of the predetermined speeds were rejected to minimize the effect of speed on foot deformation and plantar pressure distribution. (2) Then, both the left and right feet were scanned 3 times at the three walking speeds defined in the first step. (3) The plantar pressure of the subjects in their bare feet at the 3 defined walking speeds was recorded 3 times, and the subjects wore standard cotton socks to secure Pedar sensors onto the plantar of their feet. The subjects were given a break of 1 min after 3 scans. The 3 walking speeds were randomized to minimize the order effects. The experimental flow is shown in Figure 1. Foot Image Analysis The stance phase was divided into 5 frames: at first heel contact, first metatarsal head (MTH) contact, first toe contact, heel take off, and MTH take off, respectively, [25,26] based on the defined functional time frames of the foot during roll-over (see Figure 2). Thirteen relevant foot anthropometric measurements of each frame including the lengths, widths, heights, girths, and angles were extracted for an in-depth analysis [25]. Plantar Pressure Analysis The plantar of the foot was divided into 5 areas for the plantar pressure analysis: toes, metatarsal heads, medial midfoot, lateral midfoot, and heel (see Figure 3). The mean peak plantar pressure (MPP) and pressure time integral (PTI: accumulation of foot pressure during plantar contact time) of each analyzed area of the foot during the 3 different walking speeds were used to describe the foot biomechanics. 
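The two pressure outcomes just defined can also be written down compactly. The sketch below is a minimal illustration, not the Pedar software's own processing: it treats the MPP of a region as the average of the per-step peak pressures and approximates the PTI as the time integral of the sampled pressure curve over contact time. The sampling rate and pressure values are made up for the example.

```python
import numpy as np


def mean_peak_pressure(steps: list) -> float:
    """MPP: average of the per-step peak pressure (kPa) in one foot region."""
    return float(np.mean([step.max() for step in steps]))


def pressure_time_integral(step: np.ndarray, dt: float) -> float:
    """PTI: pressure accumulated over plantar contact time (kPa*s),
    approximated by a rectangle-rule sum of the sampled curve."""
    return float(np.sum(step) * dt)


# Hypothetical example: three steps for the heel region sampled at 50 Hz
dt = 1.0 / 50.0
steps = [200.0 * np.sin(np.linspace(0.0, np.pi, 35)) for _ in range(3)]
mpp = mean_peak_pressure(steps)
mean_pti = float(np.mean([pressure_time_integral(s, dt) for s in steps]))
print(f"MPP = {mpp:.1f} kPa, mean PTI = {mean_pti:.1f} kPa*s")
```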
Data Analysis All of the foot measurements and pressure parameters were analyzed by using SPSS Statistics 21 software (IBM Corp., Armonk, NY, USA). A Shapiro-Wilk test was used to determine the normality of the 13 foot measurements and plantar pressure obtained for each measured part of the foot at the 3 different walking speeds. The results showed that all of the measurements and parameters were normally distributed (p > 0.05). A one-way repeated measures analysis of variance (ANOVA) was used to compare the mean value of each foot measurement and plantar pressure measured at the 3 walking speeds, and to determine whether there were significant (p < 0.05) differences. Participant Information The descriptive statistics of the participants in this study are listed in Table 1, including their age, body mass index (BMI), foot size, and years since diagnosis of DM. Effect of Walking Speed on Foot Deformation The maximum value of each foot measurement from the five selected frames was extracted for analysis. The one-way repeated measures ANOVA showed that there is no significant difference for each foot measurement at the three walking speeds. Table 2 shows that each foot measurement slightly increases with a more rapid walking speed. The increase (in percentage) of the toe angles and orthogonal heel width is larger than that of the other foot measurements. The walking speed shows a positive effect on all of the foot measurements. All of the foot measurements increase at a more rapid walking speed due to the larger GRFs associated with a fast gait speed [39,40]. The medial arch plays an important role in shock absorption and transfer of weight bearing during walking or running [41]. Deformation occurs in the medial arch during walking, including extension, pull-up, and collapse of the medial arch. 
The toes control the height of the arch during walking, so more toe flexion means a higher pull-up of the foot arch and extension of the longitudinal arches. Caravaggi et al. [40] concluded that the metatarsophalangeal (MTP) joints show more dorsiflexion when walking at a fast pace, and the arch height and foot lengths thereby increase with speeding up the gait. During midstance in a gait cycle, the forefoot pronates with the medial arch stretched and flattened to store mechanical energy for propulsion. The ball angle (BA) thereby increases with speeding up the gait. It is found in this study that the measurements of the forefoot and heel areas show relatively larger deformation with increased walking speeds as opposed to the midfoot. During roll-over, only the forefoot and heel have contact with the ground at the beginning and end of the stance phase, at which the GRFs peak; see Figure 4 [23]. The foot measurements increase with larger loads imposed on the foot; therefore, greater deformation of the foot shape occurs with speeding up of gait [27], especially at the forefoot and heel. Effect of Walking Speed on Plantar Pressure Distribution The MPP and PTI are commonly used measures to assess the foot biomechanics with different movements and are related to foot pain and the development of foot ulcers in diabetic patients [32]. The plantar of the foot in this study is divided into five specific areas: toes, metatarsal heads, medial midfoot, lateral midfoot, and heel. After implementing a one-way repeated measures ANOVA, the results showed that the MPP increases significantly at a faster walking speed except at the midfoot; see Table 3. However, the PTI decreases significantly with the exception of the medial midfoot and heel areas. Table 3. ANOVA of pressure measurements at slow, normal, and fast walking speeds. Regarding these foot deformations, shoes for daily wear may not best fit the foot shape during exercise with a rapid gait. Shoes with an arch-conforming shape are recommended to prevent the increase in stretch of the arch for pressure offloading during walking at a rapid pace [42]. In addition, a wider toe box should be used for diabetic footwear to accommodate the broadened forefoot with increases in walking speed. A heel cup in running shoes is also recommended to minimize the deformation of the soft tissues in the heel of the plantar. As shown in Figures 5 and 6, regardless of whether there is a significant difference, the walking speed has a positive effect on the MPP with the exception of the midfoot, while it shows a negative effect on the PTI. The MPP increases along with a decreased PTI at a more rapid walking speed. This can be attributed to the fact that a faster walking speed is associated with larger GRFs [31,40,43] with a similar amount of foot plantar contact, so higher pressure is found in most of the foot regions. 
Moreover, a faster gait speed also has a shorter total contact time, thus resulting in a lower PTI [40,44,45]. However, higher peak plantar pressure at a more rapid walking speed and higher accumulation of plantar pressure at a slower pace are found in this study, which might lead to skin breakdown and foot pain. Previous studies have concluded that walking at a rate of 4 km/h and wearing an ethylene-vinyl acetate (EVA) insole can reduce plantar pressure at the forefoot [46]. A heel pad with an auxetic structure has been proven to reduce the pressure in the heel area during walking [47]. Chen et al. [48] also recommended dividing the plantar of the foot into support and soft areas, so that honeycomb and auxetic structures for support and pressure offloading could be applied, respectively. Therefore, suitable offloading devices combined with appropriate speeds of walking are recommended for DM patients to maintain fitness and minimize their risk of DFUs. While walking is the most common physical activity because it can be done anywhere and requires no equipment, there are still reservations about promoting physical exercise for DM patients, and sometimes the advice given is "rest to avoid injury". However, studies show evidence that exercise does not increase the incidence of ulceration and re-ulceration of DM patients [49,50]. The foot serves as the basis for locomotion, and effective push off from the ground requires the MTP joints to undergo substantial flexion and extension [51]. A larger MTP dorsiflexion angle is found with faster walking speeds [52]. However, insufficient MTP joint dorsiflexion may shorten the step length, and this gait pattern may increase the risk of falls and injury [53]. Therefore, flexibility exercises that focus on the foot such as manually mobilizing the forefoot into dorsiflexion and brisk walking are recommended for DM patients to maintain or improve their range of motion in the ankle and foot. Appropriate walking speeds while using a suitable insole will help to increase the range of motion of the foot and reduce joint stiffness and peak foot pressure. It is therefore essential to take extra good care of diabetic feet during and after walking rather than eliminating exercise. Considering the clinical implications of the results of this study, plantar pressure changes with different walking speeds would be useful to recommend activities for patients with specific foot problems. For example, patients with forefoot pain should slow down, since a fast walking speed needs more range of motion of the first metatarsophalangeal joint, which increases the shear force contributing to skin breakdown [18,54]. Having said that, there are some limitations of this study. First, only 19 female subjects are involved, which is a fairly small sample size and limits the generalizability of the results. Future studies can involve a larger number of subjects. Despite the limitations, this study gives insight into the changes in the foot geometry and pressure at different walking speeds, thereby acting as a reference for studies that aim to optimize the wear comfort of diabetic footwear and insoles during daily physical exercise. Conclusions In summary, each foot measurement in this study increases with acceleration during locomotion, but the effect of the walking speed is insignificant. The shapes of the forefoot and heel show a larger percentage of size increase as opposed to the midfoot. 
Meanwhile, a faster walking speed significantly increases the mean peak pressure at the forefoot and heel areas. However, the PTI of each area of the foot decreases with the acceleration in gait, which suggests that a slower pace of walking cannot reduce the pressure accumulation with ground contact. Therefore, suitable offloading devices are recommended for diabetics during walking to prevent foot injury due to high pressure. Design features such as a wide toe box, medial arch support, and inclusion of heel cups are recommended to optimize the fit of the footwear for walking. Furthermore, custom-fabricated insoles with conforming 3D arch support and/or cushioning heel pads made of a plurality of materials such as ethylene vinyl acetate, polyethylene, etc., can help to redistribute the plantar pressure, hence reducing the risk of foot ulcers and preserving the mobility of diabetic patients.
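For readers who want to reproduce the statistical pipeline summarised in the Data Analysis section above (Shapiro-Wilk normality screening followed by a one-way repeated-measures ANOVA across the three speed conditions) outside SPSS, the sketch below shows an equivalent open-source workflow with scipy and statsmodels. The long-format column names and the synthetic values are assumptions for illustration only and are not the study data.

```python
import numpy as np
import pandas as pd
from scipy.stats import shapiro
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format table: one row per subject x speed condition
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "subject": np.repeat(np.arange(1, 20), 3),          # 19 subjects
    "speed": np.tile(["slow", "normal", "fast"], 19),    # within-subject factor
    "mpp_heel": rng.normal(loc=[180.0, 195.0, 215.0] * 19, scale=10.0),
})

# 1) Normality screening per condition (analogous to the Shapiro-Wilk step)
for speed, group in df.groupby("speed"):
    stat, p = shapiro(group["mpp_heel"])
    print(f"{speed}: W = {stat:.3f}, p = {p:.3f}")

# 2) One-way repeated-measures ANOVA with walking speed as the within factor
anova = AnovaRM(df, depvar="mpp_heel", subject="subject", within=["speed"]).fit()
print(anova)
```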
2023-02-22T16:10:55.521Z
2023-02-01T00:00:00.000
{ "year": 2023, "sha1": "873fa4f39c2b63a4161e2829fd82234abd120840", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "0a002873b5b26db8695933a038be7020b666aefe", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
11741447
pes2o/s2orc
v3-fos-license
Comparison of Clinicopathological Features and Treatment Results between Invasive Lobular Carcinoma and Ductal Carcinoma of the Breast Purpose The purpose of this study was to assess the incidence of invasive lobular carcinoma (ILC) and to compare the clinicopathological features and treatment results after breast conserving surgery (BCS) followed by radiotherapy between ILC and invasive ductal carcinoma (IDC). Methods A total of 1,071 patients who underwent BCS followed by radiotherapy were included in the study. Medical records and pathological reports were retrospectively reviewed. Results The incidence of ILC was 5.2% (n=56). Bilateral breast cancer, lower nuclear grade, and hormone receptor-positive breast cancer were more frequent in patients with ILC than in those with IDC. There were no cases of lymphovascular invasion or the basal-like subtype in patients with ILC. There were no statistically significant differences in patterns of failure or treatment outcomes between patients with ILC and those with IDC. The development of metachronous contralateral breast cancer was more frequent in patients with IDC (n=27). Only one patient with ILC developed contralateral breast cancer, with a case of ductal carcinoma in situ. Conclusion The incidence of ILC was slightly higher in our study than in previous Korean studies, but was lower than the incidences reported in Western studies. The differences we observed in clinicopathological features between ILC and IDC were similar to those described elsewhere in the literature. Although there were no statistically significant differences, there was a trend toward better disease-specific survival and disease-free survival rates in patients with ILC than in those with IDC. INTRODUCTION The breast is composed of adipose tissue and glandular tissue. A terminal ductal-lobular unit is the basic unit of the glandular tissue and is postulated as the site of origin of most breast cancers [1]. Invasive lobular carcinoma (ILC) arises in the lobule and is microscopically characterized by linear infiltration of small uniform cells. It is the second most common invasive breast cancer after invasive ductal carcinoma (IDC), accounting for 2% to 3% of breast cancers in Korean women [2][3][4][5][6]. Multicentric, bilateral, and estrogen receptor (ER)-positive cancers are frequent in patients with ILC compared to those with IDC [7][8][9]. Although some studies have claimed that the local recurrence rate after partial mastectomy in patients with ILC is high [10,11], the treatment outcomes of breast conserving surgery (BCS) followed by radiation therapy are comparable to those of mastectomy [12]. The purpose of the current study was to assess the incidence of ILC among patients who underwent BCS followed by radiotherapy and to compare the clinicopathological features and treatment results between ILC and IDC. 
Treatment All patients underwent upfront BCS. Patients who received preoperative chemotherapy were not included in the study. If the surgical margins were involved by ductal carcinoma in situ or invasive tumor, a re-excision was performed. Sentinel lymph node biopsies were performed in clinically node-negative patients, and axillary lymph node dissections were performed in clinically node-positive or sentinel lymph node-positive patients. Adjuvant chemotherapy was recommended for node-positive patients as well as those with tumors larger than 1 cm or basal-like subtypes. The chemotherapy regimens consisted of cyclophosphamide, methotrexate, and 5-fluorouracil (CMF); doxorubicin and cyclophosphamide (AC); 5-fluorouracil, doxorubicin, and cyclophosphamide (FAC); and AC followed by paclitaxel. Anthracycline-based chemotherapy was adopted in 2001 and replaced CMF chemotherapy beginning in 2004. Hormone therapy was recommended for patients with hormone receptor-positive tumors. Radiation therapy was started 4 to 6 weeks after surgery or completion of adjuvant chemotherapy or was delivered between AC and paclitaxel. The radiation field was matched to the tangential field covering the whole breast and the lower part of the level I and II axillary lymph nodes. The field-in-field technique or the wedge was used to improve the dose homogeneity. Supraclavicular fossa irradiation was performed in patients with pathological N2 or high-risk N1 disease. A median dose of 50.4 Gy (range, 50.0-50.4 Gy) at 1.8 to 2.0 Gy per fraction was delivered with 4 or 6 MV photon beams. An electron boost to the tumor bed with a median dose of 10.0 Gy (range, 6.0-12.0 Gy) was delivered to all patients except those with microinvasive carcinomas. Clinicopathological features Medical records and pathological reports were retrospectively reviewed to assess clinicopathological features including age, laterality, pathologic stage, nuclear grade, ER status, progesterone receptor (PR) status, human epidermal growth factor receptor 2 (HER2) status, extensive intraductal carcinoma (EIC), and lymphovascular invasion (LVI). Pathologic stage was classified according to the seventh edition of the American Joint Committee on Cancer Staging Manual [13]. The histologic grade was scored according to the Bloom-Richardson grading system and the Elston-Ellis modification of the Scarff-Bloom-Richardson grading system (Nottingham histologic score system) [14][15][16]. The hormone receptor status, HER2 status, and p53 protein expression were determined by immunohistochemical (IHC) staining. 
The tumors were classified into three IHC subtypes: luminal (ER-or PR-positive), basal-like (ER-, PR-, and HER2-negative), and erbB-2 overexpressing (ER-, PR-negative, and HER2-positive) [17]. EIC was defined as an intraductal carcinoma occupying more than 25% of the primary tumor with intraductal foci separate from the main tumor mass. Statistical methods The clinicopathological features of ILC and IDC were compared using Pearson chi-square test. Disease-specific survival (DSS) was measured from the date of surgery to the date of death from breast cancer, and deaths from other cancers or diseases were censored. Disease-free survival (DFS) was measured from the date of surgery to the date of any recurrence or to the date the patient was last known to be recurrence-free. Metachronous contralateral breast cancer was not considered recurrence. Kaplan-Meier analysis and log-rank tests were used to estimate and compare the DSS and DFS. Multivariate analysis was performed using the Cox proportional hazards model. A Bonferroni correction was applied for multiple testing. The SPSS statistical software version 18.0 (SPSS Inc., Chicago, USA) was used for statistical analyses. A p-value of less than 0.05 was considered statistically significant. Ethical consideration This study was approved by the Institutional Review Board of the Samsung Medical Center (IRB number: SMC 2015-05-018). Clinicopathological features Among 1,071 patients with invasive breast cancer, 56 patients (5.2%) were diagnosed with ILC. Table 1 shows the comparison of clinicopathological features between ILC and IDC. There were many cases in which EIC and LVI data were not reported, because it was not obligatory to report these until the development of a unified format for pathology reports in 2005. However, the proportions of cases in which the hormone receptor status and IHC subtype were unreported were less than 10%. The statistical analyses were performed excluding cases with unreported data. There were no statistically significant differences in age, pathologic stage, resection margins, EIC, or HER2 status. Statistically significant differences were found in laterality, nuclear grade, LVI, hormone receptor status, p53 status, and IHC subtype. Bilateral breast cancer was more frequent in patients with ILC than in those with IDC, at 7.1% vs. 1.5%, respectively. ILC was found to have a lower nuclear grade than IDC. There were no cases of ILC with LVI. The proportion of hormone receptor-positive breast cancers was higher in patients with ILC than in those with IDC. With respect to IHC subtypes, the erbB-2 overexpressing subtype was less frequent in ILC, and there were no instances of the basal-like subtype found among patients with ILC. Treatment results A total of 825 patients (77.0%) received chemotherapy after BCS. There was no statistically significant difference in regimens between patients with ILC and those with IDC (p = 0.494). A total of 699 patients (65.3%) received hormone therapy including tamoxifen or aromatase inhibitors. Among 722 patients (67.4%) with ER-or PR-positive cancers, 676 patients (63.1%) received hormone therapy. The median follow-up duration, calculated from the date of surgery, was 114 months (range, 5-238 months). During the follow-up period, 105 patients died of breast cancer and 15 patients died of other causes, including other cancers, myocardial infarction, intracranial hemorrhage, and pneumonia. The 10year DSS and DFS rates were 89.4% and 84.0%, respectively. 
Recurrence occurred in 163 patients (15.2%). There were no statistically significant differences in the patterns of recurrence between ILC and IDC ( Table 2). Twenty-eight patients (2.6%) developed contralateral breast cancer, which is not defined as recurrence, and all but one of these patients were initially diagnosed with IDC. Statistical analyses On univariate analyses, age, stage, nuclear grade, EIC, LVI, ER status, PR status, HER2 status, p53 status, and IHC subtypes were statistically significant prognostic factors for DSS. Age, stage, nuclear grade, LVI, PR status, HER2 status, and p53 status were statistically significant prognostic factors for DFS (Table 3). On multivariate analyses, age, LVI, and p53 status were statistically significant prognostic factors for DSS. Age, stage, and LVI were statistically significant prognostic factors for DFS (Table 4). DISCUSSION ILC is the second most common type of invasive breast cancer after IDC. In 2000, Li et al. [18] analyzed the data of the Surveillance, Epidemiology, and End Results (SEER) Program, including 240,018 patients with invasive breast cancer, and reported that 16,476 patients (6.9%) were diagnosed with ILC. In addition, they reported that the incidence rate of ILC increased steadily from 1987 to 1995 in women older than 50 years. Eheman et al. [19] reported on the changing incidences of IDC and ILC in the United States between 1999 and 2004. The incidence of ILC decreased from 11.7% to 9.3%. Interestingly, the authors noted differences in incidence rates according to race. The age-adjusted incidences of ILC were 11.2, 6.6, 4.4, and 3.6 per 100,000 in Caucasians, African-Americans, Asians, and American Indians, respectively. In Korean women, the incidence of ILC was reported as 2% to 3% [2][3][4][5][6]. In our study, it was 5.2%, slightly higher than in other Korean studies. However, even considering selection bias, the figure reported in our study was still lower than that in Western women. Many studies had reported several differences in clinicopathological features between ILC and IDC [7][8][9][20][21][22][23]. Tumor size, tumor grade, hormone receptor status, and incidence of contralateral breast cancer were the most commonly reported differences in various studies. ILCs were larger and had a lower tumor grade than IDCs. Approximately 76% to 93% of ILCs were hormone receptor-positive. Contralateral breast cancer was more common in patients with ILC, with a reported incidence of 14% to 21%. Although several studies reported statistically significant differences in average age between patients with ILC and those with other invasive carcinomas, the age gap was small (less than 3 years). In our study, there were no statistically significant differences in age, pathologic stage, resection margins, EIC, or HER2 status. Statistically significant differences were found in laterality, nuclear grade, LVI, hormone receptor status, p53 status, and IHC subtype. However, these results should be evaluated carefully, because our study included patients who were suitable for BCS, and EIC and LVI were unreported in more than 50% of cases in the patients with ILC. Several studies reported higher rates of local recurrence after BCS than after mastectomy and suggested mastectomy for patients with ILC [10,11]. In a recently published study by Fodor [24], DSS was not affected by the surgical extent, but the 15-year local recurrence-free survival rates were 77% and 89% after BCS and mastectomy, respectively (p = 0.005). 
However, among the 72 patients who underwent BCS, 19 patients (26.0%) did not receive adjuvant radiotherapy. Additional analysis revealed that the 15-year local recurrence rates for the BCS groups with or without adjuvant radiotherapy were 10% and 53%, respectively (p < 0.001). As adjuvant radiotherapy after BCS has been proven in the National Surgical Adjuvant Breast and Bowel Project clinical trial to decrease local recurrence [25], previous results of BCS should be carefully investigated, with this factor taken into consideration. Many studies have found that there are no differences in survival or local recurrence between patients with ILC and those with IDC after BCS followed by radiotherapy [21,[26][27][28][29][30]. The 10-year local recurrence rates were found to be 9%-18% and 7%-12% for ILC and IDC, respectively. In our study, while the differences were not statistically significant, there was a trend toward better DSS and DFS in patients with ILC. The 10-year DSS rates were 98.0% and 89.0% in ILC and IDC, respectively (p = 0.262), while the 10-year local recurrence rates were 3.6% and 8.8% in ILC and IDC, respectively (p = 0.457). The better outcomes might be due to a higher proportion of hormone-positive breast cancers in patients with ILC. The number of patients with ILC was too small to show statistically significant differences. Our study does have some limitations. First, we included only patients who underwent BCS, and this population does not represent the entire spectrum of invasive breast cancer patients. Second, although LVI was an independent prognostic factor in both univariate and multivariate analyses, it was not reported in 695 cases (64.9%). Depending on the LVI status in the cases in which it was unreported, the results of the statistical analyses could change. Finally, there was an evolution in the chemotherapy regimens over time, from CMF to anthracycline-based chemotherapy, and this change could have affected regional or distant recurrences. In conclusion, the incidence of ILC in our study was 5.2%, slightly higher than that observed in other Korean studies, but still lower than those reported in Western studies. Bilateral breast cancer, lower nuclear grade, and hormone receptor-positive breast cancer were more frequent in patients with ILC than in those with IDC. There were no cases of LVI or the basal-like subtype among the patients with ILC. There were no statistically significant differences in the patterns of treatment failure. The development of metachronous contralateral breast cancer was more frequent in patients with IDC (n = 27). Only one patient with ILC developed contralateral breast cancer, with ductal carcinoma in situ. Although the difference was not statistically significant, there was a trend toward better DSS and DFS rates in patients with ILC.
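As a hedged illustration of the survival workflow outlined in the Statistical methods section above (Kaplan-Meier estimation, log-rank comparison between histologic types, and a multivariate Cox proportional hazards model), the sketch below uses the open-source lifelines package rather than SPSS. The data frame, its column names, and the covariates are toy assumptions, not the study data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient table: months to event/censoring, event indicator,
# histology (1 = ILC, 0 = IDC), and age as an example covariate
df = pd.DataFrame({
    "months":   [120, 95, 60, 114, 30, 140, 80, 110, 45, 100],
    "recurred": [0, 1, 1, 0, 1, 0, 0, 1, 1, 0],
    "ilc":      [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],
    "age":      [48, 52, 39, 61, 45, 57, 50, 42, 66, 55],
})

# Kaplan-Meier disease-free survival curve for the ILC group
kmf = KaplanMeierFitter()
kmf.fit(df.loc[df.ilc == 1, "months"], df.loc[df.ilc == 1, "recurred"], label="ILC")
print(kmf.median_survival_time_)

# Log-rank test comparing ILC with IDC
ilc, idc = df[df.ilc == 1], df[df.ilc == 0]
result = logrank_test(ilc["months"], idc["months"],
                      event_observed_A=ilc["recurred"], event_observed_B=idc["recurred"])
print("log-rank p =", result.p_value)

# Multivariate Cox proportional hazards model (histology and age as covariates)
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="recurred")
cph.print_summary()
```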
2018-04-03T03:45:40.259Z
2015-09-01T00:00:00.000
{ "year": 2015, "sha1": "34b40e5c7f255b93aeab51a402376efe9f500882", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.4048/jbc.2015.18.3.285", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "34b40e5c7f255b93aeab51a402376efe9f500882", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
116250824
pes2o/s2orc
v3-fos-license
Predicting Human Location Using Correlated Movements : This paper aims at estimating the current location, or predicting the next location, of a person when the recent location sequence of that person is unknown. Inspired by the fact that the behavior of an individual is greatly related to other people, a two-phase framework is proposed, which first finds persons who have highly correlated movements with a person-of-interest, then estimates the person’s location based on the position information for selected persons. For the first phase, we propose two methods: community interaction similarity-based (CISB) and behavioral similarity-based (BSB). The CISB method finds persons who have similar encounters with other members in the entire community. In the BSB method, members are selected if they show similar behavioral patterns with a given person, even though there are no direct encounters or evident co-locations between them. For the second phase, a neural network is considered in order to develop the prediction model based on the selected members. Evaluation results show that the proposed prediction model under the BSB scheme outperforms other methods, achieving top-1 accuracy of 71.13% and 69.36% for estimations of current and next locations, respectively, with the MIT dataset and 92.31% and 92.03% in case of the Dartmouth dataset. Introduction Human mobility prediction can be used in many potential and promising applications, such as location-based recommendation services, contagious disease control, patient tracking, geographic profiling, and urban planning [1][2][3][4][5]. In particular, in order to prevent the outbreak of infectious diseases (e.g., malaria and flu), which usually spread due to people's travels and interactions, estimating the location of persons who are at a high risk of infection is crucial [1,4]. In geographic profiling, in order to strengthen security, law enforcement agencies need to analyze movements of suspects and need to determine their most likely position [5]. If someone's recent location is known using a global positioning system (GPS) or other localization techniques (e.g., Wi-Fi, cellular-based systems), it is relatively simple to estimate that person's current or next locations. However, there are cases where someone's recent location sequences are not available. For instance, the GPS function is required to turn off due to lack of battery power in a smartphone even though people want to share their positions. In addition, there are some cases where data is sparse [6,7] (e.g., call detail records, health record of individuals, and credit card spending data). Moreover, in some cases, it is necessary to estimate locations even when someone may not want to share that information (e.g., geographic profiling of criminals). Therefore, in this work, we consider a new problem which is estimating someone's current or next location, particularly when recent location sequence information is not available. Specifically, the mobility information of other Persons with Correlated Movements (PCM) with a given person is used to estimate the person-of-interest's current or future location, based on the fact that the behavior of one individual is largely related to other members. In other words, the current or next location of a given person is estimated using the known position information of other specific people. 
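Before the formal description, a small sketch may help fix ideas: given historical visit sequences, one simple way to realise the select-then-estimate strategy is to score candidate members by the similarity of their location-visit distributions to that of the person-of-interest and then take a majority vote over the current locations of the top-ranked members. This is only a generic similarity-based illustration under assumed toy data and names; it is not the proposed CISB or BSB selection method, nor the neural-network predictor, which are developed later in the paper.

```python
from collections import Counter
import numpy as np


def visit_distribution(visits: list, n_locations: int) -> np.ndarray:
    """Normalised histogram of how often a member visited each location ID."""
    hist = np.bincount(visits, minlength=n_locations).astype(float)
    return hist / hist.sum()


def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))


def select_correlated_members(target_hist, candidate_hists: dict, k: int = 2) -> list:
    """Top-k members whose visit distributions are most similar to the target's."""
    scores = {m: cosine(target_hist, h) for m, h in candidate_hists.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]


def estimate_location(members: list, current_locations: dict) -> int:
    """Majority vote over the known current locations of the selected members."""
    return Counter(current_locations[m] for m in members).most_common(1)[0][0]


# Toy example with 5 location IDs (e.g., base-station or AP identifiers)
n_loc = 5
history = {"A": [0, 0, 1, 2, 0], "B": [0, 1, 1, 2, 1], "C": [3, 4, 4, 3, 4]}
target_history = [0, 0, 1, 0, 2]          # person-of-interest's past visits
hists = {m: visit_distribution(v, n_loc) for m, v in history.items()}
chosen = select_correlated_members(visit_distribution(target_history, n_loc), hists)
print(chosen, estimate_location(chosen, {"A": 2, "B": 2, "C": 4}))
```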
Although there were several studies on the prediction of human location using social relationships [8][9][10], those works focused on finding direct social ties between people (e.g., friendships) using encounters or calling traces between them. However our work, in addition to friendships, considers behavioral similarity between people, which may not be reflected by direct encounters or co-location events between them. In this paper, we propose a two-phase human location prediction model. More specifically, in the first phase, PCMs with a given person are selected. For this phase, two novel PCM selection methods are proposed: community interaction similarity-based (CISB) and behavioral similarity-based (BSB). In the CISB method, interactions with other community members are considered as well as direct encounters between certain individuals, i.e., the strength of the social relationship between people is measured by not only their direct interactions but also their interactions with other members of society. The motivation is that if two people have a close social tie, then they may have similar patterns of meeting other members of the community. In contrast to the CISB method, which attempts to find PCMs with stronger social ties, the objective of the BSB method is to find PCMs with similar behavioral patterns. As a result, the selected PCMs under the BSB method may not have a strong social relationship with the person-of-interest. In the second phase, motivated by the fact that the movement of an individual is highly correlated with other people, the current or next location of a person is estimated using the position information of selected PCMs. A PCM-based location prediction (PLP) model is developed based on a neural network (NN) [11]. For this paper, two different datasets were used to evaluate the proposed prediction model. The first dataset provided cellular traces collected by the Massachusetts Institute of Technology (MIT) Media Lab [12], which recorded the time a mobile phone was associated with a base station. In a metropolitan area, small and short-range cells that cover distances of a few hundred meters are more popular [13]. Moreover, since people usually carry their phones all the time, the position of a base station can be used to represent a person's approximate location [10,[14][15][16][17][18]. The second dataset is the mobility data extracted from Wi-Fi traces collected in the Dartmouth university campus [19]. A log message was recorded whenever a mobile device associates or disassociates with an AP. Due to the relatively short range of the Wi-Fi technology, an AP can be used to reflect a person's location [20][21][22]. Experiments were performed in order to validate the proposed prediction model in various situations. We draw a comparison between the designed PLP model and the baseline one, called most frequent location (MFL), in which the most visited position is predicted as the location of the person-of-interest. Two proposed PCMs selection methods are also compared with three recent existing approaches: encounter frequency-based (EFB) [18,23,24], spatial closeness (SC) [6], and spatio-temporal closeness (STC) [6]. Specifically, EFB is rooted in encounter frequency, which is one of the natural characteristics of a friendship, i.e., two members who more frequently meet each other are likely to be closer friends. 
In SC, the persons who have the most similar distribution of visited locations to the person-of-interest are selected, meanwhile STC chooses members with the most synchronous movement with that of the person-of-interest. The main contributions of our work can be summarized as: • Our work considers a new problem in which human location should be estimated in a challenging situation, i.e., the recent historical location of the person-of-interest is not available. To address this challenge, a two-phase framework with low time and space complexity is proposed. Specifically, for a given person-of-interest, persons with correlated movements (PCMs) are first selected, then the location of the person-of-interest is estimated based on the spatio-temporal information of selected PCMs. The rest of this manuscript is organized as follows. Sections 2 and 3 discuss related studies of human mobility prediction and the two considered datasets, respectively. Then, the proposed framework is described briefly in Section 4. In Section 5, two methods for selecting PCMs in the first phase are presented. Section 6 explains the PCM-based location prediction model in detail. Then, in Section 7, we analyze the evaluation results of the proposed model with different PCMs selection methods. Finally, the conclusions from this work are drawn in Section 8. Related Work In this section, related studies on different aspects of human mobility are presented and compared with the proposed model. Human Mobility Analysis A number of studies were aimed at revealing human movement characteristics [25][26][27]. For example, Karamshuk et al. [27] classified the properties of human mobility into three groups: spatial, temporal, and connectivity. With regard to the spatial characteristic, they focused on geographic movement, i.e., how far a person moves and where a person goes. Flight was defined as a Euclidean distance between two consecutive spots visited by the same individual. Temporal features were also considered, e.g., pause-time indicates the time period a person stays at a specific location. Meanwhile, the connectivity property reflects the contact or encounter between two people, e.g., inter-contact time was defined as the elapsed time between two adjacent contacts for a pair of people. In addition, a few studies considered the effect of social relationships on the human movement [8,28]. Cho et al. [8] observed that people are likely to visit a distant place where a friend stays nearby, based on the datasets of online location-based social networks and cell phone location traces. Meanwhile, short-distance travel is less affected by social ties. Those studies mainly focused on capturing human mobility characteristics, whereas we plan to design a movement prediction model to estimate a person's current or future location. Note that embedding the properties of human mobility will be helpful for accurately predicting human movement. Therefore, in this work, a mobility prediction model is proposed, considering human movement characteristics such as encounter frequency and social correlation. Human Mobility Prediction Several studies tried to predict where a person stays given the prior information on historical locations of that person [2,6,18,[29][30][31][32]. Most of those studies are based on the Markov model. For instance, in order to predict the person's position in upcoming time slots, Pang et al. 
[29] proposed a modified Markov model considering spatio-temporal information (i.e., sojourn time and location transition preference). The authors claimed that the modified Markov model achieves higher prediction accuracy than the original Markov model. Alhasoun et al. [6] constructed a prediction model based on dynamic Bayesian networks with the assumption of knowing the last visited location of the person-of-interest and historical position data of 'strangers'. Strangers were defined as members who do not necessarily have a social link to the person-of-interest. Authors proposed three methods to determine strangers: temporal closeness, spatial closeness, and spatiotemporal closeness. In case of the temporal closeness approach, the members who have the most similar pattern of communication (e.g., call, sms, and data) to the person-of-interest were chosen. Meanwhile, the spatial closeness method selected the person with the most similar distribution of visited locations to the person-of-interest. The spatiotemporal closeness considered the chi squared test value to compute the closeness between two people p and q. Specifically, the contingency table was first constructed where the element at row i and column j represents the number of times persons p and q concurrently stay at location i and j, respectively. Then, the chi squared test value was used to measure the degree of association between two persons on the contingency table. In addition, Zeng et al. [32] first determined the missing data of human trajectories by using the Gibbs sampling algorithm. Then, a high-order Markov chain model was constructed to predict the most likely location of the person-of-interest. Meanwhile, Noulas et al. [2] considered two location prediction models based on linear regression and M5 model trees to address the problem of predicting the person's next location. Mobility features such as historical visits and temporal information were fed into the prediction model. The authors concluded that combining different mobility features achieved noticeably higher performance than using a single feature approach. Unlike models in [2,6,18,[29][30][31][32], we design a prediction model which does not require a prior location history of the person-of-interest. In addition to historical visit information, a number of factors can be used to predict human mobility, such as social friendships, location preferences, and temporal information. Among these factors, the strong relationship between person movement and friends was revealed in some studies [8,33,34]. Consequently, a number of mobility prediction models were designed with the support of friendship information [8][9][10]18,35,36]. For instance, Cho et al. [8] decomposed human mobility into two parts: periodic and socially correlated movements. The authors demonstrated that short-distance travel is usually affected by periodic mobility, whereas friendship tends to influence long-range movement. They first developed a human mobility prediction model assuming that people's periodic travels follow a mixed Gaussian distribution of home and workplace. Then, with consideration for social friendships, the probability of being in a location is estimated as a function of the time period during which a friend stays in that location and the distance from the person to his/her friend. Finally, a mobility probability distribution combining periodic and socially correlated movement is formulated using Bayes' theorem. 
Even though certain studies [8,9] did not require a location history for people, their models only work with datasets of geographic locations, e.g., GPS traces. Therefore, to address the limitation of [8,9], our work attempts to design a more flexible and applicable model that does not require a dataset of physical locations. Moreover, our model considers behavioral similarity between people as well as friendship. Note that behavioral similarity may not be revealed based on direct interactions or encounters between people. In regard to datasets of symbolic locations, among the models considering social correlation features to predict person movements, a few approaches worked with datasets of non-geographic locations [10,18]. In [18], a location prediction model was considered based on two factors: periodic movements and social relationships. Specifically, a Markov-based model was constructed to capture periodic movements while colocation frequency was used to measure the closeness between people. In order to reflect the impacts of both factors on human mobility, the location prediction model was built where a different weight was assigned to each factor. Meanwhile, Zhang et al. [10] proposed an algorithm called NextCell to predict the future locations of people. A boosting technique was used to combine two predictors that are based on periodic behaviors and social interplay. The periodic behavior predictor considers a probability distribution over locations. Meanwhile, in the social interplay predictor, the probability that two people co-locate at a given time was estimated as a function of phone call features. Although the above studies [10,18] consider the datasets of symbolic coordinations, those models do not take into account the impact of time features on human mobility which are believed to be an important factor for human mobility prediction [2,8,29]. Meanwhile, our work aims at considering both temporal and spatial information of social friends to design the movement prediction model. Moreover, for predicting the current or future location of a person, the proposed framework also takes into account other human mobility characteristics, e.g., encounter frequency, community interactions, and behavioral similarity. Social Community Detection A social network can be partitioned into different disjoint communities where people in the same community tend to have strong connection and similar behavior. In other words, being able to detect social communities can facilitate the prediction of human movement. Therefore, in our work in order to select PCMs for a given person, behavioral similarities and community interactions are considered, in addition to encounter frequency. A number of studies were conducted to detect social communities using graph clustering [24,37] where a network was divided into disjoint communities by using clustering techniques. There were several studies on community detection based on the contact history of members in the network, e.g., encounter frequency and duration [38] and the total number of past encounters of a person [39]. Eagle and Pentland represented the behavior of individuals from a set of primary vectors called eigenbehaviors [39]. Then, community affiliation can be inferred by computing the social behavior distances (e.g., the total number of past Bluetooth encounters) between a person and other members of a social circle. Dataset In this work, we consider two different datasets. 
The first one is called the MIT Reality Mining dataset [12], which was collected during a period of nine months with the attendance of 106 subjects, including students and faculty members of MIT. Since subjects in the MIT dataset are involved in the same university, the social relationships between people exist with a high probability. The MIT dataset provides cell tower logs including the tower transition events and a set of base stations seen by the participants. In cellular networks, a mobile phone can be within the ranges of several cellular towers. However, the phone is only associated with the tower with the strongest signal. The events of tower transitions are recorded with cell tower ID and a timestamp. Due to the fact that small and short-range cells that cover distances of few hundred meters are more popular in metropolitan areas [13], the cell tower logs can be used to represent human locations [10,[14][15][16][17][18]. Hence, this work uses the dataset of cellular traces in order to evaluate the movement prediction model. The subjects in the dataset participated in the experiment in different time periods, and some subjects have no data or very little data [24]. Therefore, by considering overlapping periods and available data, 43 people with sufficient mobility data were selected. M and m are defined as the set of chosen people and the cardinality of M, respectively. The second one, called Dartmouth dataset [19], provides the mobility traces extracted from logs of APs in the Dartmouth university campus. A log message including the timestamp, user ID, and AP ID was recorded when a mobile device connects or disconnects to the AP. Because of the short range of the Wi-Fi technology, human mobility can be represented as a sequence of connected APs [20][21][22]. For the Dartmouth dataset, a 4-month period from 3 January to 30 April 2004 was considered since the during this period the academic campus was relatively consistent [20,40]. Similar to the MIT dataset, persons whose mobility data was provided less than 75% over the experiment period was filtered out. Then, the Dartmouth dataset includes 162 mobile users. Location Extraction In this subsection, the way to extract locations for people is described. In this work, location information based on time slots is used. Note that a mobile device may be connected to several base stations (e.g., cell towers or access points) during a time slot. Therefore, in such cases, the way to determine the locations of people in a time slot is needed. Let λ denote a threshold for location extraction (0 ≤ λ ≤ 1). By using λ, the representative base station with which the phone is associated the most is determined, since a mobile device may connect to several base stations during a time slot. Specifically, if the ratio of the time a person spends at a base station to the time slot length exceeds λ, this base station is regarded as the representative human location in the specific time slot. In this work, λ and time slot length are set to 0.5 and 30 min, respectively. In cases where no base station satisfies the conditions for location extraction, or where a mobile phone does not receive any signal during a time slot, the person's location in that time slot is marked as undefined. In case of the MIT dataset, positions at which people rarely stay should be pruned. The locations are first arranged in descending order of occurrence frequency in the dataset. Then, a location set that contributes 98% of the cellular traces is selected for use in this work. 
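To make the extraction rule above concrete, the following is a minimal Python sketch of the per-slot location extraction and the 98% pruning step. It assumes the raw association logs have already been aggregated into per-slot association durations; the function names and input format are illustrative assumptions, not from the paper.

```python
from collections import defaultdict

SLOT_SECONDS = 30 * 60   # 30-minute time slots, as used in the paper
LAMBDA = 0.5             # threshold lambda for location extraction

def representative_location(association_seconds):
    """Pick the representative base station for one time slot.

    `association_seconds` maps base-station ID -> seconds the device was
    associated with that station during the slot.  If no station covers
    more than LAMBDA of the slot, the slot is marked undefined (None).
    """
    if not association_seconds:
        return None
    station, seconds = max(association_seconds.items(), key=lambda kv: kv[1])
    return station if seconds / SLOT_SECONDS > LAMBDA else None

def prune_rare_locations(slot_locations, coverage=0.98):
    """Keep only the most frequent locations that jointly account for
    `coverage` of the defined observations; the rest become undefined."""
    counts = defaultdict(int)
    for loc in slot_locations:
        if loc is not None:
            counts[loc] += 1
    total = sum(counts.values())
    if total == 0:
        return list(slot_locations)
    kept, running = set(), 0
    for loc, c in sorted(counts.items(), key=lambda kv: -kv[1]):
        kept.add(loc)
        running += c
        if running / total >= coverage:
            break
    return [loc if loc in kept else None for loc in slot_locations]
```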
Let L denote the set of all symbolic locations in the dataset after the pruning process. In the MIT dataset, the total number of positions remaining after the above pre-processing is 482, i.e., |L| = 482. Meanwhile, the Dartmouth dataset contains 399 APs in total. Table 1 summarizes the main characteristics of both datasets.

Table 1. Summary of the two datasets after location and user extraction.

Problem Definition

Recall that M and L are the set of m members in the entire community and the set of locations, respectively. We consider the problem of predicting the position of person p (1 ≤ p ≤ m) at the current or a future time t under the precondition that the recent locations of person p are unknown. Assume that the historical positions of the (m − 1) other members can be observed; we can therefore exploit the historical information of the remaining people to predict where person p stays at the current or future time. Let y_t denote the location of person p at time t, and let R denote the spatio-temporal information of the remaining members during the historical period. The problem of predicting human location is then formulated as follows: the objective is to predict where person p is most likely to be at time t given the recent information R, i.e., to maximize the probability P(y_t | R),

    \hat{y}_t = \arg\max_{y_t \in L} P(y_t \mid R).

Two-Phase Mobility Prediction Framework

The proposed human mobility prediction framework consists of two phases, i.e., detection of persons with correlated movements (PCMs) and PCM-based location prediction, as shown in Figure 1. Let r denote the number of selected PCMs. In the first phase, r of the (m − 1) remaining members are extracted as PCMs of person p. Then, given the location information of the chosen members, the current or future position of person p is predicted.

In the first phase, we measure the social correlation between person p and each of the (m − 1) other members independently. In order to choose r useful PCMs for person p, these (m − 1) people are ranked according to their correlation scores, and the r members with the highest scores are selected. The training set is used in the first phase, where the input is a vector x that represents the positional information of the (m − 1) remaining members. Table 2 shows an example of vector x when the location of an arbitrary person p is estimated; x consists of spatial and temporal information. More specifically, x^loc_1, x^loc_2, ..., x^loc_{p−1}, x^loc_{p+1}, ..., x^loc_m denote the positions of the (m − 1) remaining members, while x^day and x^time account for the day and time slot indices, respectively. The first phase returns the r PCMs of person p as its output.

Then, the location prediction phase estimates the most likely current or future position of person p using the spatio-temporal information of the r selected PCMs. Each sample of the training set is given by (z^{i_2}_{i_1}, y_t), where the label y_t is the location of person p at time t and the feature vector z^{i_2}_{i_1} collects the location information of the selected PCMs at the time slots between i_1 and i_2. In the proposed framework, a conditional probability distribution is estimated with the objective of maximizing the probability P(y_t | z^{i_2}_{i_1}); that is, given the location information of the PCMs at time slots between i_1 and i_2, the position of person p at time t is predicted. In this work, when (i_1, i_2) = (t − 1, t − 1), the location of the PCMs in the previous time slot (t − 1) is used to predict the next location of person p (i.e., to estimate person p's future location at time t).
Meanwhile, (i_1, i_2) = (t − 1, t) indicates that the model estimates the current position of the person using the previous and current location information of the PCMs, and if i_1 = i_2 = t, the current position is estimated given the information of the PCMs at time t only. Hereafter, the model is regarded as estimating the location of a person at time slot t. In the second phase, the PCM-based location prediction model is used to label an unknown sample as one out of |L| possible locations.

The proposed framework has several benefits. Our model does not require a recent location sequence of person p when predicting the current or future location of person p, which is beneficial in cases where the location information of that person is not available. Note, however, that the location information of person p is necessary for training the parameters of the prediction model. The proposed framework first selects r PCMs based on human mobility characteristics, e.g., encounter frequency and behavioral patterns. Then, the location information of only these chosen PCMs is used to predict the person's location at time t. As a result, by reducing redundant input features of the second phase, the overfitting problem can be mitigated. Moreover, the proposed framework allows for low time and space complexity.

Detection of Persons with Correlated Movements

In this section, two methods for selecting PCMs are proposed: the community interaction similarity-based (CISB) and behavioral similarity-based (BSB) methods. Each uses a different measurement score to estimate the closeness between people's movement patterns. Then, the r PCMs with the highest scores are selected.

Community Interaction Similarity-Based Method

Recall that the encounter frequency-based (EFB) method only uses direct encounters between two individuals to estimate their friendship. In contrast, to measure correlation scores between persons p and q, CISB considers the interactions of these persons with other community members as well as the direct encounters between them. The CISB method is inspired by the fact that interactions and relationships with other members of the community have a great impact on a person's behavior.

The CISB method utilizes the community interaction-based similarity (CIS) tensor shown in Figure 2, which represents the spatio-temporal encounters between people. The 3D CIS tensor S consists of m layers, each of which is a matrix; e.g., S_{:,:,q} reflects the interactions between person q and the rest of the (m − 1) people. More specifically, the p-th row of this matrix, S_{p,:,q}, indicates the temporal encounters of persons p and q. Let n_day and n_time denote the total number of days in the overlapping period and the number of time slots per day, respectively, and let n = n_day × n_time be the total number of samples per person during the whole period. The entry a^q_{p,t} := S_{p,t,q} accounts for the encounter between persons p and q in time slot t, so the vector S_{p,:,q}, representing the encounters between persons p and q, is defined element-wise as

    a^q_{p,t} = \begin{cases} 1, & \text{if persons } p \text{ and } q \text{ are in the same cell during time slot } t, \\ 0, & \text{otherwise.} \end{cases}

A person is not considered able to encounter himself or herself, because the tensor S represents the interactions between a person and the rest of the community, i.e., ∀t, p : S_{p,t,p} = 0. The CIS tensor also has a symmetry property, i.e., ∀t : S_{p,t,q} = S_{q,t,p}. From the CIS tensor, we reshape every 2D matrix S_{:,:,q} into a one-dimensional vector with (m × n) elements, which is denoted by s^{(q)}.
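As a rough illustration of this construction, the sketch below builds the CIS tensor S from an (m × n) matrix of per-slot locations. The integer encoding (with −1 for undefined slots) and the function names are assumptions made for the example, not part of the paper.

```python
import numpy as np

def build_cis_tensor(loc):
    """Build the community interaction-based similarity (CIS) tensor.

    `loc` is an (m, n) integer array where loc[p, t] is the symbolic
    location of person p in time slot t, and -1 marks undefined slots.
    Returns S with shape (m, n, m), where S[p, t, q] = 1 iff persons p
    and q are observed in the same cell during slot t (the diagonal
    S[p, :, p] stays zero, since a person cannot encounter himself).
    """
    m, n = loc.shape
    S = np.zeros((m, n, m), dtype=np.int8)
    for p in range(m):
        for q in range(p + 1, m):
            same = (loc[p] == loc[q]) & (loc[p] >= 0)
            S[p, :, q] = same.astype(np.int8)
            S[q, :, p] = S[p, :, q]          # symmetry: S[p,t,q] = S[q,t,p]
    return S

def flatten_layer(S, q):
    """Flatten S[:, :, q] row by row into the (m * n)-element vector s^(q)."""
    return S[:, :, q].reshape(-1)
```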
In order to evaluate the closeness between persons p and q, we use the meet/min correlation coefficient [41] to measure the similarity between s^{(p)} and s^{(q)}, which is well suited to vectors containing many 1s. The score d_{p,q} between nodes p and q is calculated as

    d_{p,q} = \frac{s^{(p)} \cdot s^{(q)}}{\min\left(|s^{(p)}|, |s^{(q)}|\right)},        (3)

where |s^{(q)}| is the L_1-norm of vector s^{(q)}. After obtaining the scores between person p and the (m − 1) others, the r people with the highest scores are chosen as PCMs.

Because of the characteristics of the CIS tensor, i.e., ∀t, p : S_{p,t,p} = 0, and the dot product in the numerator of Equation (3), the direct interactions between persons p and q vanish unintentionally when measuring the similarity d_{p,q}. For example, in calculating d_{1,2}, the two vectors s^{(1)} and s^{(2)} are

    s^{(1)} = [ S_{1,:,1}, S_{2,:,1}, ..., S_{m,:,1} ] = [ 0, ..., 0, S_{2,:,1}, ..., S_{m,:,1} ],
    s^{(2)} = [ S_{1,:,2}, S_{2,:,2}, ..., S_{m,:,2} ] = [ S_{1,:,2}, 0, ..., 0, S_{3,:,2}, ..., S_{m,:,2} ].

Since the first n elements of s^{(1)} and the (n + 1)-th to (2n)-th elements of s^{(2)} are zeros, the similarity score d_{1,2} estimated using Equation (3) does not account for the encounters of persons 1 and 2. To avoid this vanishing problem, when calculating d_{p,q} the CIS tensor is temporarily modified as S_{p,t,p} = S_{p,t,q} and S_{q,t,q} = S_{q,t,p}. For example, the modified vectors s̃^{(1)} and s̃^{(2)} corresponding to s^{(1)} and s^{(2)}, respectively, become

    s̃^{(1)} = [ S_{1,:,2}, S_{2,:,1}, S_{3,:,1}, ..., S_{m,:,1} ],
    s̃^{(2)} = [ S_{1,:,2}, S_{2,:,1}, S_{3,:,2}, ..., S_{m,:,2} ].

By using the modified CIS tensor, the CISB method takes the interactions with all community members into account.

Behavioral Similarity-Based Method

Persons p and q are said to have similar behavior if they exhibit correlated visit patterns, so that the location of one person can be estimated using the other's positional information. The places that two people with similar behaviors visit at the same time are called correlated locations; note that correlated locations may be different places. As seen in Table 3, persons p and q show many similar behaviors even though they never encounter each other. For example, person p exercises at a gym whenever person q visits a café. Measuring the behavioral similarity between persons is beneficial because the location of a person can be inferred from the information of another person with a correlated behavioral pattern. For instance, the location of p at 3 p.m. on day 2 can be predicted using person q's location (café). Based on this observation, the BSB method chooses the persons with the highest behavioral correlation as PCMs.

The key difference between the BSB method and the friendship-based methods (i.e., CISB and EFB) is as follows. When selecting a PCM, the BSB method does not consider actual friendships between people, whereas the friendship-based methods extract co-location information to measure friendships. Specifically, in the EFB method, the members with the highest number of encounters with a given person are chosen as PCMs. As shown in Table 3, persons p and u encounter each other several times (in math class and the laboratory), which indicates a close friendship between them. Therefore, in friendship-based methods, person u is likely to be chosen as a PCM of person p. Meanwhile, for two people with similar behaviors but different correlated locations, one is unlikely to be selected as a PCM of the other. For example, persons p and q share the same behavior of having dinner at 5 p.m., but at restaurant B and restaurant C, respectively. Since there is no encounter between persons p and q, in friendship-based methods person q does not obtain a high enough score to be selected as one of person p's PCMs.
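Before turning to the BSB network, the following sketch ties the CISB pieces above together: it computes the meet/min score d_{p,q} on the temporarily modified CIS tensor and returns the r highest-scoring members. It assumes the tensor layout of the previous sketch; the helper names are illustrative only.

```python
import numpy as np

def cisb_scores(S, p):
    """Meet/min similarity between person p and every other person q,
    computed on the temporarily modified CIS tensor so that the direct
    p-q encounters do not vanish from the dot product."""
    m, n, _ = S.shape
    scores = np.full(m, -np.inf)
    for q in range(m):
        if q == p:
            continue
        Sp = S[:, :, p].astype(np.int64)     # copies, so S itself is untouched
        Sq = S[:, :, q].astype(np.int64)
        # temporary modification: S[p,t,p] := S[p,t,q] and S[q,t,q] := S[q,t,p]
        Sp[p, :] = S[p, :, q]
        Sq[q, :] = S[q, :, p]
        sp, sq = Sp.reshape(-1), Sq.reshape(-1)
        denom = min(sp.sum(), sq.sum())      # L1 norms of the binary vectors
        scores[q] = (sp @ sq) / denom if denom > 0 else 0.0
    return scores

def select_pcms(scores, r):
    """Indices of the r members with the highest correlation scores."""
    return list(np.argsort(scores)[::-1][:r])
```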
On the other hand, the BSB method exploits exactly this correlated behavior when choosing PCMs. In the BSB method, a feed-forward neural network (NN), shown in Figure 3, is constructed to evaluate the behavioral similarity between persons p and q. Here, x^loc_{q,t} denotes the location of person q at time t, while x^day_t and x^time_t denote the day and time slot indices.

The training process is as follows. First, the symbolic location x^loc_{q,t} is embedded into an |L|-dimensional indicator vector using one-hot encoding, because there is no ordinal relationship between locations. Similarly, the time slot and day indices (x^day_t and x^time_t, respectively) are also converted to indicator vectors. These vectors are then used as input units of the neural network, which has one hidden layer with 150 logistic sigmoid activation nodes. The input features propagate through the network until reaching the softmax output layer, which yields a conditional probability distribution over locations. Θ denotes the parameters of the NN, consisting of the weights W and the biases, which are randomly initialized. Let η denote the number of samples in the training dataset. After retrieving the conditional probabilities at the output layer, the regularized cost function J(Θ) based on the cross-entropy error [42] is calculated as

    J(Θ) = - \frac{1}{η} \sum_{j=1}^{η} \left( t^{(j)} \right)^{T} \log f^{(j)} + Ω(W),

where f^{(j)} and t^{(j)} are the estimated output and target vectors of the j-th instance, respectively, log(.) is an element-wise operation, and Ω(W) is the L_2-based regularization term. The neural network is trained to minimize the cost function J(Θ) using a back-propagation algorithm combined with gradient descent.

In order to select the r PCMs of person p, (m − 1) NNs are trained separately, where each NN measures the similarity of behavioral patterns between individuals p and q (p, q ∈ M, p ≠ q). The prediction accuracy ξ_{p,q} of the q-th NN, which indicates the behavioral similarity of persons p and q, is defined as the ratio of the number of correctly predicted instances to the number of test samples. In the BSB method, the r PCMs with the highest ξ_{p,q} among the (m − 1) remaining members are selected.

It is worth noting that there are alternative ways of selecting a set of PCMs. For example, forward search (FS) [43] begins with an empty set and then progressively incorporates each feature (i.e., PCM) into the set. However, FS has an inherent weakness, namely high time and space complexity when a large number of features need to be selected. Another simple approach is to use the positional information of all (m − 1) remaining people as input features for a prediction model that labels the location of a person at time t with one of the |L| positions. In our work, this simple method is called ignored feature selection (IFS), since there is no feature selection process. Note that IFS may need to train a model with tens or hundreds of thousands of features, which may result in a long training time and might suffer from overfitting.

In order to gain insight into computational costs, the number of training parameters of the three methods (IFS, FS, and BSB) is compared. For IFS, there is no PCM selection phase, i.e., the location of person p is predicted given the positional information of the (m − 1) other people. With FS, to choose the first PCM, (m − 1) NNs are trained independently.
To select the second PCM, a combination of the first PCM and one of the (m − 2) remaining members is examined; (m − 2) NNs need to be learned, each of which estimates person p's location at time slot t given the contextual information of two examined people. Similarly, to select the r th PCM, (m − r) NNs are used to evaluate the subset of the (r − 1) already selected PCMs and one of the (m − r) remaining members. In the FS method, a large r value leads to a significant increase in the number of training parameters. In the BSB and FS methods, it is assumed that three selected PCMs are used for the location prediction phase, where a neural network with three hidden layers of 500, 150, and 50 neurons is considered. For a fair comparison, the number of hidden neurons for IFS is proportional to that of the BSB/FS methods in the mobility prediction phase with the ratio (m − 1) : r because (m − 1) and r are the numbers of people whose location information is given to the IFS and BSB/FS methods, respectively. Table 4 compares the number of training parameters of the three methods, where l BSB , l IFS , and l FS denote the number of parameters, i.e., weights and biases, in behavioral similarity-based, ignored feature selection, and forward search methods, respectively. When the total number of people, m, increases, the three considered methods need to train more parameters. However, the BSB method always learns the least number of parameters. Moreover, the increase rate is quite different in each method, e.g., when m varies from 43 ≈ 54.6 times with l FS . In summary, because only location information of PCMs is used to predict human mobility of a given person, the proposed BSB method incurs a much lower computation cost in terms of time and space complexity than IFS and FS, which makes the BSB method more practical, particularly when there is a large number of people. The PCM-Based Location Prediction (PLP) Model In this section, we propose the PCM-based location prediction (PLP) model based on the neural network to predict human mobility given the movement patterns of r selected PCMs. Figure 4 illustrates the NN-based prediction model where the location of a person at time t is estimated given the information of r PCMs, i.e., z i 2 , is mapped onto real vectors using the one-hot encoding function c(.), where output is the indicator vector in which only one element is set to 1 while others are 0. Since |L| is the size of the extracted location set, c(z loc 1,i 1 ), c(z loc 1,i 2 ), ..., c(z loc r,i 2 ) are vectors in B |L| , where B is a set of binary numbers; c(z day t ) is a day-index vector in B 7 . Meanwhile, the cellular trace between 8 a.m. and 12 p.m. is extracted and each time slot lasts 30 min; therefore, z time t is mapped onto a 32-dimensional binary vector. In addition, grouped time slots are considered. In this case, c(z time t ) is a three-dimensional vector where each dimension corresponds to one of the three parts of a day, i.e., morning, afternoon, and evening. 2. The NN classifier maps input sequence c(z i 2 probability distribution over locations. Let v (i) denote an indicator vector in which only the i th element is 1 and others are 0 (1 ≤ i ≤ |L|). Vector t is defined as the target vector of the NN classifier. The output of the NN is denoted by vectorŷ NN where the i th element,ŷ i , estimates posterior probabilityŷ i = P(t = v (i) |c(z i 2 i 1 )) that the person stays in location i given the information of r PCMs. 3. 
The vector v_max = \arg\max_{v^{(i)}} P(t = v^{(i)} | c(z^{i_2}_{i_1})), which indicates the most likely location of the person, is mapped onto the symbolic position y^pred_NN by the inverse function, i.e., y^pred_NN = c^{-1}(v_max).

Now the neural network training process, which includes feed-forward propagation, a cost function calculation, and a parameter update, is presented. First, feed-forward propagation is described. In this work, µ denotes the number of hidden layers, and a neural network with logistic sigmoid activation units, denoted by σ(.), is considered. Hereafter, the input layer is referred to as layer 0, the j-th hidden layer as layer j (1 ≤ j ≤ µ), and the output layer as layer (µ + 1). Let Θ = {W^{(1)}, W^{(2)}, ..., W^{(µ+1)}, b^{(1)}, b^{(2)}, ..., b^{(µ+1)}} denote all learning parameters, consisting of the weights and biases of the NN classifier. More specifically, W^{(j)} and b^{(j)} are the matrix of weights and the vector of biases, respectively, for the connections from layer (j − 1) to layer j, and h^{(j)} is the state vector at layer j, given by

    h^{(j)} = σ\left( W^{(j)} h^{(j-1)} + b^{(j)} \right),   1 ≤ j ≤ µ,

where h^{(0)} is the vector of input features at layer 0. For the output layer, the unnormalized output vector h^{(µ+1)} is calculated as

    h^{(µ+1)} = W^{(µ+1)} h^{(µ)} + b^{(µ+1)}.

Then, the output of a softmax function represents the categorical distribution over locations, where the i-th element of the output vector, reflecting the probability that the person stays at time t in location i, is calculated as

    ŷ_i = \frac{ \exp\left( h^{(µ+1)}_i \right) }{ \sum_{k=1}^{|L|} \exp\left( h^{(µ+1)}_k \right) },

where h^{(µ+1)}_i is the i-th value of the unnormalized output vector h^{(µ+1)}.

Secondly, the calculation of the cost function is briefly described. Assume that there are η training instances. The cost function J(Θ), using the cross-entropy metric and regularization, can be expressed as

    J(Θ) = - \frac{1}{η} \sum_{j=1}^{η} \left( t^{(j)} \right)^{T} \log ŷ^{(j)}_{NN} + Ω(W),

where ŷ^{(j)}_{NN} and t^{(j)} are, respectively, the estimated output and target vectors of the j-th instance, and W denotes the weight matrices. Note that log ŷ^{(j)}_{NN} is an element-wise operation, and the regularization Ω(W) is implemented using the L_2 parameter norm penalty.

Finally, the parameters, including weights and biases, are obtained by applying a back-propagation algorithm, which minimizes the cost value using gradient descent with momentum. Let α and γ denote the learning rate of gradient descent and the momentum coefficient, respectively (0 ≤ γ ≤ 1), and let θ denote a training parameter in the set Θ, i.e., θ ∈ Θ. Then, each θ is updated in the j-th epoch as

    ω ← γ ω − α \frac{∂J(Θ)}{∂θ},    θ ← θ + ω,

where the partial derivative ∂J(Θ)/∂θ is the gradient of the cost function with respect to θ, and ω is the current velocity vector with the same dimensions as the parameter θ.

Evaluation Results and Discussion

In this section, the performance of the human mobility prediction framework is examined under different PCM detection methods and compared with the baseline prediction model, most frequent location (MFL). The whole dataset is randomly partitioned into training, validation, and test sets at a ratio of 5:2:3. Recall that for the Dartmouth dataset a 118-day period is selected and, for each day, human mobility from 8 h to 24 h is considered. Since each time slot lasts 30 min, there are 32 time slots per day in total. Therefore, the number of samples in the Dartmouth dataset is 118 × 32 = 3776, and the training, validation, and test sets consist of 1886, 755, and 1133 instances, respectively. The training set is used to fit the model, while the purpose of the validation set is to determine the appropriate hyper-parameters for the neural network, e.g., the number of hidden layers, hidden units, activation units, and the learning rate.
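As a minimal sketch of the classifier and training step described in the previous section (sigmoid hidden layers, a softmax output over the |L| locations, and gradient descent with momentum, here with the paper's default α = 0.4 and γ = 0.1), the following illustrates the forward pass and one parameter update. The back-propagated gradient is assumed to be available, and all names are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())                  # shift for numerical stability
    return e / e.sum()

def forward(h0, weights, biases):
    """Feed-forward pass: sigmoid hidden layers (1..mu), a linear output
    layer (mu+1), and a softmax over the |L| locations."""
    h = h0
    for W, b in zip(weights[:-1], biases[:-1]):
        h = sigmoid(W @ h + b)
    unnormalised = weights[-1] @ h + biases[-1]
    return softmax(unnormalised)

def momentum_step(theta, grad, velocity, alpha=0.4, gamma=0.1):
    """One update of a single parameter array with classical momentum:
    velocity <- gamma * velocity - alpha * grad ; theta <- theta + velocity."""
    velocity = gamma * velocity - alpha * grad
    return theta + velocity, velocity
```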
Note that only the training set is used in the first phase of the prediction model and the performance of model is estimated based on the test set. Table 5 presents the setup for the mobility prediction framework. Specifically, the two proposed PCM selection methods consisting of community interaction similarity-based (CISB) and behavioral similarity-based (BSB) are evaluated and compared with three recent selection approaches: encounter frequency-based method (EFB) [18,23,24], spatial closeness (SC) [6], and spatiotemporal closeness (STC) [6]. Table 6 shows a list of acronyms which are used in the manuscript. Recall that Alhasoun et al. [6] constructed a prediction model based on dynamic Bayesian networks which leveraged historical location data of selected PCMs to predict the most probable position of the person-of-interest. However, the prediction model in [6] requires to know the last visited place of the person-of-interest, which is not applicable to our considering problem where the historical position data of the person-of-interest is assumed to be unknown. Note that since there was no existing prediction model designed for the considering problem, the PCM-based location prediction (PLP) model is used in the experiments. Additionally, we make a performance comparison between our two proposed PCMs selection approaches and counterpart methods in [6]. There were three similarity measurement metrics to select PCMs in [6], and specifically the first method, temporal closeness, compares patterns of communication (e.g., call, sms, and data) between people. Due to the requirement of extra communication information of members, the temporal closeness is not considered as a counterpart method for selecting PCMs in our work. Therefore, we compare the proposed PCMs selection methods with only two approaches in [6]: STC and SC. For the PLP model, a variety of NN architectures which have a different number of layers, hidden units, and activation functions were examined. Then, among the considered setting, the most appropriate NN architecture was selected by using the validation set. Specifically, there is 1 hidden layer of 150 sigmoid activation units in the NN of the BSB approach. In the NN-based predictor, 500, 150, and 50 sigmoid hidden units are used in the first, second, and third hidden layers, respectively. Weight and bias values are initialized with normal distribution. The L 2 regularization coefficient is set to 0.01. Learning rate in the gradient descent and the momentum coefficient are set to 0.4 and 0.1, respectively. Full batch learning with 1500 iterations is examined. Note that only the training set is used to select the r PCMs in the first phase of the prediction models. For example, in case of the Dartmouth dataset, r PCMs are determined by using the training set of 1866 samples which are randomly selected from the total 3776 instances. Moreover, three types of temporal information are considered. The number of selected PCMs varies from 1 to 5, while top-k accuracy is considered with k = {1, 2, 3, 4}. In addition, un-grouped and grouped time slot features are compared. The bold text in the second column of Table 5 denotes default values of the prediction model. In this work, performance metrics including average and standard deviation of prediction accuracy are obtained over 5 runs. The results are collected via averaging the entire people. The most noticeable results are summarized as follows. 
From the conducted experiments, the performance of designed PLP model outperforms that of the baseline method, MFL. With regard to the PCMs selection methods, the proposed BSB shows significantly better performance than other PCM extraction approaches. In addition, the generalization capability of the proposed framework increases as a larger number of PCMs are embedded into the model. In particular, this increase is more clearly shown in case of BSB. Performance Comparison of PCM Selection Methods In this subsection, the performance results of PCMs extraction methods are discussed with different temporal features of selected PCMs. The location of a person during time t is predicted with the support of PCMs' positions during the time slot (t − 1) (denoted by pre-loc), time slot t (denoted by cur-loc), and both (t − 1) and t time slots (denoted by pre-cur-loc). Recall that the time slot lasts 15 min. The obtained results are shown in Figure 5. First, we evaluate the predictability of the baseline prediction model, most frequent location (MFL). As can be seen in Figure 5, the MFL model achieves much lower performance than the proposed PLP architecture, and specifically leads to top-1 accuracy of 38.62% and 70.92% with MIT and Dartmouth datasets, respectively. Since the MFL model does not use information of PCMs to predict where person p stays at time t, the temporal feature of PCMs does not affect the performance of MFL model. Now, the performance of PLP model is analyzed with different PCM selection methods. In EFB and CISB methods, members with the most similar movement paths are selected as PCMs, i.e., people who stay the most frequently with a given person. The main difference between these two methods is that the EFB scheme weights the link of two people based on direct encounters between them, whereas the CISB method compares community interactions between persons to estimate their social correlation. Even though in some cases the CISB method obtains slightly higher accuracy than the EFB method, the two generally achieve similar performance. This indicates that a social relationship between two persons can be mainly reflected by direct interactions between them. In contrast, the BSB method aims at selecting PCMs with the most correlated behavioral patterns. Our work shows that discovering behavior patterns helps to significantly improve prediction accuracy of the model compared to the friendship-based method. In fact, for members who work in the same university or company, they tend to have similar lifestyles (e.g., behavioral pattern) owing to the common schedules of the workplace. Therefore, considering behavioral similarity between people can be helpful in selecting better PCMs for predicting people's locations. Specifically, when embedding pre-cur-loc information of PCMs selected by the BSB scheme into the PLP model, we achieve top-1 prediction accuracy as high as 71.00% and 92.31%, respectively, with MIT and Dartmouth datasets. In the SC scheme, regardless of encounters between two people, members who have similar spatial distribution, or longevity of visiting places are selected as PCMs. The SC method may choose PCMs who have different frequency or regularity of staying in locations with the person-of-interest. For example, if two people p and q spend 4 h at the library everyday, the SC method is likely to choose person q as a PCM of p. 
However, if person p stays at the library in the whole afternoon while q spends 2 h in the morning and 2 h in the evening, using location information of q may not be appropriate to predict the mobility of person p. Meanwhile, in case of the STC method, two people have a high closeness score if they move in a synchronous manner or their movement are highly dependent. Note that PCMs in both STC and BSB tend to move in a synchronous way with the person-of-interest. However, a fundamental difference between BSB and STC is that the STC method does not consider the temporal information when measuring the similarity between two people. Specifically, the STC approach computes the spatial distribution of person p given the location of person q. Meanwhile, the BSB method investigates the spatial distribution of person p given both spatial and temporal information of person q. By considering the temporal data when selecting PCMs, the BSB approach is able to measure movement association with both spatial and temporal aspects. As a consequence, the BSB method achieves significantly higher prediction accuracy than the STC one with both datasets. However, the performance gap between BSB and STC in the Dartmouth traces is noticeably smaller than that in the MIT one. The gap difference may come from different characteristics of human mobility in two datasets. Recall that the STC scheme does not consider the dependence between human movement and the temporal information. Therefore, if human mobility extracted from a dataset (e.g., the Dartmouth) is not highly related to the temporal data, the STC can achieve better accuracy than the case of strong relationship between human movement and the temporal information. One should note that BSB, SC and STC approaches do not take into account the actual encounter or friendship between people. The experiment results in Figure 5 indicate that friendship-based methods (EFB and CISB) causes the lower performance than other approaches (BSB, SC, and STC) in which friendship is not considered. In addition, the proposed BSB approach achieves the most accurate prediction among the considered PCMs selection methods. For example, using location data of PCMs at time (t − 1) and t, the PLP model predicts human locations at time t with top-1 accuracy of 81.64%, 82.57%, 85.92%, 82.07%, and 92.31% using EFB, SC, STC, CISB, and BSB methods, respectively, in case of the Dartmouth dataset. Also, from the observations in Figure 5, among three considered temporal features, pre-cur-loc and cur-loc lead to quite similar performance in both datasets. For example, in case of the MIT traces, the PLP model achieves 71.13% and 71.00% accuracy in predicting the location of person p given pre-cur-loc and cur-loc information of PCMs, respectively. As anticipated, using data of PCMs during time t (cur-loc) can result in a more accurate prediction than the use of (t − 1) (pre-loc). Top-k Accuracy In this subsection, we evaluate the top-k accuracy of the proposed PLP model when k = {1, 2, 3, 4}. Other parameters are set to the default values shown in Table 5. For top-k accuracy, the model outputs a list of k location(s) with the highest probability. If the correct location belongs to the list of k element(s), the prediction is considered accurate. As expected, Figure 6 shows that the obtained accuracy increases from top-1 to top-4. 
For example, in the case of the MIT dataset, the BSB approach achieves 71.00%, 82.17%, 85.53%, and 87.59% accuracy for top-1, top-2, top-3, and top-4, respectively. In all examined PCM selection schemes, the steepest rise is observed when k changes from 1 to 2; after that, the outcomes tend to plateau. As can also be observed from Figure 6, the prediction accuracy is higher for the Dartmouth dataset than for the MIT one. This observation is attributed to the fact that human movement in the MIT dataset was collected over a wider area than in the Dartmouth traces. More specifically, all APs in the Dartmouth dataset are located on the university campus, while the movement trajectories of people in the MIT dataset are not bounded by the school area. Therefore, people's mobility in the MIT dataset tends to be more divergent and more difficult to predict than in the Dartmouth traces.

Effects of the Number of PCMs

In this subsection, the number of selected PCMs varies from 1 to 5. As seen in Figure 7b, using the location information of one PCM (i.e., r = 1), the PLP model obtains a top-1 accuracy of 86.53% with the BSB method, compared with 70.92% for the MFL model. Meanwhile, the PLP model with the EFB, SC, STC, and CISB approaches achieves 76.58%, 76.94%, 79.57%, and 76.70% accuracy, respectively. It should be emphasized that adding more PCMs generally enhances the performance of the prediction model. In the friendship-based methods (CISB and EFB), this is attributed to the fact that a person usually interacts with multiple PCMs rather than a single person during the day. For example, in the morning, person p encounters co-worker q at the workplace; at lunch, p and r have an appointment at their favorite restaurant; and in the afternoon, persons p and s go to the fitness center to exercise. In the other PCM selection approaches, the performance gain with more PCMs is due to the fact that a person's movement is correlated with different PCMs depending on the time of day. For instance, assume that person p is a middle-aged man. He may have a behavioral pattern similar to that of a younger co-worker q during the daytime, whereas after work he may behave more like other middle-aged men than like the younger co-workers.

Moreover, the performance gap between the BSB and friendship-based methods (CISB and EFB) becomes larger as the number of PCMs embedded in the model increases, especially in the MIT dataset. This is because the PCMs selected by CISB and EFB tend to have encounter patterns with person p that are more similar to one another than those selected under the BSB method. To show this, the KL divergence [44] is used to measure the similarity between the temporal encounter distributions of the PCMs and person p. The vector q denotes the probability distribution of encounters between persons p and q: the i-th element of q represents the probability that persons p and q encounter each other during time slot i. Likewise, the vector r indicates the distribution of encounters between persons p and r. The length of r and q equals the number of samples for each person. Using the KL divergence, the difference between the two probability distributions is given by

    D_{KL}( q \,\|\, r ) = \sum_{i} q_i \log \frac{q_i}{r_i},

where q_i and r_i are the i-th elements of the vectors q and r, respectively. If the KL divergence is small, the encounter distributions of the PCMs with person p are close, which indicates similar movement patterns between the PCMs. Assume that a mobility prediction model with r PCMs is considered.
If another PCM with highly similar movement patterns to those of the r existing PCMs is added at input, the location information of the added PCM would not be helpful in predicting location of person p. As shown in Table 7, in cases where the number of selected PCMs is 5, the average value of KL divergence between PCMs' patterns of encounters with a person-of-interest is 11.01, 11.03, and 15.19 in EFB, CISB, and BSB methods, respectively. We can also observe from Table 7 that KL divergence values of SC and STC approaches are generally higher than friendship-based ones since SC and STC do not consider encounters between people. Lower KL divergence values from EFB and CISB methods indicate that they tend to select PCMs with more similar mobility patterns than the BSB method, and accordingly, movement patterns of the selected PCMs under the BSB method are more divergent than those under EFB/CISB methods. This contributes to a large accuracy gap between the BSB and friendship-based methods. The results also agree with the assumption that correlated locations in the BSB method do not have to be the same places. In addition, as shown in Figure 8, it is clear that grouping time slot feature achieves higher prediction accuracy than with un-grouped features. The results reflect that people tend to spend time with each other for a relatively long period. This becomes more understandable in environments like university campuses or companies, where people usually stay with their colleagues for a significant amount of time during the day. After work, they may enjoy leisure activities with friends in the evening. Another reason for the low performance with the un-grouped case may be the small number of data samples compared with the large input dimensions in the un-grouped case. Concluding Remarks Human mobility prediction has been a key point for the success of a variety of potential applications. For instance, with the development of device-to-device communication technologies, being able to accurately predict human locations will facilitate the design of an efficient data routing protocol in opportunistic networks. Moreover, the location prediction model inspires further applications such as urban data mining, location-based recommendation services, and contagious disease control. In addition, location estimation is required even when someone may not be willing to share their locations (e.g., geographic profiling of criminals). Therefore, in this work, we address the mobility prediction problem in which the current or next location of a person is estimated, especially without requiring the positional history of that person. Since the movement of a person is highly related to other people, a two-phase framework is proposed in which persons with correlated movements (PCMs) with the person-of-interest are determined in the first phase. Then, information of these selected PCMs is leveraged in the second phase to estimate the location of the person-of-interest. For selecting PCMs, the communication interaction similarity-based (CISB) method considers encounter interactions between people, whereas the behavioral similarity-based (BSB) scheme selects PCMs who have similar behavioral patterns, instead of considering co-location between people. For the considered problem, our approach is robust, because it can reduce overfitting by only using the location information of the selected PCMs to estimate the positions of the person-of-interest. Low time and space computational complexity is also achieved. 
Furthermore, geographic locations are not required in our model. Experimental results show that the BSB method gains significant improvement over other PCMs extraction approaches. In addition, the performance generally increases when more PCMs were embedded into the designed prediction model. In particular, a large number of PCMs is more beneficial under the BSB method than others. However, our PCMs selection methods are centralized approaches because of requiring mobility traces of all people. Therefore, the proposed PCMs extraction methods are restricted to small or medium networks. In addition, information assurance needs to be taken into account since mobility data of people is exchanged in the network. As a future work, to address the limitations of the centralized approaches, we plan to design a PCMs extraction method which does not require mobility traces of all people. Furthermore, datasets with more people will be used to examine the proposed prediction model in the future work.
2019-04-16T13:29:21.955Z
2019-01-03T00:00:00.000
{ "year": 2019, "sha1": "e6c181627a51ca68f056b13fe05fd7a66025c862", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-9292/8/1/54/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "4725829f138250c5819c71c24a6318af151abe27", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
17384112
pes2o/s2orc
v3-fos-license
Cyclic Weighted Centroid Algorithm for Transmitter Localization in the Presence of Interference This paper addresses the problem of localizing a non-cooperative transmitter in the presence of a spectrally overlapped interferer in a cognitive receiver (CR) network. It has been observed that the performance of non-cooperative weighted centroid localization (WCL) algorithm degrades in the presence of a spectrally overlapped interferer. We propose cyclic WCL algorithm that uses cyclic autocorrelation (CAC) of received signals at CRs in the network to estimate the location coordinates of the target transmitter. Performance of the proposed algorithm is further improved by eliminating CRs in the vicinity of the interferer from the localization process. In order to identify and eliminate CRs in the vicinity of the interferer, the ratio of the variance and the mean of the square of absolute value of the CAC, referred to as feature variation coefficient (FVC), is used. Theoretical analysis of the cyclic WCL algorithm is presented in order to compute the root mean square error in the location estimates. We further study the impacts of the interferer's power and location, CR density, and fading environment on the performance of cyclic WCL. The comparison between cyclic WCL and traditional WCL is also presented. Shailesh Chaudhari and Danijela Cabric Index Terms-Cyclic autocorrelation, cyclic cross-correlation, cyclostationarity, feature variation coefficient, non-cooperative localization, ratio of quadratic forms in Gaussian random vector. I. INTRODUCTION The knowledge of transmitter location is required in heterogeneous networks and cognitive radio networks for advanced spectrum sharing techniques including location-aware smart routing and location-based interference management [2]. In this paper, we address the problem of estimating the location coordinates of a non-cooperative transmitter (target) in the presence of a spectrally overlapped interference in a cognitive radio network. Under spectrally overlapped interference, the traditional Weighted Centroid Localization (WCL) algorithm [3] results in higher localization errors, since it uses only the received signal power for localization.
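For orientation, the traditional WCL baseline referred to above is simply a received-power-weighted average of the known CR coordinates. Below is a minimal sketch of that baseline; the function name, the plain power weighting, and the toy numbers are illustrative assumptions on our part, and the cited variants [3], [20]-[26] refine how the weights are assigned.

```python
import numpy as np

def wcl_estimate(cr_locations, received_powers):
    """Traditional WCL: power-weighted centroid of the known CR coordinates.

    cr_locations: (K, 2) array of CR coordinates known at the central node.
    received_powers: (K,) array of received signal powers (linear scale).
    """
    w = np.asarray(received_powers, dtype=float)
    locs = np.asarray(cr_locations, dtype=float)
    return (w[:, None] * locs).sum(axis=0) / w.sum()

# Toy example: four CRs on a square; the CR nearest the target sees the most power.
crs = np.array([[50.0, 50.0], [-50.0, 50.0], [-50.0, -50.0], [50.0, -50.0]])
powers = np.array([1.0, 0.3, 0.1, 0.3])    # hypothetical linear received powers
print(wcl_estimate(crs, powers))           # estimate pulled towards the first CR
```

Because the weights are raw received powers, an interferer that overlaps the target's band inflates the weights of CRs near it, which is precisely the failure mode that the cyclostationarity-based approach described next is designed to avoid.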
In this work, we utilize distinct cyclostationary features of the overlapping signals that arise from different symbol rates, in order to estimate the location of the target transmitter. We propose Cyclic WCL and improved Cyclic WCL algorithms to estimate the target location in the presence of spectrally overlapped interference. Spectrally overlapped interference arises in the following two scenarios. First, let us consider a scenario where the Shailesh Chaudhari and Danijela Cabric are with the Department of Electrical Engineering, University of California, Los Angeles, 56-125B Engineering IV Building, Los Angeles, CA 90095-1594, USA (email: schaud-hari@ucla.edu, danijela@ee.ucla.edu). Part of this work has been published to the proceedings of IEEE GLOBE-COM, 2014, Austin, Tx, USA [1]. This material is based upon work supported by the National Science Foundation under Grant No. 1117600. target transmitter (Tx-1) and the interferer (Tx-2) belong to two different networks sharing a common frequency spectrum in a heterogeneous network. For example, consider that Tx-1 is a Wi-Fi transmitter, while Tx-2 is an LTE base station coexisting in the same frequency band. Such coexistence of LTE and Wi-Fi networks is considered for LTE-unlicensed band around 5GHz [4]- [9]. The medium access protocol of the two networks is different and the lack of coordination among the two networks results in interfering transmissions of Tx-1 and Tx-2. In this case, the cyclostationary features of the signals transmitted from Tx-1 and Tx-2 are different due to different symbol rates in LTE and Wi-Fi systems. The cognitive receivers (CRs) in the network can exploit the distinct cyclostationary properties of Tx-1 in order to estimate its location coordinates. For the second scenario, we consider transmitter localization problem in a CR network in the presence of a jammer [10]- [12] that transmits energy in the same band as Tx-1. In this case, the traditional WCL algorithm [3] results in higher localization error because it uses only the received signal power at each CR. On the other hand, the proposed Cyclic WCL and improved Cyclic WCL provide robustness against such interference by using distinct cyclostationary properties of the Tx-1 signal. Non-cooperative localization is commonly used in CR networks where the target transmitter does not cooperate with CRs in the localization process. In this scenario, localization techniques based on Time of Arrival (ToA) or Time-Delay of Arrival (TDoA) are not applicable. In this paper, we consider techniques that do not require ToA or TDoA information and estimates the target location using the received signal at the CRs in the network. A. Related Work Several non-cooperative localization techniques have been proposed in the literature to estimate the location coordinates of the target transmitter. These techniques can be broadly classified as range-based [13]- [15] and range-free [3], [16]- [26]. Range-based techniques provide better estimate of the target location than range-free techniques, but require accurate knowledge of wireless propagation properties such as path-loss exponent. Accurate information of the path-loss exponent is difficult to obtain and it may not be available. In such a case, range-free techniques such as centroid localization [16]- [18] are used to perform coarse-grained target localization without any knowledge of the path-loss exponent. 
In the Weighted Centroid Localization (WCL) algorithm [17], [18], the target location is approximated as the weighted average of all CR locations in the network. The CR locations are known at the central processing node where the WCL algorithm is implemented. There are different variations of the WCL algorithm proposed in the literature to improve the localization performance [3], [20]- [26]. Theoretical analysis of the WCL algorithm and its distributed implementation are presented in [3]. Different weighting strategies and CR selection techniques are presented in [20] to reduce the adverse impact of the border effect and to improve root mean square error (RMSE) performance of the algorithm. The technique presented in [21] selects reliable CRs that are located closer to the target and uses them in the localization process. Papers [22] and [23] propose direction vector hop (DV-hop) and received signal power based fuzzy logic interference model, respectively, to assign WCL weights. The WCL algorithm that takes into account self-localization error of CRs is presented in [26]. In the algorithms presented in [3], [17], [18], [20]- [26], the weights for each CR location are computed based on the received signal power from the target. However, in the presence of a spectrally overlapped interference in the network, the received signal power at each CR is the summation of powers received from the target and the interferer. Therefore, localization errors in existing algorithms increase significantly due to the spectrally overlapped interference. Hence, there is a need to modify the WCL algorithm when a spectrally overlapped interferer is present in the network. B. Summary of Contributions and Outline In this paper, we propose Cyclic WCL algorithm to estimate the location coordinates of the target transmitter (Tx-1) in a CR network in the presence of a spectrally overlapped interference (Tx-2). The location coordinates of the CRs in the network are known at the central processing node, where the localization algorithm is implemented. The cyclostationary properties of the target signal are used to compute weights in the Cyclic WCL algorithm. The proposed localization algorithm does not require any knowledge of the path-loss model and the location of the interferer. The contributions of this paper are as follows: 1) A cyclostationarity-based localization algorithm referred to as Cyclic WCL is proposed in order to estimate the target location in the presence of a spectrally overlapped interference. In the proposed algorithm, the central processing node estimates the target location as a weighted sum of the CR locations, where weights for CR locations are computed based on the cyclic autocorrelation (CAC) of the received signal at that CR. 2) Theoretical analysis of the proposed algorithm is presented. The RMSE in the location estimate is computed as a function of the target and the interferer locations, their transmitted powers, and the CR locations. 3) The performance of Cyclic WCL is further improved by eliminating CRs in the vicinity of the interferer from the localization process. The ratio of the variance and mean of the square of absolute value of the CAC of the received signal at each CR is used to identify and eliminate CRs in the vicinity of the interferer. Theoretical analysis for the improved Cyclic WCL is also presented in order to compute the RMSE. This paper is organized as follows.
Section II describes the system model, Cyclic WCL and improved Cyclic WCL algorithms. Section III presents theoretical analysis to compute the RMSE. Simulation results in various scenarios are provided in Section IV. Finally, Section V concludes this work. In this paper, we denote vectors by bold, lowercase letters, e.g., a or θ. Matrices are denoted by bold, uppercase letters, e.g., A. Scalars are denoted by non-bold letters, e.g., x k or R r k . Transpose, conjugate, trace and determinant of matrices are denoted by (.) T , (.) * , Tr(.), and det(.), respectively. Finally, the terms Tx-1 and target are used interchangeably. II. SYSTEM MODEL AND ALGORITHM A. System Model In this paper, we consider a target-centric CR network, where the target is located at the origin and its coordinates are given by L t = [x t , y t ] T = [0, 0] T . There are K CRs in the network, distributed in a square of side a meters around the origin. The locations of the CRs are known at the central processing node and are denoted by The knowledge of the interferer's location is not required while estimating the target's location coordinates. We assume that the target and the interferer locations are different, i.e., L t = L i . The schematic of the system is shown in Fig. 1. Let p t and p i denote transmitted powers from the target and the interferer, respectively. The signals transmitted from the target and the interferer are √ p t s t (t) and √ p i s i (t), respectively. Both s t and s i are unit power signals. The signals s t and s i partially or completely overlap in the frequency domain. The algorithm presented in this paper is applicable to single carrier as well as multi-carrier OFDM signals, since both signal types exhibit cyclostationary properties. The single carrier and multi-carrier signal models are described below. 1) Single carrier signal model: Single carrier target and interference signals can be expressed as where T s is the sampling period, a l and b l are transmitted data symbols, g and h are pulse shaping filters, and T g and T h are symbol periods of the signals s t and s i , respectively. The carrier frequencies of s t and s i are f t and f i , respectively. The data symbols a l and b l are assumed to be i.i.d. and zero mean. For simplicity of notation, we write g n,l = g(nT s −lT g ) and h n,l = h(nT s − lT h ). CRs Target (Tx-1) 2) Multi-carrier signal model: An OFDM target signal with N c,t sub-carriers and sub-carrier spacing ∆f t can be expressed as [27] where c κ,l are data symbols on κ th sub-carrier, g n,l = g(nT s − lT g ) is the window function. The duration of OFDM symbol is T g = 1/∆f t + T cp , where T cp is the duration of cyclic prefix. The data symbols c κ,l are assumed to be i.i.d. and zero mean. Similarly, a multi-carrier interfering signal can be expressed as where N c,i , d κ,l , and h n,l denote the number of sub-carriers, symbols on κ th sub-carrier, and the window function, respectively. Let α t and α i be the cyclic frequencies of the target and the interferer signal, respectively. For single carrier signals, the values of the cyclic frequencies of a signal depend on its modulation type and symbol rate [28]. For the CAC used in this paper, the cyclic frequencies are at integer multiples of the symbol rate. It should be noted that OFDM signals also have cyclic frequencies at integer multiples of symbol rate due to the existence of cyclic prefix [27]. In this paper, we assume that the cyclic frequencies of the target and interferer signals are different, i.e., α i = α t . 
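To make the signal model concrete, the sketch below generates complex-baseband target and interferer signals with the distinct symbol rates used later in the simulations (20 MHz and 25 MHz at f_s = 200 MHz), so that their cyclic frequencies differ, and superimposes them at one CR. The rectangular pulse shape, 4-QAM symbols, and the particular power and noise values are our own illustrative assumptions rather than the paper's exact waveforms.

```python
import numpy as np

def single_carrier(n_symbols, samples_per_symbol, rng):
    """Baseband 4-QAM symbols with a rectangular pulse shape (illustrative g(t))."""
    sym = (rng.choice([-1.0, 1.0], n_symbols) + 1j * rng.choice([-1.0, 1.0], n_symbols)) / np.sqrt(2)
    return np.repeat(sym, samples_per_symbol)

fs = 200e6                                        # sampling frequency, as in Section IV
rng = np.random.default_rng(1)

# Different symbol rates give different cyclic frequencies (alpha = symbol rate).
s_t = single_carrier(400, int(fs / 20e6), rng)    # target:     alpha_t = 20 MHz
s_i = single_carrier(500, int(fs / 25e6), rng)    # interferer: alpha_i = 25 MHz
N = min(len(s_t), len(s_i))

p_t_k, p_i_k, noise_var = 1.0, 0.5, 0.1           # hypothetical received powers and noise
w = np.sqrt(noise_var / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
r_k = np.sqrt(p_t_k) * s_t[:N] + np.sqrt(p_i_k) * s_i[:N] + w   # received signal at CR k
```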
The cyclic frequency of the target signal is known at the CRs in the network. To simplify the analysis, we consider sampling frequency f s = 1/T s > max(α t , α i ). Let us denote the powers received at the k th CR from the target and the interferer by p t,k and p i,k , respectively (p dB t,k and p dB i,k in logarithmic scale). In order to compute the received powers, we take into account path-loss and shadowing effects. We assume that the signals s t and s i undergo independent shadowing due to the location difference in the target and the interferer. Therefore, p dB t,k and p dB i,k are given by where p dB t is the power transmitted (in dB) by the target and p dB i is the power transmitted by the interferer, γ is the pathloss exponent and d 0 is reference distance. The variables q t,k and q i,k are used to model the shadowing effect on the target and the interferer powers. As mentioned above, q t,k and q i,k are independent variables and are not identical due to different locations of the target and the interferer. Let us consider a vector q t = [q t,1 , q t,2 , ...q t,K ] T consisting of shadowing variables for the power received from the target at K CRs. We consider uncorrelated shadowing at K CRs in the network. For log-normal shadowing effect, q t is modeled as a Gaussian random vector with zero mean and covariance matrix σ 2 q I K , where I K is a KxK identity matrix (q t ∼ N (0, σ 2 q I K )). We assume that the shadowing statistics for the target and the interferer power are the same. Therefore, the vector corresponding to the interferer power is q i = [q i,1 , q i,2 , ...q i,K ] T ∼ N (0, σ 2 q I K ). Further, the noise power at each CR is denoted by σ 2 w . The AWGN noise at k th CR is w k ∼ CN (0, σ 2 w ). Each CR samples the received signal at the sampling frequency f s = 1/T s . The received signal at k th CR is given as |r k (n)| 2 e −j2παtnTs 4: Compute the target location estimate at the central pro- B. Cyclic WCL In Cyclic WCL, the CRs do not require the knowledge of transmitted powers (p t , p i ) or received powers (p t,k , p i,k ) in order to estimate the target location. Each CR in the network observes N samples of the received signal r k and computes the non-asymptotic CAC of r k at the known cyclic frequency α t = 1/T g of the target signal usinĝ In the above equation, (ˆ) indicates non-asymptotic estimate based on N samples. In order to reduce the impact of the interferer signal at the cyclic frequency α t , the number of samples N > 10 fs ∆α , as shown in Appendix C, where ∆α = |α t − α i | and x denotes the smallest integer not less than x. In order to find N , the knowledge of ∆α is required. In our system, α t is known and α i can be estimated using cyclostationary spectrum sensing [28]. From the estimate of α i , ∆α and hence N can be estimated. It should be noted that the knowledge of α i is required only to obtain a lower bound on N and the proposed algorithm does not depend on the accuracy of estimation of α i , as shown in results in Section IV-C. UsingR αt r k from (7), the central processing node estimates the location coordinates of the target aŝ It should be noted that the locations of the CRs, the target and the interferer should remain constant until N samples of the received signal are collected. Therefore, we assume that the locations do not change for time duration N T s . We also assume that received powers p t,k and p i,k remain constant for this duration. C. 
Improved Cyclic WCL From the expression of the Cyclic WCL estimates in (8), it is observed that the target location estimates are the weighted average of the CR locations and the weights are computed using the strength (square of absolute value) of the CAC of the received signal at each CR. Therefore, the target location estimateL t is closer to the CR which has the highest value ofR r k . From (6) and (8), intuitively it is expected that the location estimates are closer to the target's actual location when the interferer power is low. As the interferer power increases, the value ofR r k at CRs in the vicinity of the interferer increases. Therefore, the location estimates move away from the target and towards the interferer's location. However, by eliminating the CRs in the vicinity of the interferer from the localization process, the impact of the interferer can be reduced. Based on this argument, we propose the improved Cyclic WCL algorithm that reduces the location estimation error due to the interferer by eliminating the CRs in the vicinity of the interferer. First, let us define the transmit power ratio (ρ) and the received power ratio at the k th CR (ρ k ) as In the above equation, it is observed that if the k th CR is in the proximity of the interferer, we have p i,k > p t,k . On the other hand, for a CR in the proximity of the target, we have p i,k < p t,k . Therefore, the received power ratio (ρ k ) for a CR in the proximity of the interferer is smaller than for a CR in the proximity of the target. Further, we define Feature Variation Coefficient (FVC) at the k th CR as As shown in the Appendix C and D, for a sufficiently large value of N N > 10 fs ∆α , φ k is a strictly monotonically decreasing function of ρ k and 0 ≤ φ k ≤ 1 and it can be written as In order to illustrate how φ k depends on the location of the k th CR, contour plots of φ k at various locations in the network are shown in Fig. 2. It is observed that in the proximity of the target, φ k → 0 . On the other hand, φ k → 1 in the proximity of the interferer. In the four figures in Fig. 2, we can see how higher interferer power (lower ρ) changes the φ k values at various locations in the network. From the above discussion, it is clear that if the central processing node has the knowledge of φ k at each CR, it can set a threshold φ 0 on the FVC value and eliminate CRs for which φ k > φ 0 , since they are closer to the interferer. Thus in the improved Cyclic WCL, each CR computes φ k from the received signal according to (10) and sends out this information to the central processing node. The CRs do not require the knowledge of ρ and ρ k to compute φ k . It should be noted that in (10), φ k depends on var(R αt r k ) and E[|R αt r k | 2 ]. In a practical system, the k th CR estimates the variance ofR αt r k and mean of |R αt r k | 2 using M realizations ofR αt r k . The FVC at k th CR is an estimate based on M realizations ofR αt r k and is denoted byφ M k . Therefore, we havê where v s is sample variance given by v s = Here (R αt r k ) i is the i th realization ofR αt r k . The number of realizations M required to estimate accurate value of φ k is obtained using confidence interval of the estimateφ M k . The analysis to find sufficient number of realizations M for satisfactory performance of the algorithm is presented in Section III-B1. 
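A compact sketch of the quantities just introduced may help: the lag-zero CAC estimate of the received signal at the target's cyclic frequency, the feature variation coefficient computed from M realizations of that estimate, and the thresholded weighted centroid that drops CRs whose FVC exceeds the threshold. The function names, the 1/N normalisation of the CAC, and the use of plain sample statistics over the M realizations are our assumptions; the formal estimator definitions follow below.

```python
import numpy as np

def cac(r, alpha, fs):
    """Non-asymptotic lag-zero CAC estimate of r at cyclic frequency alpha
    (1/N normalisation assumed): mean of |r(n)|^2 * exp(-j 2 pi alpha n Ts)."""
    n = np.arange(len(r))
    return np.mean(np.abs(r) ** 2 * np.exp(-2j * np.pi * alpha * n / fs))

def fvc(cac_realizations):
    """Feature variation coefficient: sample variance of the CAC over M realizations
    divided by the sample mean of its squared magnitude."""
    R = np.asarray(cac_realizations)
    return np.var(R, ddof=1) / np.mean(np.abs(R) ** 2)

def improved_cyclic_wcl(cr_locations, cac_values, fvc_values, phi_0):
    """Weighted centroid over the CRs with FVC <= phi_0, with weights |CAC_k|^2.
    Assumes at least one CR passes the threshold."""
    keep = np.asarray(fvc_values) <= phi_0
    w = np.abs(np.asarray(cac_values)[keep]) ** 2
    locs = np.asarray(cr_locations, dtype=float)[keep]
    return (w[:, None] * locs).sum(axis=0) / w.sum()

# Usage outline: for CR k, phi_hat_k = fvc([cac(r, alpha_t, fs) for r in m_signal_blocks_k]);
# the plain Cyclic WCL estimate corresponds to phi_0 = 1, i.e. no CR is eliminated.
```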
From the knowledge ofφ M k , the central processing node estimates the target location by including CRs for whichφ where (R αt r k ) M is the M th realization ofR αt r k and 1(φ M k ≤ φ 0 ) is an indicator function that is 1 forφ M k ≤ φ 0 and 0 otherwise. It should be noted that the locations of the CRs, the target and the interferer should remain constant until M N samples of the received signal are collected at each CR. We assume that the locations remain constant for the time duration M N T s . We also assume that received powers p t,k and p i,k remain constant for this duration. In (13), φ 0 is a design parameter of the improved Cyclic WCL. Two important points should be noted while selecting the value of φ 0 . First, as observed in Fig. 2, if φ 0 is reduced for a fixed transmit power ratio, then CRs in a smaller area are included in Cyclic WCL. It means that only the CRs At each CR k = 1, 2..K, collect N samples of the received signal r k (n), n = 1, 2..N . 4: Compute CAC estimate: 5: end for 6: Using above M realizations ofR αt r k , computeφ M k using (12). 7: Compute the suboptimal FVC threshold φ sub 0 and estimate the target location using (13). confined in the area whereφ M k ≤ φ 0 are included in the Cyclic WCL. Second, if φ 0 is fixed and the interferer power is increased, again CRs in a smaller area are included in the algorithm. Computation of the optimum value φ opt 0 of the FVC threshold that minimizes the RMSE is presented in Section III-B3. This computation requires the information of p t,k , p i,k , s t and s i . Since the central processing node might not have this information, it computes a suboptimal threshold φ sub 0 as described below. Without the loss of generality, considerφ M 1 ≤φ M 2 ... ≤φ M K . It can be deduced from the analysis presented in Section It should be noted that as the value of φ 0 approaches φ opt 0 , ||L t improved (φ 0 )|| 2 approaches ||L t || 2 . On the other hand, when the value of φ 0 is significantly different from φ opt 0 , ||L t improved (φ 0 )|| 2 takes values that are significantly different from ||L t || 2 . Based on this notion, the central processing node ..φ M K } in two sets: S opt and S non−opt , using kmeans algorithm [29]. This is due to the fact that for φ 0 =φ M K , all the CRs, including those in the vicinity of the interferer are included in the localization which makes the value of ||L t improved (φ 0 )|| 2 differ significantly from ||L t || 2 . The other set obtained by k-means algorithm is then identified as S opt . Further the suboptimal FVC threshold φ sub 0 is computed as the average of {φ 0 : ||L t improved (φ 0 )|| 2 ∈ S opt }. The algorithm steps of the improved Cyclic WCL are described in Algorithm 2. III. THEORETICAL ANALYSIS A. Analysis of Cyclic WCL In order to analyze the Cyclic WCL algorithm, we first formulate the estimates of the x-and y-coordinates of the target in the form of ratios of quadratic forms of a Gaussian vector (RQGV). The RMSE in the target location estimates is computed using the RQGV form. Let us define Cyclic Cross-Correlation (CCC) between signals any two signals u(n) and v(n) at cyclic frequency α t asR In the above equation, (ˆ) indicates the estimate based on N samples and ( * ) indicates complex conjugate. Substituting (6) in (7) and using the above definition, we can write the CAC of the received signal at k th CR aŝ Now, let us define three vectors:θ r ,θ i and p k . 
The vectorŝ θ r andθ i contain real and imaginary parts of CAC and CCC, and p k contains powers received at the k th CR from the target and the interferer. Therefore, we havê Using (15) and (16), we rewrite the estimate of x-coordinate of the target in terms of ratio of weighted sum of a vector norm: Further, we define power matrix P = [p 1 , p 2 , ...p K ] and position matrices X = diag(x 1 , x 2 , ...x K ) and Y = diag(y 1 , y 2 , ...y K ). From P, X and Y, symmetric matrices A x , A y , and B are obtained as shown below: As shown in the Appendix A,x t can be written in terms of as: Similarly, the estimate of the y-coordinate can be written as: The target location estimates in (19) and (20) that contains real and imaginary parts of estimates of CACs and CCCs as defined in (16). The estimates of CACs and CCCs are computed using N samples of corresponding signals and are modeled as Gaussian random variables for a sufficiently large value of N [30, Eqn. (20)]. It follows that the vectorθ is a Gaussian vector and is a function of number of samples observed N . As shown in Appendix B, CACs and CCCs are computed from moments of s t and s i , which in turn are functions of data symbols, a l , b l and pulse shapes g n,l , h n,l for single carrier signals. For OFDM signals, the moments are functions of subcarrier symbols c κ,l , d κ,l , subcarrier spacings ∆f t , ∆f i , and window functions g n,l , h n,l . Therefore, the mean E[θ] and the covariance matrix Σθ of the Gaussian vectorθ are derived in terms of these parameters as shown in Appendix B. The target location estimates in (19) and (20) also depend on the locations of CRs and the power received at the CRs through matrices A x , A y , and B. In order to compute the RMSE, we find the second moments of location estimates E[x 2 t ] and E[ŷ 2 t ]. It should be noted thatx t andŷ t are in the RQGV form with vectorθ. The second moments of RQGV are given in [31,Thm. 6]. To utilize the result presented in [31], the matrix B should be positive semidefinite. By definition, B = diag(PP T , PP T ). Any matrix of the form PP T is positive semi-definite. Since B has PP T on its diagonal, B is also a positive semidefinite matrix. First, we compute Cholesky factorization of Σθ = CC T . Then, the eigenvalue decomposition of C T BC gives the orthogonal matrix V with eigenvectors on its columns and diagonal matrix Λ with eigenvalues on the diagonal. Define a matrix A * = V T C T A x CV and a vector µ t ] is computed, using [31, Thm. 6], as: where, ∆ = (I n + 2tΛ t ] is obtained by replacing A x with A y . From the second moments, we get the theoretical value of the RMSE ( ) using B. Analysis of Improved Cyclic WCL 1) Estimation of Feature Variation Coefficient φ k : As mentioned in Section II-C, in the practical system, the k th CR estimates the value of φ k using M realizations. We denote the estimate of φ k using M realizations byφ M k and it is given where v s and e s are sample variance ofR αt r k and sample mean of |R αt r k | 2 , respectively. The sample variance v s follows the Chi-square distribution, but it can be approximated by the Gaussian distribution for M > 50 [32, pp.118]. We consider M > 50 for simplicity of analysis. The mean (µ vs ) and variance (σ 2 vs ) of v s are given by [33] µ vs = var(R αt r k ) and where Similarly, e s is a Gaussian random variable with mean µ es and variance σ 2 es as follows: , and σ 2 es = var(|R αt The analytical expressions of µ vs , σ vs , µ es , σ es in terms of E[θ], Σθ and p k are presented in Appendix E. 
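As an aside, the second moments of the ratio-of-quadratic-form estimators derived in Section III-A above can also be cross-checked numerically. The sketch below is our own Monte Carlo check under the stated Gaussian model for the vector of CAC/CCC components, not the closed-form route via [31, Thm. 6]; it assumes the covariance matrix is positive definite so that a Cholesky factor exists, and it uses the fact that the target sits at the origin, so the RMSE is the square root of the sum of the second moments.

```python
import numpy as np

def rmse_monte_carlo(mu, Sigma, A_x, A_y, B, n_draws=200000, seed=0):
    """Monte Carlo estimate of sqrt(E[x^2] + E[y^2]) for
    x = th' A_x th / th' B th and y = th' A_y th / th' B th, with th ~ N(mu, Sigma)."""
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu, dtype=float)
    L = np.linalg.cholesky(np.asarray(Sigma, dtype=float))   # requires Sigma > 0
    th = mu[None, :] + rng.standard_normal((n_draws, mu.size)) @ L.T
    denom = np.einsum('ni,ij,nj->n', th, B, th)
    x = np.einsum('ni,ij,nj->n', th, A_x, th) / denom
    y = np.einsum('ni,ij,nj->n', th, A_y, th) / denom
    return float(np.sqrt(np.mean(x ** 2 + y ** 2)))
```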
It should be noted that σ 2 vs and σ 2 es → 0 as M → ∞ since v s and e s are consistent estimators of var(R αt r k ) and E[|R αt r k | 2 ]. We compute the confidence interval forφ M k using the fact that it is a ratio of two Gaussian variables v s and e s . Let β be the required confidence level, i.e., the probability that the true value of φ k lies within the given confidence interval. Further, let C be the center of the confidence interval corresponding to the confidence level β and S be the standard error in the computation of φ k . Then from [34], we get and the confidence interval is C.I. = C ± z β S, where Q = 1 − z 2 β σ 2 es /e 2 s and z β is Student's-t variable corresponding to the confidence level β and the number of realizations M . The value of z β is obtained from standard tables such as [33, Table 8.2]. It should be noted that z β S reduces with if the M is increased. While computingφ M k , the number of realizations M should be large enough to satisfy z β S < δ for a small value of δ. The value of δ is selected as the minimum difference between the FVC at any two CRs, i.e., δ = min for i, j ∈ {1, 2, ...K}, i = j. This value of δ ensures that only CRs for which φ k ≤ φ 0 are included in the algorithm with confidence level β. In other words, if φ k ≤ φ 0 , then we haveφ M k ≤ φ 0 . Other other hand, if φ k > φ 0 , then we havê φ M k > φ 0 with probability β. 2) RMSE in the Improved Cyclic WCL Estimates: As mentioned in the previous section, the improved Cyclic WCL includes only CRs withφ M k ≤ φ 0 , where φ 0 is the FVC threshold. In order to write the estimates of x-and y-coordinates as a function of φ 0 , we introduce a threshold based K ×K diagonal matrix, called selection matrix S 0 . The k th diagonal element of S 0 is 1 ifφ M k ≤ φ 0 and 0 otherwise. In other words, the k th diagonal element of the matrix S 0 is the indicator function The selection matrix is incorporated in previous definitions of symmetric matrices in (18) as The location estimates using the improved Cyclic WCL are given asx It should be noted thatx t (φ 0 ) andŷ t (φ 0 ) are also in RQGV form in the Gaussian vectorθ. Further, in order to show that B is positive semidefinite, we note that PS 0 S 0 T P T is positive semi-definite, if there is at least one non-zero element on the diagonal of S 0 . The number of non-zero elements of the diagonal of S 0 is the number of CRs satisfyingφ M k ≤ φ 0 . If φ 0 is selected such that at least one CR satisfiesφ M k ≤ φ 0 , then PS 0 S 0 T P T is positive semi-definite, which in turn makes B also a positive semi-definite matrix. Therefore, for a fixed value of φ 0 , the RMSE for improved Cyclic WCL is computed in similar way as Cyclic WCL: where E[x 2 t (φ 0 )] and E[ŷ 2 t (φ 0 )] are obtained by replacing A x , A y and B with A x and A y and B , respectively in (21). 3) Optimum FVC threshold: The optimum value of the FVC threshold φ 0 that minimizes the RMSE (φ 0 ) is obtained with the knowledge of A x , A y , B , E[θ] and Σθ. Without loss of generality, let us considerφ M 1 ≤φ M 2 ... ≤φ M K . For φ 0 <φ M 1 , no CR will be used for localization and the target location estimates cannot be computed. Therefore, we must have φ 0 ≥φ M 1 for the improved Cyclic WCL algorithm to work. If the value of FVC threshold is φ 0 =φ M k (1 ≤ k ≤ K), then CRs 1, 2, 3..k will be included in the localization process. It should be noted that the RMSE for any value of φ 0 in the range [φ M k ,φ M k +1 ) remains constant. 
This is because the same k CRs are used for localization if the value of φ 0 is in the given range. Therefore, ). It follows from the above argument that (φ 0 ) has K unique values in the domain of φ 0 ∈ [0, 1] and the unique values We obtain the unique values of (φ 0 ) using (28) The optimum value of the FVC threshold is then given by {φ opt 0 : (φ opt 0 ) ≤ (φ M k ), k = 1, 2..K}. C. Complexity Analysis In this section, we compare the computational complexities of the proposed algorithms, Cyclic WCL and improved Cyclic WCL, with traditional WCL algorithm. We assume that the number of operations (OPS) required for addition, subtraction and comparison is 1, and for multiplication and division is 10, as considered in [3]. The number of OPS required for Cyclic WCL, using (6) and (8), is 21KN + 23K + 36. Similarly, the number of OPS required for improved Cyclic WCL, using (6), Here, ηK is the number of OPS required to obtain φ sub 0 using k-means clustering. The computational complexity of the kmeans algorithm is O(K) [35]. Therefore, we assume ηK is the number of OPS required for k-means clustering. Finally, the WCL algorithm in [3] is a special case of Cyclic WCL with α t = 0. The number of OPS required for WCL is 11KN +23K +26. Therefore, the computational complexities of WCL and Cyclic WCL are O(KN ), while that of improved Cyclic WCL is O(M KN ). IV. SIMULATION RESULTS AND DISCUSSION We consider a CR network with K CRs located in a square shaped area of size 100m x 100m. The path-loss exponent γ = 3.8 and noise PSD is N 0 = −174dBm/Hz. Initially, we show results with single carrier 4-QAM signals with carrier frequency f t = f i = 2.4GHz. The symbol rates of the target and the interferer signals are α t = 20MHz and α i = 25MHz, respectively. The sampling frequency is f s = 200MHz. The number of samples N is selected such that N > 10 fs ∆α = 400. First, improved Cyclic WCL is studied in Section IV-A. Then, we compare the performances of improved Cyclic WCL and Cyclic WCL in Section IV-B in the absence of shadowing (σ q = 0dB). The comparison between the traditional WCL, Cyclic WCL and the improved Cyclic WCL under the shadowing environment is presented in Section IV-D. This section also shows results with OFDM signals. Finally, simulation results with multipath fading channels are shown in Section IV-E. A. Performance of the improved Cyclic WCL In improved Cyclic WCL, the number of realizations M are computed using confidence interval as described in Section III-B1. Further, following the analysis in Section III-B2, the RMSE in the localization estimates is obtained. Note that above analysis holds for given fixed locations of the CRs, the target and the interferer. Therefore, in order to evaluate the performance of the improved Cyclic WCL, first we show the simulation results for fixed locations of the CRs in Section IV-A1. In Section IV-A2, we consider uniformly distributed CRs in the network and compute the average RMSE over 1000 iterations. [20,20], respectively. The number of realizations M required to obtainφ M k are computed with confidence level β = 0.9 and δ = 0.01. For the given parameters, it was observed that z β S < 0.01 for M ≥ 60, therefore M = 60 realizations are used to computeφ M k . Further, the optimum threshold φ 0 was obtained as discussed in Section III-B3. In the scenario considered here, φ opt 0 = 0.09 and the corresponding RMSE is (φ opt 0 ) = 0.03744. The impact of selecting different FVC thresholds on the error is shown in Fig. 3. 
The y-axis in this figure also represents ||L timproved (φ 0 )|| 2 since the target is located at the origin. In this particular case, the k-means clustering results in ||L timproved (φ 0 )|| 2 ∈ S opt for 0.08 ≤ φ 0 ≤ 0.57. Further, the suboptimal threshold φ sub 0 = 0.16 and the corresponding RMSE is (φ sub 0 ) = 0.03815. Hence, the localization error increased by only 0.0007m if suboptimal threshold is used in this setting. 2) Performance of improved Cyclic WCL with uniformly distributed CRs: Now we consider that the CRs are uniformly distributed in the network. The average RMSE of 1000 realizations of CR locations is plotted against the transmit power ratio (ρ) in Fig. 4. The figure compares performance of the algorithm with optimal (φ opt 0 ) and suboptimal (φ sub 0 ) FVC threshold. As described in Section II-C, unlike φ opt 0 , Anal. Sim.: Interferer @ [10,10] Sim.: Interferer @ [20,20] Sim.: Interferer @ [30,30] Sim.: Interferer @ [40,40] (b) Improved Cyclic WCL does not require the knowledge of p t,k , p i,k , s t and s i . From Fig. 4, it can be observed that the performance of the improved Cyclic WCL with suboptimal FVC threshold is comparable to the performance with optimal threshold. The suboptimal threshold results in increased error of at most 1m for transmit power ratio ranging from 10 dB to −40dB. Therefore, the knowledge of the transmit powers of the target and the interferer and their signals is not necessary for satisfactory performance of the proposed coarse-grained localization algorithm. B. Comparison between Cyclic WCL and improved Cyclic WCL In this section, the impacts of the location of the interferer, CR density and the number of samples N on Cyclic WCL and improved Cyclic WCL are studied. In each scenario, the algorithm is simulated for 1000 realizations with uniformly distributed CRs. The average RMSE over 1000 realizations is plotted. The improved Cyclic WCL is implemented with suboptimal thresholdφ sub 0 . 1) Impact of the location of the interferer: The RMSE in the target location estimates for different locations of [20,20]. the interferer is shown in Fig. 5. The results shown in the figures are counter-intuitive, since the interferer located further away from the target causes higher error as compared to the interferer located closer to the target, especially with increased interferer power. This is due to the fact that at high interferer power, the centroid of |R r k | 2 is closer to the interferer. Further, if the interferer location is away from the target, the centroid and hence the target location estimates move away from the target and closer to the interferer. This phenomenon results in increased error as seen in the figure. For a fixed position of the interferer, it is observed that the RMSE increases with higher interferer power, since the impact of the interferer on the centroid of |R r k | 2 becomes more prominent. As shown in Fig. 5a and 5b, the RMSE in improved Cyclic WCL is significantly lower than RMSE in Cyclic WCL. Further, the RMSE for different locations converge at ρ = 10 dB for Cyclic WCL, and at ρ = −20dB for improved Cyclic WCL. Therefore, improved Cyclic WCL provides more robustness against interferer's power and location. 2) Impact of CR density: The impact of increased CR density in the network on the localization error is shown in Fig. 6. It has been observed that increasing the number of CRs from 10 to 200 at ρ = 10dB reduces the error by 9m, as shown in Fig. 6a. 
However, at ρ = −40 dB increasing the number of CRs reduces the error by only 0.75m. This is due to the fact that increasing K increases the number of CRs in the vicinity of the interferer as well which contribute to increased error at higher interferer power (ρ = −40dB). Therefore, any gain obtained by increasing the CR density is compensated by increased interferer power which results in essentially a flat curve at ρ = −40dB. Further, the localization error in improved Cyclic WCL, as shown in Fig. 6b, decreases by approximately 5m if K is increased from 10 to 50 for all values of ρ, while the error is reduced by approximately 1m if K is increased from 50 to 200. Therefore, the performance of the proposed algorithm changes only by a small amount if K ≥ 50, irrespective of the interferer power. 3) Impact of number of samples N: In the Cyclic WCL, non-asymptotic estimate of the CAC of the received signal (7) is used to compute weights for each CR location. The nonasymptotic estimates are based on N samples of the received signal r k . Therefore, performance of the Cyclic WCL and the improved Cyclic WCL depends on the value of N . The impact of the value of N on the performance of the algorithm is shown in Fig. 7. It is observed that increasing the number of samples reduces the error in Cyclic WCL. For example, the error is reduced by up to 5m when N is increased from 500 to 5000. With increased value of N , the estimateR αt r k approaches the true value of R αt r k resulting in lower value of the error. On the other hand, the error in the improved Cyclic WCL does not change significantly with increased N as shown in Fig. 7b. Therefore, the performance of improved Cyclic WCL is independent of number of samples N as long as N > fs ∆α . C. Impact of imperfect knowledge of α i In both Cyclic WCL and improved Cyclic WCL, the first step is to select the number of samples N based on ∆α = |α t − α i |. In this section, the cyclic frequency of the interferer α i is varied from 22MHz to 50MHz. In order to compute N , we assumeα i = 25MHz. The number of samples used are N = 500, which satisfies N > 10 fs ∆α = 400, where ∆α = |α i − α t | = 5MHz. The impact of the imperfect knowledge of ∆α is shown in Fig. 8. If ∆α < ∆α = 5MHz, the number of samples do not satisfy the condition N > 10 fs ∆α , which results in higher interference component in the CAC. This phenomenon results in higher error in both Cyclic WCL and improved Cyclic WCL. On the other hand, if ∆α > ∆α = 5MHz, then the error is reduced and the improvement in the performance of the Cyclic WCL depends on the transmit power ratio (ρ). Further, it can be observed that the performance of the improved Cyclic WCL remains the same for different ρ and ∆α in the regime ∆α > ∆α = 5MHz. Therefore, the improved Cyclic WCL is robust to both interference power and error in the estimation of α i . [20,20]. D. Comparison between traditional WCL, Cyclic WCL and improved Cyclic WCL In this section, we compare the performances of the traditional WCL, Cyclic WCL and the improved Cyclic WCL in shadowing environment. It is observed that, even in shadowing environment (σ q = 6 dB), the error in Cyclic WCL is smaller than traditional WCL, as shown in Fig. 9. In the case of improved Cyclic WCL, the error is reduced by a factor of three as compared to the traditional WCL for ρ = −40dB. Further, it has been observed that Cyclic WCL algorithms are robust to shadowing. 
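As a small worked example of the sample-size condition used throughout this section, reading the bound as N > ⌈10 f_s/Δα⌉, the simulation parameters f_s = 200 MHz, α_t = 20 MHz, and α_i = 25 MHz give a bound of 400 samples, consistent with the choice N = 500 above. The helper name and the handling of the strict inequality below are our assumptions.

```python
import math

def min_samples(fs_hz, alpha_t_hz, alpha_i_hz, factor=10):
    """Smallest integer N with N > ceil(factor * fs / |alpha_t - alpha_i|)."""
    delta_alpha = abs(alpha_t_hz - alpha_i_hz)
    return math.ceil(factor * fs_hz / delta_alpha) + 1

print(min_samples(200e6, 20e6, 25e6))   # 401, so the choice N = 500 satisfies the bound
```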
For example, in improved Cyclic WCL, the error has increased by only 2m when shadowing variance has increased four fold from σ q = 0dB to σ q = 6dB. This is due to the fact that the shadowing effect over K CRs averages out in the WCL algorithms. Next, we show the performance of proposed algorithms with OFDM signals. In these results, s t is a WLAN signal with 64 sub-carriers, and sub-carrier spacing ∆f t = 312.5kHz. Sim. Sim. Anal. Anal. Anal. in improved Cyclic WCL in Fig. 9 and 10 shows that the proposed algorithm performs equally well under OFDM and single carrier signals. E. Performance under multipath fading channels The three algorithms are also studied under multipath fading channels suitable for indoor and outdoor settings. For indoor setting, TGn channel model from WLAN standard is used [36]. This channel model has delay spread of 40ns and Doppler spread of 10Hz. For outdoor setting, extended typical urban (ETU) channel model is considered with 5µs delay spread and 300Hz Doppler spread [37]. This is a commonly used model in LTE cellular system. The channel coefficients h t,k and h i,k are obtained from the aforementioned channel models, where h t,k is the channel between the target and the k th CR, while h i,k is the channel between interferer and k th CR. The received signal is then obtained as r k = √ p t,k [h t,k * s t ] + √ p i,k [h i,k * s i ] + w k , where * denotes the convolution operation. Further implementations of the three algorithms remains the same as in AWGN channel. The localization errors under the two multipath channels are shown in Fig. 11. As in AWGN case N = 100 samples are used at sampling rate 500kHz for localization. It has been observed that the localization errors in all three algorithms increase as compared to AWGN channel. This increase in error can be explained as follows. The signal observation interval at each CR is 100/500kHz = 0.2ms, which is smaller than the coherence time of TGn and ETU channels. Therefore, the received power at each CR in this duration is affected by the small scale fading and the impact of fading is not averaged out as the observation duration is smaller than the coherence time. This leads to increased localization errors. However, we can observe that the localization error in WCL and Cyclic WCL is increased by up to 4m. On the other hand, the error in improved Cyclic WCL increases by up to 2m only. Therefore, we can conclude that the proposed improved Cyclic WCL more robust against multipath fading channels and continues to provide significant performance gain over the WCL algorithm. Fig. 11: RMSE in WCL, Cyclic WCL, and improved Cyclic WCL with different channel models. Signal parameters are same as in Fig. 10. V. CONCLUSION In this paper, we have proposed the Cyclic WCL algorithm to mitigate the adverse impact of a spectrally overlapped interference in CR networks on the target localization process. The proposed algorithm uses cyclic autocorrelation of the received signal in order to estimate the target location. Theoretical analysis of the algorithm is presented in order to compute the error in localization. We have also proposed the improved Cyclic WCL algorithm that identifies CRs in the vicinity of interferer and eliminates them from the localization process to further reduce the error. We have studied impacts of interferer's power, its location, and CR density on performance of the proposed algorithms. The improved Cyclic WCL is observed to be robust against interferer's location and its transmit power. 
The comparison between the traditional WCL and the improved Cyclic WCL shows that the proposed algorithm provides significantly lower error in localization when there is a spectrally overlapped interference in the network. It has been observed that the improved Cyclic WCL is also robust against shadowing and multipath fading environment. The proposed localization algorithm performs equally well with single carrier as well as OFDM signals. APPENDIX A PROOF:x t IS A RATIO OF QUADRATIC FORM INθ From (17), we havê After rearranging the terms and taking the summation operator inside, we havê Further, using the definitions of P and X from (18), we get with cos(2πα t nT s ) cos(2πα t mT s ), sin(2πα t nT s ) cos(2πα t mT s ), cos(2πα t nT s ) sin(2πα t mT s ) and sin(2πα t nT s ) sin(2πα t mT s ), respectively. Therefore, only the computations of the form E[R αt uR αt * v ] are shown. A. Moments of CACs and CCCs of signals s t and s i The moments of CACs and CCCs are computed using the moments of s t and s i , as shown later in this section. Therefore, the moments of s t and s i are computed first. Since a l s and c κ,l zero mean, we have E[s t (n)] = 0 under both single-and multi-carrier models. In single carrier signal s t , the second moments of s t are Further, the subscript l is dropped after taking the expectation in the derivations, since a l s and c κ,l are i.i.d. for all l. In order to compute E[|s t (n)| 2 |s t (m)| 2 ] for single carrier signals, note that |s t (n)| 2 = ∞ k=−∞ ∞ l=−∞ a k a * l g n,k g n,l and |s t (m) l a p a * q ]g n,k g n,l g m,p g m,q . Since a k s are i.i.d. and zero mean, we have: (30) Similarly, the moments are computed for s i (n) using single carrier and OFDM signal models. From (30) and (32), first and second order moments of the estimates of the CAC are obtained as follows: Further, the cross correlation between estimates of CACs of s t and s i is Similarly, the mean of the CCCs between s t and s i is The above equation follows from the fact that s t and s i are independent and zero mean signals. Ze −j2παt(n−m)Ts . In the equations (33)- (37), the required moments of s t and s i are obtained using (30) and (31). The moments related to the CAC of the noise are obtained using (33)-(36) by substituting s t and s i by w and are also derived in [28]. The moments are as follows: In order to compute E[R stwR * stw ], we substitute s i by w in (37) and use Then we have, (40) 14 The above equation follows from the fact that, for m = n, Y = 0, and for m = n, Y = where N is the length of the rectangular window. It should be noted that f s = 1/T s > max(α t , α i ) to avoid aliased components in H (α, N, T s ). Thus, (41) can be expressed as E[R αt si ] = E[|b| 2 ]H (α t , N, T s ). In order to find a condition on N such that the interference caused by h (nT s ) at α t is negligible, we consider only the first spectral line located at α i = 1/T h . If the power of this spectral component is reduced to a negligible amount at α t , the remaining spectral components (at l/T h , l > 1) will have even less power α t and will not cause significant interference. Then, (41) can be written as: where ∆α = |α t − α i | and H (α i ) is Fourier series coefficient = v k e k = ρ 2 k v t + e i + ρ k e ti ρ 2 k e t + e i + ρ k e ti .
2017-02-22T01:54:44.088Z
2016-06-01T00:00:00.000
{ "year": 2019, "sha1": "3781a58a6ef187f073c1bf0673e7d67b638f71a6", "oa_license": "publisher-specific, author manuscript", "oa_url": "https://doi.org/10.1109/tccn.2016.2586078", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "3781a58a6ef187f073c1bf0673e7d67b638f71a6", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Engineering", "Computer Science" ] }
225363527
pes2o/s2orc
v3-fos-license
Smart campus: a user case study in Hong Kong Smart campus, as a high-end form of a smart education system and a mini-scopic version of a smart city, has received increasing research and attention globally. The existing smart campus concepts are mostly technology-driven, which simply introduces interconnection from a technological perspective to serve its residents but not necessarily adhering to the needs and interests of stakeholders in such a community. To fill this gap, this study presents a human-centred approach for smart campus design and development, where a user case survey study is undertaken in Hong Kong primary and secondary schools. The overall aim of the survey is to accurately and timely capture and understand the perspectives of school stakeholders on education applications in the context of the smart campus. The findings from survey analysis are presented, with insights and suggestions for future smart campus development provided. The findings in this study are also expected to result in a benchmark reference of the smart campus concept for international educational providers, government, and technology companies that will deliver smart solutions. Introduction Many places in the world are undergoing tremendous revolution towards smart cities, where citizen's daily life is increasingly penetrated with 'smart' things, ranging from small devices, such as smart watches, smartphones, and smart meters, to large systems, such as smart homes, smart buildings, and smart grids [1,2]. Among them, a smart campus, as a high-end form of a smart education system and also a mini-scopic version of a smart city, connects the components and users through the common information-based platform to intellectualise learning, teaching, research, management, and campus life. In recent years, the smart campus has received increasing research and attention globally, and there are extensive literatures focusing on defining and envisaging smart campus. For example, a vision on an intelligent campus is provided in [3]. The relationship between smart learning and smart city is investigated in [4]. A smart university taxonomy is identified in [5] with its main features, components, technologies, and systems. A technological smart campus architecture is proposed in [6] to provide both basic and value-added smart services. The realisation of a smart campus in a large-scale university is presented in [7]. It is observed that most of the state-of-the-art smart campus concepts are technologydriven, which simply introduces interconnection from a technological perspective to serve its residents but not necessarily addressing the needs and interests of stakeholders in such community. The survey serves as an effective and efficient tool for subjective information collection and analysis, which has been widely adopted to guide future smart campus development around the world. For example, in [8], a questionnaire survey is conducted on university principals and senior assistants in Malaysia to investigate the impact of information and communication technology (ICT) on smart school management. A survey is conducted in [9] to investigate the factors and driving mechanism of learner's technology engagement towards ubiquitous gamebased learning in the smart campus context. A survey at 13 Dutch universities is conducted in [10] to explore the use of smart campus tools to improve space use on campus. 
The factors of teachers' acceptance and concerns on using smart mobile devices in their lessons are investigated in [11] through a user-case survey in South Korea. In [12], a questionnaire survey aiming at understanding the stakeholders' perceptions of various smart campus applications is taken at the American University of Sharjah, UAE. The survey studies in the above literature focus on specific smart campus functions, such as school management, mobile-based learning, ubiquitous game-based learning, campus space use etc., but lack an integral view on smart campus design and development. Some campus-level survey studies have also been implemented recently. For instance, the survey study in [13] examines the impact of the iCampus pillars on schools' key performance indicators and investigates the schools' plans for implementing iCampus solutions. In [14], a survey study is conducted at Czech Technical University in Prague and Thammasat University in Thailand to learn about students' preferences and perceptions on the smart campus concept. User surveys at a public university in the United Kingdom are employed in [15] to gain insight into the possible role of user experiences and data in making a smart campus. Overall, these survey-based projects mainly focus on gathering information about technological progress to guide future campus development. In the literature, there are many technologies deployed in smart campus, such as internet of things (IoT) [16], cloud computing [17], augmented reality (AR) [18,19], and artificial intelligence (AI) [20][21][22], and even the latest 5G technology [23], and most survey-based research presumes that smart campuses are technology-driven systems. In fact, in a smart learning environment, although technology must play an important role, the human involved is found to be the heart of such an environment [24]. Therefore, there is a research gap that such technological innovation does not necessarily transfer into benefits to enhance user experience and learning outcomes for a smart campus unless those technologies are carefully selected and integrated into the smart campus system from users' perspective. To fill this gap, we are inspired by the human-centred design (HCD) concept [25,26] that is widely used for technological product development in the early phase [27]. HCD is also considered suitable in smart campus design and development for the following reasons: (i) smart campus involves dense interplay between new technologies and humans; and (ii) smart campus is still at its conceptual stage and there is still much space for its design improvement. HCD is an iterative process involving the understanding of stakeholders' context, identification of stakeholders' requirements, designing and development of solutions, and evaluation of the outcome against stakeholders' requirements. This iteration process continues until the evaluation results are satisfactory. It integrates the use of a mixture of investigative tools (e.g. surveys) and generative methods (e.g. brainstorming) to realise a pervasive approach that can not only develop an understanding of stakeholders' needs but also help achieve a seamless adaption of technology in stakeholders' campus life. Serving as an essential step in the HCD of smart campus, a user-case survey is provided in this paper to study the practical stakeholders' needs of smart technologies in primary and secondary schools. 
This survey is undertaken in Hong Kong (HK), which is a modern metropolitan city of innovation and infrastructure ready for smart campus role out. The overall aim of the survey is to capture and understand the perspectives of school stakeholders on learning and education applications in the context of the HK smart campus. This paper presents the findings from survey analysis and provides insights and suggestions for future smart campus development in HK. The findings in the survey are expected to result in a benchmark reference of the smart campus concept for international educational providers, government, and technology companies that will deliver smart solutions. The main contributions of this paper are summarised as follows: (i) Unlike the technology-driven smart campus concept in the literature, the survey in this paper adheres to the HCD concept using a pervasive approach. It helps understand how technologies have been working properly on stakeholders and what technologies could be applied in alternative ways to achieve better user experience in a smart campus. (ii) Most data collection in the literature merely targets on tertiary education, whereas few of them on the primary or secondary education institutes. Since primary and secondary education formulates the foundation for children and teenagers, it is also essential to collect and study the smartness needs of these schools. The user-case survey in this paper is conducted on HK primary and secondary schools, which helps expand and generalise the smart campus concept into different education levels. (iii) The recommendations provided in this paper are centric on social and human components, which contribute to a real guide for effective smart campus development and serves as a valuable case to figure out the motivation and guideline for continuing international research in this area in the next stage. The rest of the paper is organised as follows. The HCD of the smart campus is briefly introduced in Section 2; the current technology deployment in HK education system is investigated in Section 3; the user-case survey is presented in Sections 4, 5, and 6, respectively, for survey methods, results, and recommendations; and Section 7 concludes the paper. Concept of human-centred smart campus In recent years, the smart campus has attracted worldwide attention, and there are extensive literatures focusing on defining and envisaging smart campus. However, most smart campus concepts are technology-driven, and the main stakeholders of education including teachers, students, principals, and parents are not necessarily the focus. As reflected by the HCD concept, we need to re-examine the role of technology in the smart campus revolution and make sure the provision of intelligent services is centred on human factors. The smartness of the future campus tends to shift from technology-led to learner-centric and further into human-centred, highlighting the educational needs of all the people involved in the education system. In our previous work [28], we highlighted the HCD of smart campus and defined smart campus as an educational environment that is penetrated with enabling technologies for learning-oriented services to enhance educational performance while meeting stakeholders' interests, with broad interactions with other interdisciplinary domains in the smart city context. From an integral point of view, a smart campus is an educational institution involving various stakeholders. 
The main stakeholders of smart campuses typically include students, teachers, parents, and school management teams, who all take different roles. Based on their different needs and contributions, stakeholders' expectations of campus smartness will also differ [3]. Integrating the HCD concept into smart campus development requires service providers to capture and understand stakeholders' needs and interests in an accurate, timely, and coordinated manner, which motivates the user-case survey study in this paper. Technology deployment in HK education system HK has shown strong technological deployment, ready for the smart transition. In the 21st century, the HK government and enterprises are utilising the power of information technology (IT) to strengthen and facilitate smart city development [1,2]. The Digital 21 Strategy, released in 1998, sets out the blueprint for the overall development of HK's information and communications technology, leading the government, society, business, industry, and academia to work together towards the goal of moving HK to the forefront of global IT development. Since then, to stay in line with ever-changing technological development and social needs, this strategy has been revised four times, in 2001, 2004, 2008, and 2014 [29]. HK has so far made considerable progress in its digitalisation. For instance, according to the IMD World Competitiveness Yearbook [30], HK ranked first in technology infrastructure in 2012 and 2013. The internet connection speed and the broadband and mobile phone penetration rates in HK (85% and 231%, respectively) are also among the highest in the world. The newest Digital 21 Strategy promotes the theme of 'Smart HK, Smart Living' to create a vibrant information environment for HK. With the widespread IT deployment and development, the HK education system has also experienced a significant revolution in recent years to maximise its benefits. The HK government has invested over $10 billion in IT in education (ITEd) and other e-learning initiatives since the 1998/99 school year. The three ITEd strategies implemented so far have brought significant progress in schools' IT infrastructure, e-learning resources, teachers' professional capacity, and students' digital literacy. Building on the previous advantages and experience of the strategies on ITEd, the HK Education Bureau launched the newest Fourth Strategy on ITEd (ITE4) in 2014/15, which is formulated to unleash the learning power of all students through realising the potential of IT in enhancing interactive learning and teaching experiences. ITE4 has a profound impact on the overall development of school education and emphasises the technological readiness of HK to start a smart campus revolution. The strategy states that with the popularity of mobile devices (e.g. smartphones and tablets) and the rich web information in future smart campuses, students can study without the restriction of time and space, meaning that teaching/learning is not limited to classrooms or restricted by class schedules and designated textbooks. Learning tends to be more autonomous, collaborative, and humane. As a key field in the HK smart city plan, the construction of smart campuses has attracted great attention from governments and enterprises.
High IT penetration in HK primary and secondary schools has been achieved over the last two decades, with significant progress in schools' IT infrastructure, e-learning resources, teachers' professional capacity, and students' digital literacy. This means the enhanced technology in the HK education sector has laid the foundation for the next generation of the smart campus. Although the HK government and relevant associations have made a significant investment in the education sector to strengthen the use of ICT in teaching and learning, schools in HK still have technological limitations with regard to the transition from digitalised education to smart education. The limitations can be summarised in the following areas: (i) Lack of a sophisticated framework or plan for how to construct a sustainable human-centred smart campus. (ii) Limited emerging technologies (e.g. IoT and AI) to support campus smartness. (iii) Difficulties for most older teachers and non-science, technology, engineering and mathematics (non-STEM) teachers in acquiring the knowledge and skills required to handle the new technologies and pedagogies. (iv) Concerns about cybersecurity and privacy. Given the current technological advancements and limitations in the HK education system, further collaboration among the government, the Education Bureau, school sponsoring bodies, individual schools and relevant departments is expected to fulfil stakeholders' needs, realise the further potential of smart technologies, and achieve greater success in quality HK education. Survey methods The survey is performed through an offline invitation and an online questionnaire-based survey. This section presents the respondent statistics and the design method of the survey. Respondent statistics A total of 34 primary and secondary schools in HK were invited to join the survey. The participating schools come from different well-established school sponsoring bodies (SSBs), indicating the survey is broadly distributed across HK schools to minimise biases in the survey results. The user group of the survey focuses on the academic staff and parents of primary and secondary schools in HK. For each participating school in HK, the principal, five teachers, and five parents were invited by the school (11 participants in total) to complete the survey. As a result, the user groups of this survey include principals, STEM teachers, non-STEM teachers, and parents, all of whom are key stakeholders of the smart campus. The primary and secondary schools in HK are mainly divided into four types based on their funding sources. They are government schools, aided schools, direct subsidy scheme (DSS) schools, and private schools. A brief introduction of each school type is provided below: • Government schools are funded by the government and directly managed by the HK Education Bureau. All the teaching staff of government schools are civil servants. All government schools have no religious background, and most of them are coeducational. • Aided schools, also known as 'subsidised schools', are non-profit organisations receiving government subsidies to provide free education. The vast majority of the education funding of aided schools comes from the government, while management is the responsibility of the school's board of directors. Most schools in HK fall into this category, accounting for >70% of all schools. • DSS schools are entitled to government subsidies based on the number of eligible students in the school, but tuition fees may still be charged.
Compared with government and aided schools, DSS schools have relatively high autonomy and can customise their courses and admission requirements. • Private schools are self-funded, run by their school sponsoring organisations and managed by the school board. Private schools do not receive government subsidies, and tuition fees are relatively high. Private schools may determine their own enrolment policies. These four types of schools account for >90% of HK primary and secondary schools, which ensures a reliable coverage of the school funding types in HK. The distributions of each school funding type in HK and in the invited schools are shown in Fig. 1. It can be seen that the proportion of each type in the invited schools is consistent with their actual distribution in HK, which avoids producing significant statistical biases in the survey results. Design method A quantitative, web-based questionnaire survey was used for data collection. As reported in [31], the differences between the reliability of web-based surveys and traditional paper-based surveys are insignificant. Moreover, due to the convenience in handling multiple questionnaires and processing data, a web-based approach was adopted on the Qualtrics platform [32]. The questionnaire consists of 49 questions, where some of the questions are separately designed for school staff and parents due to their differing experience and roles in smart campus operation and development. Among the 49 questions, 44 and 40 questions were presented to school staff and parents, respectively. The questions are in various forms, including 'single selection', 'multiple selection', 'ranking questions', 'Likert scale', and 'scenario-based questions'. An 'others, please specify' option is provided for most 'single selection' and 'multiple selection' questions to collect open-ended data as a supplement to those questions. A 'Do not know' option was also provided for most questions to eliminate random guessing by respondents. System data cleaning and manual data polishing were performed to detect and correct any corrupted, incorrect, and invalid data entries. For example, for the 'others, please specify' option, if the textual response actually duplicates any other option already provided in the question, manual data entry correction is required. This questionnaire-based survey mainly involves quantitative data, so various statistical methods were adopted in its data analysis. The collected data were analysed based on user groups (i.e. principals, STEM teachers, non-STEM teachers, and parents), school education levels (i.e. primary schools and secondary schools), and school funding types (i.e. government schools, aided schools, DSS schools, and private schools). The questions were separately designed around the following aspects to frame a human-centred smart campus: • Demography • Knowledge on smart campus • Current school smartness • Stakeholder expectation and school fulfilment • Acceptance of smart learning technologies (SLTs) • Cybersecurity and privacy Different descriptive statistics were computed according to the different question types. For single-selection questions, counts and percentages were computed; for multiple-selection questions, counts, percentages, and the counts-to-respondent ratio (CTRR) were calculated; for closed-ended questions, arithmetic means were computed as the score to quantify the tendency; and for ranking questions, mean ranks were calculated as the ranking index of each item to be ranked.
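The descriptive statistics listed above could be computed along the following lines; the pandas-based sketch below uses invented column names and toy responses rather than the actual survey data, and is only meant to illustrate the per-question-type summaries (including the CTRR), not the authors' exact pipeline.

```python
import pandas as pd

# Hypothetical responses from four respondents; the real data were collected
# via Qualtrics and contain many more rows and questions.
df = pd.DataFrame({
    "user_group": ["principal", "STEM teacher", "non-STEM teacher", "parent"],
    "q_single":   ["yes", "no", "yes", "do not know"],            # single selection
    "q_multi":    [{"card reader", "QR code"}, {"QR code"},
                   {"none of the above"}, {"card reader", "video surveillance"}],
    "q_likert":   [2, 3, 1, 4],                                   # 1-7 scale
    "q_rank_A":   [1, 2, 1, 3],                                   # rank given to item A
})

# Single selection: counts and percentages per option.
counts = df["q_single"].value_counts()
percentages = 100 * counts / counts.sum()

# Multiple selection: counts-to-respondent ratio (CTRR),
# i.e. the mean number of selected items excluding 'none of the above'.
ctrr = df["q_multi"].apply(lambda s: len(s - {"none of the above"})).mean()

# Likert-type question: arithmetic mean as the tendency score.
likert_score = df["q_likert"].mean()

# Ranking question: mean rank as the ranking index (smaller = more important).
rank_index_A = df["q_rank_A"].mean()

print(counts, percentages, ctrr, likert_score, rank_index_A, sep="\n")
```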
Based on the above statistical quantities, Kruskal-Wallis tests [33] were performed to investigate the significance of differences in the data, and Spearman tests [34] were performed to examine the significance of the correlation between respondents' knowledge level and their attitudes towards the smart campus. The Kruskal-Wallis and Spearman tests were chosen because they are non-parametric methods that do not require assumptions about the underlying data distribution. Demography Demographic information reflects the characteristics of the various respondents. Collecting demographic information prior to stepping into the main questionnaire is essential for evaluating the reliability of the obtained dataset. The demographic distribution of the 162 respondents is shown in Fig. 2, where the stakeholder types, education levels, and funding types are displayed separately. Regarding the stakeholder types, principals represent the smallest group, which is due to the small number of principals in each participating school. The numbers of STEM teachers, non-STEM teachers, and parents occupy more balanced shares among the rest of the respondents. Regarding education levels, due to the voluntary nature of the survey, the shares of respondents from primary schools (60%) and secondary schools (40%), as shown in Fig. 2b, are close to the ratios among participating schools (i.e. 58.8% for primary and 41.2% for secondary schools), which represents a reliable distribution for statistical survey purposes. Regarding school funding types, the overall distribution of the four funding types among respondents (as shown in Fig. 2c) follows the distribution among participating schools shown in Fig. 1. The share of DSS schools among respondents is higher than among participating schools, which suggests that the staff in DSS schools were more active in participating in the survey. Overall, the demographic information indicates the reliability of the collected data, and to a large extent, the findings from the data should reflect the real circumstances in the HK primary and secondary education sectors. Knowledge on smart campus Understanding stakeholders' level of knowledge about the smart campus is important as it affects their perceptions of, and attitudes towards, the smart revolution in education. Such knowledge can be interpreted as how familiar the respondents are with the smart campus, and four knowledge levels are defined for the respondents to choose from. They are '1 - never heard of it', '2 - general knowledge (e.g. read about it online, heard about it on the news etc.)', '3 - have done some research on smart campus', and '4 - have engaged with the implementation of some smart functions on your campus'. By assigning a numerical value to each knowledge level, the qualitative information about respondents' knowledge can be analysed in a quantitative way, where a higher value represents a more knowledgeable respondent. The overall distribution of the responses is shown in Fig. 3. Most respondents selected levels 1 and 2, and the mean knowledge level is 2.12, which indicates a lack of knowledge about the smart campus among stakeholders. This underlines the need to reinforce training on and advocacy of the smart campus in HK. To further investigate the respondents' knowledge across different demographic indices, the distribution with respect to user groups, school education level, and school funding types is collected, with statistical results presented in Table 1.
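As a minimal, hypothetical illustration of the two non-parametric tests described above (the group sizes and scores below are invented, not drawn from the survey data), scipy provides both tests directly.

```python
from scipy import stats

# Hypothetical knowledge levels (1-4) grouped by user group.
principals        = [3, 4, 2, 3]
stem_teachers     = [2, 3, 3, 2, 4]
non_stem_teachers = [1, 2, 2, 1, 2]
parents           = [1, 2, 1, 2, 2]

# Kruskal-Wallis H-test: do knowledge levels differ across the four groups?
h_stat, p_kw = stats.kruskal(principals, stem_teachers, non_stem_teachers, parents)

# Spearman rank correlation: is knowledge level related to an attitude score
# (e.g. a 1-7 acceptance level, where lower means higher acceptance)?
knowledge  = [1, 2, 2, 3, 4, 1, 3, 2]
acceptance = [5, 4, 3, 2, 1, 6, 2, 3]
rho, p_sp = stats.spearmanr(knowledge, acceptance)

print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.3f}")
print(f"Spearman: rho = {rho:.2f}, p = {p_sp:.3f}")
```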
In terms of stakeholders, principals are more knowledgeable about the smart campus than the other three groups, which is reasonable as the leadership role of principals requires them to be more forward-looking about cutting-edge concepts. Moreover, STEM education encourages the fusion of emerging technologies and pedagogies. The higher knowledge of STEM teachers implies a potential interplay between STEM education and the future smart revolution. In terms of school education levels, staff and parents of secondary schools are generally more knowledgeable than those from primary schools, which encourages smart campus promotion in primary schools. In terms of school funding types, DSS schools and private schools show significantly higher knowledge about the smart campus than government schools and aided schools, which shows a positive correlation between government funding flow and users' understanding of the smart campus. Current school smartness Given the high penetration of ICT in the HK education sector, a certain level of smartness has been achieved in some HK schools. This part of the survey captures the current state of smart function deployment in HK primary and secondary schools, providing a database that indicates the sufficiency of support for smart campus development. In the survey, the smart features in schools are divided into five categories: smart sensing infrastructure (SSI), smart environment and resource management (SERM), energy sustainability (ES), SLTs, and smart pedagogies (SPs). For each category, multiple items are listed in the form of multiple-selection questions for the respondents to answer. The scope of each category is as follows. Sensing infrastructure serves as the basis for IoT-based smart functions in a smart campus. By deploying sensors to monitor the campus environment and personnel in real time, the context-aware feature of the smart campus can be achieved. This category includes various sensing technologies as items, such as card readers, quick response (QR) codes, video surveillance, facial recognition, environment sensing etc. SERM refers to the remote and efficient management of the space, physical environment, energy, and waste in the context of the smart campus, which helps maintain a comfortable and convenient campus environment for stakeholders. Owing to recent concerns about climate change and the depletion of fossil fuels, ES is a significant objective in smart campus development. In this category, the different sustainable energy resources, such as solar energy and wind energy, used to operate the campus are surveyed. SLTs include emerging technologies to improve teaching and learning. They are learning-oriented and are essential in achieving smartness in schools. Some examples are AR, virtual reality (VR), cloud computing, and AI. Along with the rapid technological development, innovative pedagogical approaches, namely SPs, have also been developed recently to enhance the learning experience for students. In this smart pedagogy category, some typical examples are personalised learning that can meet the interests of each individual student, remote learning that enables ubiquitous learning for students, and collaborative learning that encourages cooperation and inspiration among students, teachers, and schools. The overall survey results on current school smartness show a low percentage of 'none of the above' responses, which indicates a good level of smartness in HK schools.
In the sensing infrastructure category, higher popularity is shown for conventional sensing facilities, such as card readers and QR codes, whereas responses for AI-based items, such as facial recognition and voice recognition, are much lower. In the smart learning technology category, sufficient technology deployment in HK is shown to support online/remote learning. By contrast, AR/VR technologies are deployed at only a moderate level, and AI-based learning is still in its infancy among HK schools. In the smart pedagogy category, the survey results indicate that advanced pedagogical approaches have spread to the majority of HK schools. However, the applied pedagogies are mainly real-time student monitoring and collaborative learning, with few attempts at e-learning and personalised learning. To facilitate the conditional analysis in each category, the CTRR for each demographic index is calculated, and the results are shown in Tables 2-4. CTRR refers to the average number of items (excluding the 'none of the above' item) selected by the respondents in each category, which indicates the development level of school smartness. It can be seen that the CTRR values in all categories show similar trends. In Table 2, the CTRRs of principals and STEM teachers are higher than those of non-STEM teachers and parents, which means principals and STEM teachers perceive more smart functions than non-STEM teachers and parents, even though they are from the same schools with identical levels of smartness. This phenomenon could be explained by the better understanding of the smart campus among principals and STEM teachers, as shown in Table 1. In Table 3, secondary schools show a much higher CTRR than primary schools, which indicates uneven smart deployment across education levels. In Table 4, the CTRRs of DSS schools and private schools are higher than those of government schools and aided schools. This result again verifies the correlation between government funding and smart development in education. Stakeholder expectation and school fulfilment Adhering to HCD, stakeholders' needs regarding school teaching/learning should be collected to guide future smart campus development. In this survey, a set of ranking items is provided to the stakeholders to express their priorities in school teaching/learning. Two ranking questions are designed for each respondent. The first question asks the respondents to rank the items from the most to the least important in teaching/learning; it focuses on collecting stakeholders' perceived needs in smart learning. The second question asks respondents to rank the items from the best to the worst managed by the school; it aims to assess schools' fulfilment of stakeholders' needs. Owing to their differing roles and responsibilities, the expectations of school staff (i.e. principals, STEM teachers, and non-STEM teachers) and parents regarding the school teaching/learning items are evaluated separately, and the results are shown in Figs. 4a and b, respectively. The items listed in the right-hand side table are provided for ranking. For each item, the mean rank value is computed as the score to evaluate its importance as perceived by stakeholders. A smaller score value means the item is more important or better managed. The overall scores of each item in the two questions are aggregated into a scatter plot.
The score on the horizontal axis represents the stakeholders' expectations of the teaching/learning items, whereas the vertical axis represents the school's fulfilment of their needs. The red diagonal line indicates that the school perfectly fulfils stakeholders' expectations. Points above (or below) this line mean the school's management of those items is lagging behind (or exceeding) stakeholders' needs. As shown in Figs. 4a and b, 'understanding learning needs' is identified by the stakeholders as the most important item in school teaching/learning, whereas 'sustainability' is ranked as the least important. The items are distributed close to the red diagonal line, showing that current schools' management of teaching/learning closely aligns with the expectations of the different stakeholders in general. Nevertheless, by comparing Figs. 4a and b, it can be seen that the items in Fig. 4b are located closer to the red diagonal line than in Fig. 4a, which indicates that current school management is slightly better at adhering to parents' needs than to school staff's needs. As indicated in Fig. 4a, there is still room for school improvement in some aspects, such as maintaining teacher-student relationships, understanding students' learning needs, and maintaining teacher and student mental health and wellbeing, to fulfil the school staff's expectations. Acceptance of SLTs Hearing users' voices is always important before the launch of new products. By doing so, the technology and service providers can learn the users' perceived benefits of, and concerns about, the product. Likewise, investigating stakeholders' attitudes towards emerging technologies to be deployed is also important in smart campus development. A smart campus can be seen as a cyber-physical-social system where stakeholders and technologies interact closely under an interdisciplinary framework. In the smart campus context, maintaining stakeholders' positive attitudes towards new technologies is crucial to achieving effective and efficient school operation. In this survey, a Likert scale question is designed to collect and compare stakeholders' acceptance of various SLTs. The SLTs covered by these questions include cloud-based learning, collaborative learning, mobile-based learning, AR/VR-based learning, and IoT-based learning. For each technology, the question includes a statement with seven acceptance levels for the respondents to choose from. The statement is in the form of 'I am comfortable with XXX technology being used at my school'. The seven acceptance levels in the options are '1 - strongly agree', '2 - agree', '3 - somewhat agree', '4 - neither agree nor disagree', '5 - somewhat disagree', '6 - disagree', and '7 - strongly disagree'. The values 1-7 assigned to each acceptance level are used for quantitative analysis, where significance tests and correlation tests are performed to extract findings from the responses. The analysis is based on the acceptance score, which is the mean acceptance level among respondents. A lower score value means higher acceptance from the respondents. The analysis is conducted across different user groups, school education levels, and school funding types. The statistical results are shown in Tables 5-8. In Table 5, an overall acceptance score of 2.24 (much smaller than 4) is provided by the respondents, which shows their positive attitudes towards SLTs in general.
Comparing the different technologies, respondents show significant aversion to mobile-based learning and AR/VR-based learning. The main concern about mobile-based learning is that although mobile devices can provide a ubiquitous environment for learning, students can also be distracted by the many attractive alternatives on mobile devices, such as games, social media etc. In Table 6, principals and STEM teachers show higher overall acceptance of SLTs than non-STEM teachers and parents. STEM teachers even show significantly higher acceptance of cloud-based learning and collaborative learning. This high acceptance from STEM teachers is mainly because these two techniques have been widely employed in STEM education in recent years and have received very positive feedback. In Table 7, respondents from primary and secondary schools show similar attitudes towards SLTs. However, as analysed in Section 5.3, the current smartness of primary schools lags behind that of secondary schools in HK. This calls for more even smart deployment across primary and secondary schools, given the comparable needs of their stakeholders. In Table 8, no significance was detected among the different school funding types, meaning the different types of schools show similar acceptance of new learning technologies although they receive different levels of funding and their current school smartness levels are uneven. Cybersecurity and privacy Cybersecurity and data privacy are always critical concerns when dealing with cyber systems and sensitive human-related data. These systems and data must be handled according to the legal/regulatory requirements. As the realisation of campus smartness relies heavily on the availability of high-dimensional data, in particular personal data with subjective opinions, protecting the data of students and staff would be a challenge in smart campus development, especially on an information platform based on emerging technologies such as cloud computing and IoT [35]. This survey area focuses on investigating stakeholders' concerns about security and privacy issues in the context of a smart campus. A smart campus is exposed to various cybersecurity issues. A cyber-secure campus can provide a peace-of-mind cyber environment for the students and staff when they use the online teaching/learning facilities. This means they are free to use online materials based on their needs, without worrying about disruption of services or leakage of sensitive data. The main cybersecurity concerns about a cloud-IoT-based smart campus are related to data insecurity. For example, intruders may get access to the campus data and perform attacks because the cyber system of a campus is usually open. A solution for mitigating the attack issues could be increasing data redundancy and protection against denial-of-service attacks [36]. The data system in a smart campus needs to meet the requirements of confidentiality, authenticity, integrity, and availability. In this survey, a Likert scale question is again designed to collect stakeholders' concerns about cybersecurity and privacy issues. This Likert scale question consists of four sub-questions, listed below, to investigate stakeholders' opinions: (Q1) Are you comfortable with increased data collection? (Q2) Do you believe your school's cyber system can reliably protect the data? (Q3) Do you believe your school can ethically manage the data? (Q4) Do you believe the benefits of data collection outweigh its privacy risks?
By answering the above four sub-questions, the overall attitude of stakeholders towards cybersecurity and privacy in the smart campus can be obtained. The users' acceptance of increased data collection is discretised into seven levels, where a smaller value indicates a more positive attitude towards increased data collection (i.e. fewer concerns about cybersecurity and privacy). Significance tests and correlation tests are also performed on the responses. The analysis is based on the responses across different user groups, school education levels, and school funding types. The statistical results are shown in Tables 9-12. In the results, the acceptance score from all respondents is well below 4, which indicates stakeholders' very positive attitudes towards increased data collection on campus. In Table 9, the significance test result also shows that the acceptance score on Q4 is significantly higher than those of the other three sub-questions, meaning respondents are more concerned about the privacy risk than about data security issues. In Table 10, there are no significant differences among the responses from the four user groups. However, we can observe that non-STEM teachers show lower acceptance of increased data collection. The reason for this could be their lack of knowledge of smart technologies and their lower reliance on data-driven applications in teaching non-STEM subjects. In Table 11, although no significance was detected between primary and secondary schools, the respondents from secondary schools still show fewer concerns about cybersecurity and privacy. Because secondary students are older and more experienced than primary students, teachers and parents are more confident that secondary students are aware of cybersecurity and privacy issues and are capable of dealing with such problems, which could explain the fewer concerns from secondary schools. In Table 12, the acceptance scores for the four school funding types are comparable, which means there is no significant link between school funding source and the security/privacy concerns among stakeholders. Correlation tests Furthermore, two Spearman tests have been conducted to investigate (i) the correlation between respondents' knowledge of the smart campus and their acceptance of SLTs, and (ii) the correlation between respondents' knowledge of the smart campus and their concerns about cybersecurity/privacy. On top of that, significance tests were performed to find the technologies and/or concerns that are significantly correlated with respondents' knowledge of the smart campus. The p-values computed by the significance tests are presented in Tables 13 and 14, respectively, for acceptance of SLTs and concerns about cybersecurity/privacy. In Table 13, AR/VR-based learning is shown to be significantly correlated with respondents' knowledge. This result implies that the respondents' aversion to AR/VR-based learning in Table 5 can be explained by their lack of knowledge about these technologies. Since AR and VR are still new technologies for supporting smart learning, respondents' knowledge of the smart campus largely determines their knowledge of AR/VR-based learning. Respondents with higher knowledge of the smart campus tend to be more accepting of AR/VR techniques in learning. In Table 14, it can be seen that the responses to all sub-questions show a significant correlation with respondents' knowledge of the smart campus. The respondents with more knowledge of smart campuses tend to see the benefits of data collection and have much lower concerns about cybersecurity/privacy.
This finding indicates that people's aversion to new technologies (e.g. cybersecurity/privacy concerns) can, to a large extent, be explained by their misunderstanding of the technologies. This survey provides statistical information to guide the future development and planning of the smart campus and aims to build a better understanding of the current smart campus landscape in HK. Adhering to HCD, the recommendations with regard to smart campus development are summarised in six main areas: promotion, funding and infrastructure, curriculum and pedagogy, teacher support, privacy, and sustainability. Promotion The survey shows that respondents who have a higher level of knowledge of the smart campus tend to be more accepting of new techniques in learning. In other words, respondents' knowledge of the smart campus determines their understanding and acceptance of new technology-based learning. This finding indicates that stakeholders tend to use the new technologies only if they have a good understanding of them. The survey revealed that HK users' aversion to AR/VR-based learning arises precisely because AR/VR technology is still new to them, and the lack of knowledge or experience creates barriers between the users and the technology. Therefore, improving stakeholders' knowledge of the smart campus is shown to help accelerate its popularity, which encourages the cultivation of stakeholders for future smart campus development. This could be achieved by increasing the promotion of the smart campus and providing more opportunities for real smart experiences for the public. The promotion activities could target not only students, teachers, and other school staff but also parents. It is necessary to seek effective measures to motivate parents to understand more about smart campuses and enhance their knowledge of the smart features available at their children's schools. Funding and infrastructure The data suggest that the HK government and enterprises should increase funding support for smart facility reinforcement in primary schools. The quality of early education is critical to children's growth. Especially in the current stage of transition towards smart education, children's early exposure to smart technology is essential to lay the foundation for their future mastery of smart functions in secondary school. Therefore, it is worth considering how to accelerate the smart infrastructure of primary schools and enable students to adapt to smart technologies in learning at an earlier age. In particular, AI-based learning has rarely been adopted, so there is a pressing need to upgrade the current sensing infrastructure towards a more AI-based mode to meet the needs of future smart campus development. Curriculum and pedagogy Compared with STEM subjects, the fusion of emerging technologies and pedagogies is rarely experienced in non-STEM subjects. As a result, non-STEM teachers lack knowledge of and capabilities in smart technologies and show less reliance on data-driven applications. Hence, more effort should be devoted to exploring the use of smart technologies to enhance teaching and learning performance in non-STEM subjects. Moreover, as AI-based learning and AR/VR-based learning are still in their infancy in HK schools, it is essential to increase the penetration of AI and AR/VR in daily school learning and management.
While encouraging the implementation of remote learning and personalised learning, it would be more beneficial if the developers could establish an open-source mobile platform to provide students with a more balanced view between learning and recreation. A good mobile platform can help teachers follow students' interests. The open-source feature would make the mobile platform more adaptable to new technologies and pedagogies. Teacher support Although smart technologies enhance teaching performance, they also bring pressure and challenges to teachers. There is definitely a need for ongoing professional development for teachers. Rather than merely equipping teachers with the capability to use new technologies in teaching, the emphasis should be on the pedagogical use of new technologies in specific subjects. Moreover, the survey revealed that parents in HK have high expectations of teachers in a smart campus. The survey results indicate that teachers still have room for improvement in the areas of 'teacher-student relationships' and 'understanding students' learning needs', so more attention should be paid to these aspects in future teacher training. In addition, since children's early exposure to smart technology will bring great benefits to their future learning, primary schools require more staff training to prepare for the future smart education revolution. It is worth noting that current school management is better at adhering to parents' needs than to staff's needs. Teachers' demands and their mental health and wellbeing are often overlooked by the schools. Therefore, it is recommended that schools and relevant departments help teachers withdraw from various administrative and routine tasks by using smart technologies, sparing enough time and energy for them to focus on SPs and concentrate on their students. Privacy In smart campus implementation, it is important to find effective measures to mitigate the privacy risks, to make users more comfortable with data collection and more confident in the data management system. It is recommended that future smart campus research and development focus on the following problems: • How to model and decide the trade-off between stakeholders' perceived benefits and their perceived privacy risk from the increased data collection in the smart campus context? • How to optimally balance the trade-off between the benefits and privacy risk in smart campus design and operation? (i.e. how to optimally protect stakeholders' privacy while maintaining the effectiveness of smart campus applications?) • How to satisfy the differing privacy needs of individual stakeholders? • How to incorporate legal and regulatory constraints to protect stakeholders' privacy? With the above considerations, recommendations can be given in both technical and policy areas to achieve a privacy-aware transition from the existing campus to a smart campus. On the one hand, upon providing smart services, the teams that design, plan and implement the data-driven smart campus applications need to take responsibility for investigating individuals' privacy concerns and providing personalised data control to protect individual data, as well as following the legal/regulatory requirements and principles on data protection. On the other hand, the government and policymakers need to update laws, policies, and regulations in a timely manner to protect individuals' privacy while ensuring a certain level of context awareness within the campus.
Moreover, the correlation test on the survey data suggests that an effective way to alleviate people's concerns about cybersecurity and privacy is to improve their knowledge of the smart campus. It is believed that, with continually reinforced promotion, training, and use of smart services, people will become more familiar with and place more trust in smart technologies and gradually accept them as ordinary tools to support their living, learning, and socialising activities, which progressively builds their confidence in the smart campus. Sustainability The survey indicates that a significant portion of HK schools currently lacks facilities for smart waste management and sustainable energy. Also, 'sustainability' is ranked as the least important item by parents and school staff. To promote the sustainable development of the smart campus, it is recommended that the relevant departments promote the integration of sustainable energy and elevate citizens' environmental awareness by establishing attractive energy policies and products, e.g. feed-in tariffs. Conclusion Adhering to HCD, this paper presents a user-case survey conducted among HK primary and secondary schools, aiming to capture and understand the perspectives of school stakeholders on education applications in the context of the smart campus. The survey is structured to investigate six areas: demography, knowledge of the smart campus, current school smartness, stakeholder expectation and school fulfilment, acceptance of SLTs, and cybersecurity and privacy. All the extracted findings indicate technological readiness in primary/secondary schools and positive attitudes of school stakeholders towards the smart campus transition in HK. This includes fast-developing knowledge of the smart campus, adequate infrastructure and technology to support school smartness, excellent school fulfilment of stakeholders' needs and expectations, wide acceptance of new technologies and pedagogies, and manageable concerns about cybersecurity and privacy. Targeted recommendations with regard to the areas of promotion, funding and infrastructure, curriculum and pedagogy, teacher support, privacy, and sustainability are also given to support the next stage of smart campus planning and development. With one of the region's well-established educational systems and decades of continuous effort in adopting the latest technologies in school education in Hong Kong, the findings and recommendations of the survey are also expected to provide a benchmark reference on the smart campus concept for international educational providers, governments and technology companies that will deliver smart solutions.
2020-09-10T10:22:34.287Z
2020-08-17T00:00:00.000
{ "year": 2020, "sha1": "6f9ad9bd8a3b74aaa4c38549aa032e43da2134b5", "oa_license": "CCBYNC", "oa_url": "https://ietresearch.onlinelibrary.wiley.com/doi/pdfdirect/10.1049/iet-smc.2020.0047", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "863645c57d5ba159074674020b97fdcf7e3444a0", "s2fieldsofstudy": [ "Education", "Computer Science", "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Business" ] }
206233838
pes2o/s2orc
v3-fos-license
High-resolution DNA methylome analysis of primordial germ cells identifies gender-specific reprogramming in mice Dynamic epigenetic reprogramming occurs during mammalian germ cell development, although the targets of this process, including DNA demethylation and de novo methylation, remain poorly understood. We performed genome-wide DNA methylation analysis in male and female mouse primordial germ cells at embryonic days 10.5, 13.5, and 16.5 by whole-genome shotgun bisulfite sequencing. Our high-resolution DNA methylome maps demonstrated gender-specific differences in CpG methylation at genome-wide and gene-specific levels during fetal germline progression. There was extensive intra- and intergenic hypomethylation with erasure of methylation marks at imprinted, X-linked, or germline-specific genes during gonadal sex determination and partial methylation at particular retrotransposons. Following global demethylation and sex determination, CpG sites switched to de novo methylation in males, but the X-linked genes appeared resistant to the wave of de novo methylation. Significant differential methylation at a subset of imprinted loci was identified in both genders, and non-CpG methylation occurred only in male gonocytes. Our data establish the basis for future studies on the role of epigenetic modifications in germline development and other biological processes. [Supplemental material is available for this article.] In post-implantation mammalian embryos, a population of pluripotent epiblast cells gives rise to primordial germ cells (PGCs), the precursors of spermatozoa and oocytes; the fate of these PGCs is specified during gastrulation (Mochizuki and Matsui 2010). In mice, PGCs initially form a small cluster of 30-50 alkaline phosphatase-positive cells in the extra-embryonic mesoderm on embryonic day 7.25 (E7.25) (Ginsburg et al. 1990). Once the fate of these PGCs has been determined, they start proliferating and migrating into the developing gonadal region (the genital ridge). Early PGCs migrate from the dorsal aspect of the hindgut between E9.0 (~150 PGCs) and E9.5 (~250 PGCs), separate into left and right groups of individual cells, and migrate laterally across the dorsal body wall.
At E10.5, ~1000 PGCs reach the genital ridges and continue to migrate, and by E12.5 (~8000 PGCs), migration into the genital ridges is complete. Within the genital ridges, PGCs continue to proliferate, reaching about 26,000 cells by E13.5, at which point cell division stops, and they undergo male or female gametogenesis. Notably, PGCs are sexually bipotent at the migrating stage, and sex-specific differentiation begins after colonization of the genital ridges around E10.5 (Saga 2008). At E13.5, male germ cells undergo cell cycle arrest at G1/G0 and do not enter meiosis during the embryonic stages of development, whereas female germ cells enter meiotic arrest. During migration and proliferation, PGCs undergo global epigenetic reprogramming, including exchange of histone variants, remodeling of histone modifications, and erasure of DNA methylation, which is thought to be complete around E13.5 in male and female embryos (Seki et al. 2005; Hajkova et al. 2008; Popp et al. 2010; Guibert et al. 2012). Prior to colonizing the genital ridges, PGCs exhibit parent-of-origin-specific imprinting methylation marks (called genomic imprinting), which enforce the mono-allelic expression of many imprinted genes. Most parental methylation imprints on paternal and maternal alleles are erased in nonmigrating PGCs (between E11.5 and E12.5) (Hajkova et al. 2002; Yamazaki et al. 2005). In addition, some germline-specific genes such as Ddx4, Dazl, and Sycp3 are initially expressed in PGCs between E10.5 and E11.5, and DNA demethylation concomitantly occurs in the regions flanking these genes (Maatouk et al. 2006). Following gonadal sex determination, germ cells acquire the ability for sex-specific, de novo methylation. In the male germline, G1-arrested male PGCs (usually called gonocytes) are highly methylated, with increased expression of DNA de novo methyltransferase genes during mitotic cell division at fetal stages (Sakai et al. 2004), and the DNA methyltransferase (Dnmt) family members Dnmt3a, Dnmt3b, and Dnmt3l play essential roles in the establishment of retroviral methylation and paternal methylation imprints during spermatogenesis (Kaneda et al. 2004; Kato et al. 2007). In the female germline, increasing DNA methylation and establishment of maternal methylation imprints occur predominantly in meiotically arrested growing oocytes at postnatal stages (Lucifero et al. 2004; Hiura et al. 2006). Dnmt3a and Dnmt3l are also necessary for this process, while Dnmt3b seems dispensable (Kaneda et al. 2004, 2010; Smallwood et al. 2011). These methylation patterns have been investigated by indirect immunostaining methods using antibodies against 5-methylcytosine (Seki et al. 2005; Hajkova et al. 2008; Abe et al. 2011) and/or locus-specific analyses using bisulfite conversion-based methods, in which unmethylated cytosine is converted to uracil (Hajkova et al. 2002; Lucifero et al. 2004; Yamazaki et al. 2005; Hiura et al. 2006; Maatouk et al. 2006; Kato et al. 2007; Kaneda et al. 2010). Recent studies combining analyses of DNA methylation and whole-genome microarrays or high-throughput sequencing technologies have revealed the characteristic DNA methylation profiles of various types of cells (called "DNA methylomes") (Laird 2010). In particular, the combination of bisulfite treatment and high-throughput sequencing has allowed researchers to map every methylated and unmethylated cytosine in the genome (Cokus et al. 2008; Lister et al. 2009).
However, since these methods require large amounts of DNA (typically micrograms, i.e., at least 10^6 mammalian cells), samples that are only available in small amounts, such as germline cells, are difficult to evaluate by these methods. A pilot study of high-throughput bisulfite sequencing in mouse PGCs (at E13.5) showed global reduction of CpG methylation in genomes or genomic compartments, but demethylation-targeted DNA sequences have not been fully characterized due to low coverage (Popp et al. 2010). In other sequencing-based assays, reduced representation bisulfite sequencing (RRBS) allows global DNA methylation analysis in oocyte genomes, but the targets of analysis are limited to CpG-rich sequences (Smallwood et al. 2011). Thus, in germline development, identifying functional DNA methylation loci is a fascinating issue that remains relatively unexplored. In this study, we examined genome-wide methylation profiles in developing germ cells of mice using high-throughput shotgun sequencing of bisulfite-treated DNA (whole-genome shotgun bisulfite sequencing; WGSBS), which accurately quantifies whole-genome methylation levels at single-base resolution. Using Illumina sequencing libraries, we scaled down the construction and analysis to nanogram quantities of DNA by generating a new WGSBS library, termed the post-bisulfite adapter tagging (PBAT) method (Miura et al. 2012). Here, we provide complete maps of cytosine methylation in developing male and female PGCs during gonadal sex determination. Results At E10.5, the male and female PGCs of mouse embryos migrate directionally from the dorsal body wall, the mesentery of the hindgut, into the genital ridges and have a very similar morphology; in contrast, primary testes and ovaries can be distinguished morphologically at E13.5, the onset of sex differentiation (Fig. 1A; Sabour et al. 2011). To determine the sex of E10.5 mouse embryos, we carried out PCR-based sex genotyping using DNA from individual embryonic heads. We collected thousands of PGCs by fluorescence-activated cell sorting (FACS) purification from each dorsal mesentery or fetal gonad of Pou5f1-DPE-GFP male and female mouse embryos at E10.5, E13.5, and E16.5 (male PGCs: E10.5mPGC, E13.5mPGC, and E16.5mPGC; female PGCs: E10.5fPGC, E13.5fPGC, and E16.5fPGC) (Supplemental Fig. 1; Yoshimizu et al. 1999). Green fluorescent protein (GFP) intensity was dramatically reduced but detectable in E16.5fPGC, consistent with previous results (Supplemental Fig. 1; Sabour et al. 2011). To obtain base-pair-resolution DNA methylomes, we performed WGSBS analysis using the Illumina HiSeq 2000. WGSBS libraries were generated from 2000-5000 PGCs using a non-amplification technique termed PBAT (Supplemental Fig. 2; Miura et al. 2012). We generated 394 Gb of single-read (SR) and paired-end (PE) sequence data. Of these, 149 Gb (37.9%) were successfully aligned to either strand of the mouse genome, and the average read depth (i.e., the number of reads mapped to a given position) was 9.05x-12.23x for the PGCs examined (Supplemental Table 1). In total, >91% of the genomic sequence was covered by at least one sequence read (Supplemental Fig. 3). To elucidate the distribution of CpG methylation on regional and genome-wide scales, we created dot plots of average CpG methylation levels in sliding 200-kb windows throughout each chromosome.
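As a rough illustration of this kind of windowed summary (not the authors' actual pipeline), the sketch below averages simulated per-CpG read counts over consecutive 200-kb windows of one chromosome; the positions, depths, and methylation calls are all invented, and non-overlapping windows are used for simplicity instead of sliding ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-CpG data for one chromosome: position (bp),
# methylated read count, and total read count at that CpG.
n_cpg = 100_000
chrom_len = 20_000_000
pos = np.sort(rng.integers(0, chrom_len, n_cpg))
total = rng.integers(3, 15, n_cpg)            # read depth per CpG
meth = rng.binomial(total, 0.15)              # methylated read calls

# Average methylation level in consecutive 200-kb windows.
window = 200_000
win_idx = pos // window
n_win = chrom_len // window
meth_sum = np.bincount(win_idx, weights=meth, minlength=n_win)
total_sum = np.bincount(win_idx, weights=total, minlength=n_win)
win_level = np.divide(meth_sum, total_sum,
                      out=np.zeros(n_win), where=total_sum > 0)

print(win_level[:5])    # fraction of methylated CpG calls per window
```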
Interestingly, differences in the average methylation levels of these 200-kb windows in male and female PGCs were observed in autosomal chromosomes, rather than the X chromosome (Fig. 2; Supplemental Table 2). Global demethylation of each chromosome appeared between E10.5 and E13.5; however, certain chromosomal regions were partially methylated (>=10%) at E13.5 in male and female PGCs, and these regions were more highly methylated (20%-60%) than other regions at E10.5 (Fig. 2; Supplemental Fig. 4). In fact, while differences in the methylation of autosomal chromosomes were observed in male and female PGCs, the methylation levels of individual 200-kb windows were significantly correlated among the examined PGCs, except for those in E16.5mPGC (Supplemental Fig. 4). In addition, particular genomic compartments, e.g., L1/ERVK/ERV1 retrotransposons (see after the next paragraph) and satellite DNA, were better represented in the demethylation-resistant regions (Supplemental Table 3). These observations indicated that particular regions in germ cell genomes avoided DNA methylation reprogramming (demethylation) during gonadal sex determination. The demethylation-resistant genomic compartments are described below. Furthermore, while female PGCs did not exhibit increased global methylation between E13.5 and E16.5, the methylation of male PGCs showed an obvious increase during this developmental period (Fig. 2; Supplemental Table 2). These results demonstrated that global CpG methylation levels throughout individual chromosomes of male PGCs were higher than in female PGCs during PGC development. Moreover, genome-wide de novo CpG and non-CpG methylation was acquired during fetal male germ cell development. Because non-CpG methylation appeared as a rare event during PGC development (except for E16.5mPGC), we focused our methylome analysis on CpG methylation. Since previous studies revealed a significant correlation between CpG frequency and methylation within intra- and intergenic regions in mammalian somatic and germ cells (Edwards et al. 2010; Kobayashi et al. 2012), we compared CpG densities and methylation levels to identify genome-wide differential methylation patterns in PGCs. Methylation levels of individual CpGs covered by at least three sequence reads (>85% of genomic CpGs) (Supplemental Fig. 5) were calculated, and CpG density was defined as the number of CpG dinucleotides per 200-nucleotide (nt) window (e.g., 1 CpG dinucleotide per 200 nt corresponded to a density of 0.005) (Fig. 3). At lower CpG densities (under 0.025, 64%-66% of genomic CpGs), the tendency of average methylation levels was similar to that in previous reports (Fig. 1). However, at moderate CpG densities ranging from 0.030 to 0.080 (~15% of genomic CpGs), except for exons, methylation levels were higher in both male and female PGCs at E10.5/E13.5. These results indicated that some CpGs, found in introns and intergenic regions at moderate CpG densities, remained partially methylated in PGCs after global demethylation. Furthermore, methylation levels fell off sharply at higher CpG densities (>0.085), consisting mostly of CpG-rich promoters and/or CpG islands. In fact, promoter regions around transcription start sites (TSSs) were hypomethylated in all investigated PGCs (Supplemental Fig. 6). Transposable elements (TEs) are mobile genetic sequences that comprise a large percentage of mammalian genomes; 37% of the mouse genome is made up of these elements (Waterston et al. 2002).
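Before turning to the transposable-element analysis below, here is a minimal sketch of the CpG-density stratification just described; the sequence is random, the per-window methylation levels are placeholders rather than real measurements, and density is simply the CpG count per 200-nt window (so one CpG corresponds to 0.005), as in the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated sequence; the real analysis would use the reference genome
# together with the observed per-CpG methylation levels.
seq = "".join(rng.choice(list("ACGT"), size=200_000))

window = 200
densities, levels = [], []
for start in range(0, len(seq) - window + 1, window):
    sub = seq[start:start + window]
    n_cpg = sub.count("CG")
    if n_cpg == 0:
        continue
    densities.append(n_cpg / window)       # 1 CpG per 200 nt -> 0.005
    levels.append(rng.uniform(0, 1))       # placeholder methylation level

# Average methylation level within CpG-density bins of width 0.005.
bins = np.arange(0, 0.105, 0.005)
idx = np.digitize(densities, bins)
for b in range(1, len(bins)):
    sel = [lv for lv, i in zip(levels, idx) if i == b]
    if sel:
        print(f"density {bins[b - 1]:.3f}-{bins[b]:.3f}: "
              f"mean level {np.mean(sel):.2f}")
```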
Accordingly, we investigated the methylation of four major classes of transposable elements (long interspersed nuclear elements [LINEs], short interspersed nuclear elements [SINEs], long terminal repeats [LTRs], and DNA transposons) (Supplemental Fig. 7). At E10.5, higher levels of DNA methylation were observed at LINE and LTR regions with a relatively high CpG density (>0.030); thus, these repeated sequences were largely composed of methylated CpGs at moderate CpG densities in PGCs. Interestingly, at E13.5, partial CpG methylation (range, 20%-50%) in LINEs and LTRs at moderate CpG densities was observed, and almost all the other CpGs were hypomethylated in these elements. In addition, CpG methylation levels at lower CpG densities significantly increased, while those at relatively higher CpG densities were similar between E13.5 and E16.5 in male PGCs. We also investigated the methylation of major families of LINE/LTR retrotransposons: L1 LINEs, L2 LINEs, MaLR LTRs, ERVK LTRs, ERVL LTRs, and ERV1 LTRs. Interestingly, the demethylation-resistant CpGs with a higher CpG richness were observed only in L1 LINEs, ERVK LTRs, and ERV1 LTRs (Fig. 3; Supplemental Fig. 8). These results suggested that each PGC had unique sequence- and CpG-density-dependent methylation patterns, and particular subsets of LINE/LTR retrotransposons were resistant to global demethylation during PGC migration and sex determination. CpG islands (CGIs) are prominent in the mammalian genome due to their GC-rich base composition and high density of CpG dinucleotides and have been found within or near promoters of mammalian genes. Recently, Illingworth et al. (2010) identified 23,021 mouse CGIs by deep sequencing of isolated, zinc finger CXXC domain-binding unmethylated DNA clusters. Using these identified CGIs, we calculated the average methylation levels of each CGI to identify gender-specific differentially methylated regions (DMRs) in PGCs (Supplemental Table 4). Although ~15% of CGIs were partially or highly methylated (>=10% methylation levels), most CGIs were hypomethylated (<10% methylation levels) at E10.5 in male and female PGCs (Supplemental Fig. 9). Moreover, while almost all CGIs were hypomethylated in PGCs after gonadal sex determination, a small percentage of these CGIs (~4%) were only partially methylated in E16.5mPGC. In addition, ~160-170 CGIs were incompletely demethylated (>=10% methylation levels) in male and female PGCs at E13.5, and many were located at introns or intergenic regions (Supplemental Fig. 10; Supplemental Table 5). Among the CGIs, demethylation-resistant sequences in PGCs, e.g., regions at the Gm7120, Mid1, and Sfi1 loci (Guibert et al. 2012), were also re-identified. Typical germline DMRs were fully methylated (nearly 100%) in one mature gamete and unmethylated (nearly 0%) in the other; however, we could not identify such regions in PGCs at any investigated embryonic stage because of their undermethylated status (Smallwood et al. 2011; Kobayashi et al. 2012). Next, we identified male- and female-germ-cell preferentially methylated regions (mgPMRs and fgPMRs, respectively) from CGIs at each embryonic stage and determined the significance of differences using Mann-Whitney's U-test (Supplemental Table 6). Although average methylation levels in male germ cells were higher than in female germ cells, the number of fgPMRs (n = 271) was larger than the number of mgPMRs (n = 97) at E10.5, and more than half of the fgPMRs (n = 191) were located on chromosome X (X-DMRs) (Fig. 4A; Table 1).
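A hedged sketch of how a single CGI might be tested for gender-preferential methylation with the Mann-Whitney U-test mentioned above; the per-CpG methylation levels below are invented, and the authors' exact selection criteria are those in their Supplemental Table 6, not what is shown here.

```python
from scipy.stats import mannwhitneyu

# Hypothetical per-CpG methylation levels (0-1) within one CGI,
# measured separately in male and female PGCs at the same stage.
male_cgi   = [0.42, 0.35, 0.50, 0.28, 0.47, 0.39]
female_cgi = [0.05, 0.12, 0.08, 0.03, 0.10, 0.06]

u_stat, p_value = mannwhitneyu(male_cgi, female_cgi, alternative="two-sided")

# A CGI with significantly higher levels in males would be a candidate mgPMR
# (and vice versa for fgPMRs), subject to multiple-testing correction.
print(f"U = {u_stat}, p = {p_value:.4f}")
```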
After gonadal sex determination, the number of fgPMRs decreased (n = 36 and 35 at E13.5 and E16.5, respectively), with hypomethylation observed on almost all X-DMRs (Supplemental Fig. 11). In contrast, the number of mgPMRs greatly increased in autosomal chromosomes but rarely in chromosome X (n = 1464, chromosomes 1-19; n = 4, chromosome X) at E16.5 (Table 1). Based on our previous DNA methylome data, more than half (48.4%, n = 709) of mgPMRs at E16.5 were hypermethylated with $80% methylation levels in mature sperm cells; of these, 292 mgPMRs were identified as germline DMRs ($80% methylation in sperm and #20% methylation in fully grown oocytes) (Kobayashi et al. 2012). Further investigations of the methylation levels of CGIs on chromosome X are presented below. Notably, CGIs at imprinted germline DMRs, known to control the imprinting of a given domain as imprinting control regions (ICRs), were moderately methylated (near 40%) at E10.5 (Fig. 4B); however, a paternally imprinted Dlk1-Meg3 intergenic DMR (IG-DMR) was unexpectedly identified as an fgPMR. At E13.5, while these known ICRs were almost completely demethylated (<5%), some maternally imprinted ICRs exhibited partial methylation (5%-10%) in fPGCs; these were identified as fgPMRs. At E16.5, all three known paternally imprinted ICRs, i.e., H19, Rasgrf1, and Dlk1-Meg3, showed increased methylation levels (22.8%, 14.4%, and 8.4%, respectively) and were re-identified as mgPMRs. Methylation levels of a CGI on the Dlk1-Meg3 IG-DMR in male PGCs were lower than expected, based on a previous report of traditional bisulfite sequencing (conventional cloning and Sanger sequencing) by Henckel et al. (2012) (methylation levels; 30% at E15.5, 89% at E17.5 in male gonocytes) (Supplemental Fig. 12), but the reason for this discrepancy is unknown. Surprisingly, seven known maternally imprinted ICRs (among 20 maternally imprinted ICRs that were previously identified in mice), found within Peg10, Mest, Peg3, Snrpn, Kcnq1, Slc38a4, and Impact genes, were also identified as fgPMRs at E16.5, and some of these ICRs (i.e., Peg10, Peg3, and Impact) exhibited sex-differential methylation, even at E13.5. Female-germ-cell-specific partial methylation at Mest and Snrpn ICRs were confirmed in E16.5 by conventional bisulfite sequencing (Supplemental Fig. 13). Taken together, our results indicated that some maternally methylated imprinted regions contained partial methylation in primary oocytes during fetal stages. Next, we investigated the methylation profiles of germlinespecific genes and pluripotency-associated genes containing CGIs. While some germline-specific genes, i.e., Ddx4, Dazl, Sycp3, and Figla, and PGC-specific genes (recently identified by microarraybased analysis) (Sabour et al. 2011), i.e., Fkbp6, Mov10l1, and Spo11, were partially or moderately methylated (;20%-60%), most other genes were hypomethylated in male and female PGCs at E10.5 (Fig. 4B). Unexpectedly, at E13.5, Fkbp6 and Spo11 were identified as mgPMRs and fgPMRs, respectively, and all other investigated genes were hypomethylated at E13.5 and E16.5, and throughout PGC progression. Moreover, coefficient of variation analysis showed that male and female CGI methylation patterns on autosomal chromosomes in PGCs were significantly correlated during migration (R 2 = 0.8616 at E10.5), but this correlation became much weaker after sex determination (R 2 = 0.5563 at E13.5 and R 2 = 0.0547 at E16.5) (Fig. 4C). 
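For the correlation figures quoted above, a minimal sketch of one way to compute an R² between matched male and female per-CGI methylation levels is shown below; the arrays are placeholders, not study data.

```python
# Illustrative sketch: coefficient of determination between paired
# per-CGI methylation levels (cf. the R^2 values reported for Fig. 4C).
import numpy as np

def r_squared(male_levels, female_levels):
    r = np.corrcoef(male_levels, female_levels)[0, 1]
    return r ** 2
```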
Additionally, CGI methylation patterns on chromosome X were significantly correlated throughout the investigated fetal stages, especially at E16.5 (R 2 = 0.946). These results indicated that sex differences in the CGI methylation of autosomes began to appear after gonadal sex determination through increased CpG methylation in male gonocytes, while X-linked genes were resistant to de novo methylation. Finally, we examined cytosine methylation in male gonocytes, in which the presence of non-CpG methylation has not been reported. We mapped CpG and non-CpG methylation data for each gene to a ''gene model,'' which contained annotated genomic features in the neighborhood of transcribed genes, including promoters/transcription start sites (TSSs, 6500 bp), upstream intergenic regions (À2 to À5 kb from TSSs), and downstream regions (+2 to +5 kb from TSSs). Methylation levels of individual cytosines, covered by at least five sequence reads, were calculated, covering 80% of the genome (Supplemental Fig. 3). Upstream and downstream regions showed increased methylation levels in both CpG and non-CpG sites, while promoter regions were deeply hypomethylated (Fig. 5A,B). Our results indicated that non-CpG methylation may be accompanied by intra-/intergenic CpG methylation in male gonocytes. Furthermore, sequence context analysis showed that, at non-CpG sites (Fig. 5C), the nucleotide immediately 39 to highly methylated regions was most likely to be adenine (CpA), which is also a feature of non-CpG methylation in oocytes and embryonic stem (ES) cells Ziller et al. 2011). Discussion In this study, we performed WGSBS mapping with thousands of mammalian cells (equal to ;20-50 ng of genomic DNA) using the PBAT method. This DNA methylome study demonstrated genomewide DNA demethylation, with erasure of genomic imprinting during gonadal sex determination and gender-specific differences in genome-wide and gene-specific (a part of CGIs) DNA methylation levels in developing PGCs. Some of these global/local changes in DNA methylation during PGC progression were consistent with previous as well as more recent studies (Hajkova et al. 2002;Maatouk et al. 2006;Laird 2010;Popp et al. 2010;Seisenberger et al. 2012;Hackett et al. 2013). However, our complete DNA methylome maps revealed important and novel details of DNA methylation and demethylation processes during PGC development. Some of the new findings from this study include the following: (1) PGC DNA methylomes exhibited sex-and chromosome-specific differences in genome-wide CpG and CGI methylation during early to late PGC development; (2) LINE/LTR retrotransposons were resistant to DNA demethylation at high CpG densities during PGC migration; (3) some maternally imprinted regions (genes) remained partially methylated in primary oocytes during fetal stages; and (4) non-CpG methylation occurred in male gonocytes during mitotic arrest ( Fig. 6; Supplemental Fig. 14). Our data and techniques can therefore serve as a platform for future studies to elucidate the role of epigenetic modifications in germline development and other biological processes. Historically, most CGIs were thought to be unmethylated in various tissues; however, a substantial proportion of CGIs were reported to undergo methylation during genomic imprinting, X-chromosome inactivation (one of the two copies of the X chromosome present in females is inactivated), carcinogenesis, and even in normal tissues (Illingworth and Bird 2009). 
Moreover, the methylation status of CGIs in promoter regions is highly correlated with gene expression. CGIs have been studied by multiple array-based and sequencing-based global methylation assessments, and several imprinted, cancer-specific, and tissue-specific DMRs have been identified (Bock et al. 2010;Laird 2010). CGI methylation profiling revealed increasing mgPMRs, and significant RefSeq annotated genes at areas À2 to À5 kb from the TSS (upstream/intergenic), 6500 bp from the TSS (promoter), and +2 to +5 kb from the TSS (downstream). (C ) WebLogo plots for sequences proximal to highly methylated cytosines (mC/C $ 50%) in all three sequence contexts in E16.5mPGCs. differences were observed in autosomal chromosomes during mitotic arrest but not in chromosome X, where chromosome-wide de novo methylation occurred (Fig. 2). Our findings indicated that CGIs (mostly associated with X-linked genes) on chromosome X were resistant to de novo methylation during fetal spermatogenesis. This is reminiscent of previous methylome studies that showed an absence of sperm-specific methylated DMRs (compared between fully grown oocytes and sperm) on chromosome X (Smallwood et al. 2011;Kobayashi et al. 2012). The mechanism that protects the genome from de novo methylation pressure is unknown, but DNA-binding factors may maintain the unmethylated status of X-linked genes (Illingworth et al. 2010). Meanwhile, among mgPMRs at E16.5, all three known paternal ICRs were reidentified. This result was consistent with previous studies, which revealed that methylation of paternal ICRs is established during PGC mitotic cell division before birth Hiura et al. 2007;Kato et al. 2007). In contrast, the number of fgPMRs decreased during PGC progression and was quite low at E16.5. Interestingly, although some of these fgPMRs were also shown to be maternally methylated imprinted regions (i.e., Peg10, Mest, Peg3, Snrpn, Kcnq1, Slc38a4, and Impact), previous studies have shown that increasing expression of Dnmts and establishment of maternal methylation imprints occurred during oocyte growth at postnatal stages (Lucifero et al. 2004;Hiura et al. 2006). Of the maternal ICRs, Peg10, Peg3, and Impact ICRs were identified as fgPMRs at E13.5. These regions may be partially resistant to global demethylation, similar to some retrotransposons. Previously, a maternalzygotic effect gene, Zfp57, was shown to be required not only for the establishment of DNA methylation in the female germline, but also for methylation re-acquisition, specifically at the maternally derived allele in the Snrpn imprinted region (Li et al. 2008), indicating that the parental alleles were not equivalent and retained Figure 6. DNA methylome changes during gametogenesis and embryogenesis. Mouse PGCs emerge from precursor cells in the proximal epiblast at E7.25. They proliferate and migrate toward the genital ridge. Then, DNA methylation is globally decreased in both males (blue line) and females (red line) with erasure of methylation marks of imprinted genes, X-linked genes (only in females), and some germline-specific genes (see Fig. 4) through TETcatalyzed oxidation (Hackett et al. 2013). During this migration, whole-genome CpG methylation levels are relatively higher in males than in females (see Fig. 2). Following gonadal sex determination, new DNA methylation patterns are established in each germ cell in a sex-specific manner. In the male embryo, de novo CpG and non-CpG methylation occurs in mitotically arrested gonocytes (see Fig. 5). 
Establishment of the paternal methylation imprints (e.g., H19) is completed before birth (meiosis), and these imprints are maintained during subsequent spermatogenesis and throughout meiosis; however, the presence of non-CpG methylation is rarely observed in the mature spermatozoon. In the female embryo, PGCs enter meiosis as primary oocytes and arrest in the prophase of the first meiotic division; the oocyte genome remains globally hypomethylated, but parts of maternal ICRs (e.g., Peg10, Mest, Peg3, Snrpn) exhibit partial methylation (see Fig. 4). DNA methylation marks are established after birth during the growth phase of the oocyte. At puberty, fully grown oocytes are still arrested at meiotic prophase, a stage known as the germinal vesicle (GV) stage. The GV oocyte genome exhibits global hypermethylation at transcribed regions, but the whole-genome CpG methylation level of oocytes is less than half that of spermatozoa (Kobayashi et al. 2012). When GV oocytes resume the first meiotic division, they undergo GV breakdown, extrude a first polar body, and develop to metaphase of the second meiotic division (MII). MII oocytes complete meiosis only with fertilization. In the zygote, striking asymmetric DNA demethylation between the two parental genomes is observed within the zygote's cytoplasm. The paternal genome is actively demethylated before the first mitotic division through the involvement of TET protein-mediated 5-methylcytosine oxidation (conversion to 5-hydroxymethylcytosine) Wossidlo et al. 2011). The maternal genome resists hydroxylation (Nakamura et al. 2012) and instead undergoes passive DNA replication-dependent demethylation. Multiple maternal and zygotic DNA-binding factors specifically recognize ICRs and protect them from these post-fertilization demethylation events (Nakamura et al. 2007;Li et al. 2008;Messerschmidt et al. 2012), resulting in epigenetic allelic asymmetries that affect associated imprinted genes. Following blastocyst implantation, the embryo undergoes a wave of de novo methylation (black line) that establishes a new DNA methylation landscape, and this process is associated with cellular differentiation. Sex-specific methylation signatures in mouse PGCs Genome Research 623 www.genome.org their identity in the absence of Snrpn methylation. A type of DNAindependent epigenetic memory may exist in these regions to permit maternal methylation, even after global demethylation in female PGCs. Meanwhile, complete establishment of germline methylation at imprinted Mest DMR (also known as Peg1 DMR) is slower than the other maternally methylated imprinted loci (Lucifero et al. 2004;Hiura et al. 2006). Further DNA methylome analysis in growing oocytes will help us to understand a detailed mechanism that determines the timing of maternal methylation imprinting. In this study, we identified more than 200 fgPMRs in E10.5 PGCs, most of which were located on chromosome X (X-DMRs). This may reflect the CGI methylation of X-linked genes by X-chromosome inactivation in females. These X-DMRs became hypomethylated at E13.5, similar to genomic imprints; this is consistent with re-activation of X-linked genes during PGC development (Sugimoto and Abe 2007). In addition, demethylation during PGC migration may activate some germline-specific genes, including those specifically expressed in PGCs. Thus, our results showed that CGI methylation on X-linked genes, germline-specific genes, and imprinted genes was erased by global DNA demethylation during entry of PGCs into embryonic gonads. 
This idea is also supported by other, more recent reports based on WGSBS or methyl-DNA immunoprecipitation (MeDIP) studies (Seisenberger et al. 2012;Hackett et al. 2013). Conversely, we showed that the global DNA methylation levels were slightly higher in male PGCs than in female PGCs, even before gonadal sex determination at E10.5. This result is consistent with studies demonstrating that male ES cells and PGCs have higher methylation levels than female ES cells and PGCs at E13.5 (Zvetkova et al. 2005;Popp et al. 2010). A potential explanation for the observed sex-based difference was the global reduction in methylation due to the presence of two active X chromosomes in females; however, this might be unlikely because a sex-based difference was observed in all autosomal chromosomes. Thus, the reason for sex-based differences in the global DNA methylation of PGCs is still unknown. Interestingly, recent observations using low-coverage bisulfite sequencing have indicated that the cytidine deaminase AICDA (also known as AID) partially contributed to demethylation in PGCs and could explain sex-specific differences in DNA methylation in E13.5 PGCs (Popp et al. 2010). This also supports the hypothesis that the base excision repair pathway is involved in DNA demethylation during PGC migration and that other pathways exist in the demethylation process, such as ten-eleven translocation (TET) protein-mediated oxidation of 5-methylcytosine (Hajkova et al. 2010). In fact, recent reports revealed that hydroxylation of 5-methylcytosines mediated by TET proteins is involved in active DNA demethylation of the zygotic paternal genome after fertilization and both active and passive demethylation of the PGC genome during expansion and migration (Fig. 6;Supplemental Fig. 14;Gu et al. 2011;Wossidlo et al. 2011;Hackett et al. 2013). TET1 knockout mice were reported to be viable and fertile and exhibit normal gametogenesis (Dawlaty et al. 2011); however, in a more recent report, significant reduction of oocyte numbers and fertility was observed in other TET1 knockout mice (Yamaguchi et al. 2012). Moreover, bisulfite-based techniques to detect 5-hydroxymethylcytosines have been reported by two groups (Booth et al. 2012;Yu et al. 2012). Thus, further investigations using new technologies may elucidate DNA demethylation mechanisms in males and females during PGC formation. Our study of global DNA demethylation revealed that most genomic CpG sites (and CGIs) were hypomethylated at E13.5, but L1 LINE, ERVK LTR, and ERV1 LTR retrotransposons were resistant to demethylation at relatively high CpG densities. In addition, our previous study showed that methylation of LINE/LTR sequences was retained in Dnmt3l-deficient oocytes (Kobayashi et al. 2012). Furthermore, separate studies have reported that the intracisternal A-particle (IAP), a member of the ERVK LTR retrotransposons, generally appears to be resistant to demethylation processes in gametogenesis and embryogenesis (Hajkova et al. 2002;Lane et al. 2003;Seisenberger et al. 2012). Recently, Guibert et al. (2012) revealed that ERV1 (with a higher CpG richness than the bulk of ERV1 sequences) is incompletely demethylated in PGCs. These results suggest the existence of a mechanism for preferentially maintaining cytosine methylation at evolutionarily young and potentially active transposable elements, which may be necessary to prevent the deleterious effects of their activation during epigenetic reprogramming. 
As mentioned above, Peg10, Peg3, and Impact ICRs were also partially protected from PGC demethylation. Furthermore, two recent studies also showed that demethylation rates of Peg10 and Peg3 ICRs were slower than that of the other ICRs or the predicted rates of passive demethylation (in a similar fashion to that of IAP retrotransposons) (Hackett et al. 2013;Kagiwada et al. 2013). While the mechanisms that allow resistance to global demethylation are unclear, it may involve DNA binding factors that specifically protect some sequences from demethylation, as shown with multiple maternal and/or zygotic DNA-binding factors, including DPPA3 (also known as PGC7/STELLA), ZFP57, and TRIM28 (also known as KAP-1), all of which protect ICRs from demethylation in pre-implantation embryos (Supplemental Fig. 14;Nakamura et al. 2007;Li et al. 2008;Messerschmidt et al. 2012). Conversely, X-linked genes may be protected by de novo methylation during spermatogenesis. Thus, it is important to realize that these mechanisms protect DNA specifically via a wave of demethylation or de novo methylation. Following gonadal sex determination, new differential DNA methylation patterns are established during spermatogenesis and oogenesis, resulting in distinct DNA methylation profiles of mature spermatozoon and oocytes. In this study, we identified increasing cytosine methylation at both CpG and non-CpG dinucleotides only in male gonocytes (PGCs at E16.5). In contrast, previous studies based on bisulfite sequencing revealed that non-CpG methylation occurred in fully grown oocytes, but rarely in mature spermatozoa Kobayashi et al. 2012;Smith et al. 2012). It is possible that such non-CpG methylation in gonocytes may be lost during spermatogonial mitotic proliferation after birth; however, the reason remains unknown. In fact, Ichiyanagi et al. (2013) found non-CpG methylation at a subfamily of SINEs in male gonocytes and prospermatogonia at prepubertal stages but not in mature sperm cells. Several murine studies have demonstrated the presence of non-CpG methylation in ES cells and early embryos (Ramsahoye et al. 2000;Haines et al. 2001), but this modification is completely absent in most adult somatic tissues. Recently, non-CpG methylation was observed in ES cells and induced pluripotent cells, with loss of methylation in differentiated cells (Lister et al. 2009); however, how these methylation modifications can be gained and maintained in daughter pluripotent stem cells after mitosis is unknown. Furthermore, it has been suggested that the expression of Dnmts may be responsible for this modification (Ramsahoye et al. 2000;Arand et al. 2012). Up-regulation of Dnmts and de novo methylation in the male germline is initiated in mitotically arrested prospermatogonia before birth and the onset of meiosis; however, de novo methylation in the female germline occurs in postnatal meiotic prophase I oocytes (Lucifero et al. 2004;Sakai et al. 2004). These data indicate that non-CpG methylation may be a result of abundant or sustained Dnmt protein expression, particularly in nondividing cells. Comparisons of experimental data and examined methylation profiles during gametogenesis or stem cell formation (including embryogenesis) may be useful in discovering the underlying mechanisms responsible for each biological process. Illumina sequencing Based on the qPCR quantification, 4 3 10 8 to 10 3 10 8 copies of dsDNA from the PBAT library was sequenced per lane on a HiSeq 2000 (Illumina) as described (Miura et al. 2012). 
Cluster generation and sequencing were performed with ~102-nt SR and PE methods using the TruSeq SR Cluster Kit v3-cBot-HS (Illumina) and the TruSeq SBS Kit v3-HS (Illumina) according to the manufacturer's protocols.
Mapping of reads
Sequenced PBAT reads were processed using the Illumina standard base-calling pipeline (v1.8.0-1.8.2). Before read-mapping, the first 4 bases (or 15 bases of read 2 of PE sequences) and the last 1 base of all SR/PE sequences were trimmed because they derived from random primers and/or were of low quality. The generated sequence tags were mapped onto the mouse genome using the Burrows-Wheeler Aligner (BWA) tool with default parameter settings. Mapping and filtering of PBAT tags were performed by following procedures described in previous studies, with an original customized Perl program (Kobayashi et al. 2012). Briefly, any guanines in all SR tags and PE read 1 (forward) tags were replaced with adenines, and any cytosines in all PE read 2 (forward) tags were replaced with thymines. Next, these tags were aligned to two in silico-converted mouse genome reference sequences (mm9, UCSC Genome Browser, July 2007, Build 37.1) and the lambda DNA sequence (accession no. V00636), with cytosines in the first strand converted to thymines ("Watson" strand) and guanines in the second strand converted to adenines ("Crick" strand). Finally, all tags that were mapped uniquely without any mismatches (>32 nt of perfectly matched tags) to both the "Watson" and "Crick" strands were used for further analyses.
Methylation analysis
The percentage of individual cytosines methylated at all CpG and non-CpG sites covered by at least one, three, or five reads was calculated as 100 × [number of aligned cytosines (methylated cytosines)]/[total number of aligned cytosines and thymines (originally unmethylated cytosines)]. All genomic CpG methylation data are available on our website (http://www.nodai-genome.org/mouse_en.html). Bisulfite conversion rates were calculated from the read C:T ratios in the lambda DNA mapping data. The conversion rates are shown in Supplemental Table 1. Locations of transposable elements in the mouse genome (mm9) were obtained from the UCSC Genome Browser, and the average methylation levels of the whole genome and of each transposable element were recalculated from the ratio of the aligned cytosines and thymines in each sequence. A list of a total of 23,021 CGIs was obtained from a previous report (Illingworth et al. 2010). These computational analyses were performed using a custom Perl script and the R statistical package.
Conventional bisulfite sequencing
Genomic DNA was extracted from FACS-purified PGCs (about 10,000 cells) using the QIAamp DNA Micro Kit (QIAGEN) and treated with sodium bisulfite using the MethylCode Bisulfite Conversion Kit (Invitrogen). The bisulfite-treated DNA was PCR-amplified using a reaction mix containing 0.5 µM of each primer, 200 µM dNTPs, 1× PCR buffer, and 1.25 units of EpiTaq HS DNA polymerase (Takara Bio) in a total volume of 20 µL. The primers were Snrpn DMR (5'-ATTGGTGAGTTAATTTTTTGGA-3' and 5'-ACAAAACTCCTACATCCTAAAA-3') and Mest DMR (5'-TTTTAGATTTTGAGGGTTTTAGGTTG-3' and 5'-AATCCCTTAAAAATCATCTTTCACAC-3'). The following PCR program was used for the Snrpn and Mest DMRs: 1 min of denaturation at 94°C followed by 35 cycles of 30 sec at 94°C, 30 sec at 60°C, and 30 sec at 72°C, and a final extension for 5 min at 72°C. Subcloning and sequencing analyses were performed as described earlier (Kobayashi et al. 2009).
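The two computational steps described above, base conversion of PBAT tags before alignment and per-cytosine methylation from aligned C/T counts, might be rendered as follows. The study used a custom Perl script and R; this Python sketch is illustrative only, and the function names and depth threshold are assumptions.

```python
# Hedged sketch of the described PBAT processing steps.

def convert_tag_for_mapping(tag, pe_read2=False):
    """G->A replacement for SR/PE read 1 tags, C->T for PE read 2 tags,
    matching the in silico-converted reference strands."""
    tag = tag.upper()
    return tag.replace("C", "T") if pe_read2 else tag.replace("G", "A")

def methylation_percent(aligned_c, aligned_t, min_depth=5):
    """100 * aligned C (methylated) / (aligned C + aligned T (unmethylated)),
    reported only for cytosines covered by at least `min_depth` reads."""
    depth = aligned_c + aligned_t
    return None if depth < min_depth else 100.0 * aligned_c / depth
```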
Data access
All WGSBS data in this study have been deposited in the DNA Data Bank of Japan (DDBJ) Sequence Read Archive (http://trace.ddbj.nig.ac.jp/dra/index_e.shtml) under accession number DRA000607.
2018-04-03T05:25:26.725Z
2013-04-01T00:00:00.000
{ "year": 2013, "sha1": "4dfc484c212918cb4618122278f2eed4148cfe1a", "oa_license": "CCBYNC", "oa_url": "http://genome.cshlp.org/content/23/4/616.full.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "1ab9dbc6956a08e00f80ab74c17f9251575b39e8", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
231785348
pes2o/s2orc
v3-fos-license
INTERCOMPARISON BETWEEN DEADWEIGHT MACHINES IN CHINA AND UK UP TO 1 MN
This paper describes a tripartite comparison at forces of 250 kN, 500 kN, and 1 MN between deadweight force standard machines located at FJIM (China), NIMTT (China), and NPL (UK). Two different transfer standards were used, and the results demonstrated good agreement between the three machines, with E_n values of significantly lower magnitude than one.
INTRODUCTION
In order to investigate the agreement between the large force standards established by China and the United Kingdom (UK), an intercomparison was carried out from September to November 2019. The new 2 MN deadweight machine (DWM) established in 2018 by FJIM has a stated uncertainty of 0.002 % (k = 3) and is shown in Figure 1. In order to give support to the uncertainty claims, it was agreed that an international comparison with other machines of suitable capacity and uncertainty should be performed. NIMTT and NPL, the laboratories responsible for maintaining China's and the UK's national force standards at the 1 MN level, agreed to participate in such a comparison, using machines of the following specifications: The three test machines are shown in Figure 2. The transfer standards used were a 500 kN TOP Z4A load cell and a 1 MN C18 load cell (Class 00 [1]), both manufactured by HBM. Different HBM indicators (DMP40 and DMP41) were used at the three laboratories, with a single HBM BN100A bridge calibration unit employed to correct the deflections to nominal mV/V values.
COMPARISON METHOD
The comparison method was based on that used for CIPM force key comparisons [2], but with a single rotation of the transducer (from 0° to 360°) rather than two rotations (from 0° to 720°). The tests were first carried out by FJIM (A1), then by NIMTT (B1), then by NPL (B2), and finally again by FJIM (A2), in a single loop comparison. The start of the loading cycle is shown in Figure 3. In order to minimise the effects of creep, each test was carried out in accordance with a strictly timed loading profile, including the three preloads that were always performed at the start of each test and a preload after each rotation of the transducer in the machine. One value of deflection was obtained at each of six orientations, and the mean deflection was calculated as the mean of these six deflections. Before the comparison, the effect of temperature on transducer sensitivity was determined. The reference temperature of the comparison is 20 ºC, and each transducer was calibrated at temperatures of 15 ºC, 20 ºC, and 25 ºC to obtain its temperature sensitivity characteristics. The period between the A1 and A2 tests was about two months, and the transducer drift was calculated to the day, assuming a linear trend. These two factors were used in the normalization of the test results to make them comparable.
Measured Values
The measurement results are summarised in Table 1 and Table 2. In the tables, all effects resulting from indicator sensitivity, temperature, and transducer drift have been corrected for.
Uncertainty Analysis
For the calculation of the E_n value, the correct uncertainty contributions obtained at the three laboratories must be included in the analysis [3]. The E_n value for a pair of laboratories A and B is defined by equation (1),

$E_n = \frac{\bar{x}_A - \bar{x}_B}{\sqrt{U(\bar{x}_A)^2 + U(\bar{x}_B)^2}}$   (1)

where $\bar{x}_i$ is the mean deflection at a given laboratory and $U(\bar{x}_i)$ is its expanded uncertainty (k = 2).
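A minimal sketch of the pairwise E_n computation implied by equation (1) is given below; the variable names are illustrative.

```python
# E_n between two laboratories' mean deflections; |E_n| < 1 indicates
# agreement within the claimed expanded (k = 2) uncertainties.
from math import sqrt

def en_value(mean_a, U_a, mean_b, U_b):
    return (mean_a - mean_b) / sqrt(U_a**2 + U_b**2)
```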
It is therefore imperative that the correct uncertainty contributions are included in the analysis -the following sections detail the uncertainty components which have been considered in this exercise and explain how each has been dealt with in the analysis. The uncertainty components that have been considered in this exercise include: the applied force; transducer drift; instrumentation; repeatability; reproducibility; resolution; and temperature. Uncertainty of Applied Force This component is simply the claimed expanded uncertainty of the force generated by the machine, divided by the relevant value of k. Drift of Transducer Sensitivity Because the quality of the comparison is dependent upon the three measurements made during each loop, the stability of each transducer's sensitivity is critical. The A1 and A2 calibrations of the transducers at FJIM provide results of the drift in the transducer sensitivity throughout the exercise, and this drift is contributed into the FJIM uncertainty value as a component. Instrumentation It was assumed that the voltage ratios generated by the BN100A remained constant throughout the intercomparison, to enable corrections to be made, but the uncertainty associated with this assumption is included in the budget. Repeatability The repeatability of the transducer is calculated at the 0° orientation in each test. Some of this variation may be due to the repeatability of the calibration force, but some will also be due to the transducer performance. The repeatability is estimated as a rectangular distribution, with a width equal to the spread of deflections. Reproducibility The reproducibility of the transducer is defined as the variation in deflection at the six different orientations in a single test. The reproducibility is incorporated into each laboratory's uncertainty budget as the standard deviation of these six deflections, divided by square root of six (the number of values used to calculate this standard deviation). Resolution The resolution of the indicator (0.000 001 mV/V for both DMP40 and DMP 41 indicators) is incorporated twice into each deflection uncertainty estimation, once for the zero value and once for the value with the force applied. This is equivalent to a single triangular distribution of width 0.000 002 mV/V. Temperature The calibrations at the three laboratories were carried out at different temperatures -from 20.3 ºC to 21.4 ºC at FJIM, 21.0 ºC at NIMTT, and from 20.6 ºC to 20.9 ºC at NPL. The temperature sensitivities of the transducers have been accurately determined by tests, assuming a rectangular distribution and using the measured temperature difference for each set of tests as the half-width of the distribution. Comparison Results The associated values of tests are plotted in Figure 4, while Figure 5 shows the relative deviations between FJIM and the other two laboratories and their relative expanded uncertainties. Figure 4 shows that, of the eight points at which the two pairs of machines were compared, all of the resulting values are significantly smaller than one, giving confidence in the claimed uncertainties of the machines at these values. Figure 5 shows that the relative deviations of the two pairs of machines is less than 3 × 10 -5 , with an expanded uncertainty of less than 5 × 10 -5 at a confidence level of approximately 95 % (k = 2). It also demonstrates that the agreement between NPL and NIMTT is consistently within 1 × 10 -5 . 
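Before the summary, a hedged sketch of how the components listed above might be combined into a single standard uncertainty is shown below; the divisors follow the stated distributions (rectangular half-width/√3, triangular half-width/√6, standard deviation/√n), but the exact bookkeeping in the comparison report may differ.

```python
# Hedged sketch: combining the listed uncertainty components in quadrature.
from math import sqrt

def combined_standard_uncertainty(u_force, drift_half_width, u_instrumentation,
                                  repeat_half_width, s_reproducibility,
                                  resolution, temp_effect_half_width):
    components = [
        u_force,                          # expanded machine uncertainty / k
        drift_half_width / sqrt(3),       # A1-A2 sensitivity drift, rectangular
        u_instrumentation,                # BN100A voltage-ratio stability
        repeat_half_width / sqrt(3),      # spread of 0-degree deflections, rectangular
        s_reproducibility / sqrt(6),      # std. dev. of six orientation deflections
        resolution / sqrt(6),             # indicator resolution, triangular
        temp_effect_half_width / sqrt(3), # temperature difference x sensitivity
    ]
    return sqrt(sum(c * c for c in components))
```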
SUMMARY
The results of a tripartite comparison of force standards between FJIM, NIMTT, and NPL have been detailed, and they provide evidence to support the uncertainty claims of the three laboratories. This comparison is valuable for verifying the consistency of force measurement between China and the UK, for enhancing the confidence of those who depend on these standards in their respective countries, and for providing useful guidance for future research.
2021-02-03T03:02:52.769Z
2020-12-31T00:00:00.000
{ "year": 2020, "sha1": "09a41a29b5579a2eadf9f14bb5ce574ba602070f", "oa_license": null, "oa_url": "https://acta.imeko.org/index.php/acta-imeko/article/download/IMEKO-ACTA-09%20(2020)-05-23/1511", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2320fd5a917b71e147da09bf2698b23ea2f55428", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Environmental Science" ] }
73878404
pes2o/s2orc
v3-fos-license
Teratogenic Effects of Crude Ethanolic Root Bark and Leaf Extracts of Rauwolfia vomitoria ( Apocynaceae ) on the Femur of Albino Wistar Rat Fetuses 1 Department of Anatomy, Faculty of Basic Medical Sciences, University of Calabar, PMB 1115 Calabar, Nigeria 2 Department of Zoology and Environmental Biology, Faculty of Natural and Applied Sciences, University of Calabar, PMB 1115 Calabar, Nigeria 3 Department of Anatomy, Faculty of Basic Medical Sciences, University of Uyo, PMB 1017 Uyo, Nigeria 4Department of Anatomy, Faculty of Basic Medical Sciences, Anambra State University, Uli, Nigeria Introduction Herbalism has become the main stream throughout the world.This is due in part to the recognition of the value of traditional medical systems, particularly of the Asian origin, and the identification of medicinal plants from indigenous pharmacopeias shown to have significant healing power, either in their natural state or as the source of new pharmaceuticals.Generally, these formulations are considered moderate in efficacy and thus less toxic than most synthetic pharmaceutical agents [1]. One of such herbs in use is Rauwolfia vomitoria belonging to the family Apocynaceae.Reports show that this herb lowers blood pressure [2] and possesses analgesic, haematinic, and anticonvulsant properties [3][4][5].Antioxidant and antipsychotic properties have also been reported [6,7], with another study reporting improvement of immunity [8].Rauwolfia is reported to contain indole alkaloids which includes yohimbine, reserpine, rescinnamine, raucaffricine, ajmaline, and ajmalicine [9].These may be responsible for the different properties exhibited by the plant. Common herbs rich in essential oils (sage, rosemary, and thyme) and essential oils extracted from these herbs and other plants (oils of sage, rosemary, juniper, pine, dwarf pine, turpentine, and eucalyptus) as well as their monoterpene components (thujone, eucalyptol, camphor, borneol, thymol, -pinene, -pinene, bornyl acetate, and menthol) were found to inhibit bone resorption when added to the food of rats.Pine oil, used as a representative essential oil, protects an osteoporosis model, the aged ovariectomized rat, from bone loss, while the monoterpenes borneol, thymol, and camphor are directly inhibitory in the osteoclast resorption pit assay [10]. Bones are rigid organs that form part of the endoskeleton of vertebrates.They are dense connective tissues that function to move, support, and protect the various organs of the body, produce red and white blood cells, and store minerals.Because bones come in variety of shapes and have complex internal and external structures, they are lightweight, yet strong and hard, in addition to fulfilling their many other functions.One of the types of tissues that makes up bone is the mineralized osseous tissues, also called bone tissue, that gives it rigidity and a honeycomb-like three-dimensional internal structure.Other types of tissues found in bones include marrow, endosteum and periosteum, nerves, blood vessels, and cartilage.There are 206 bones in the adult human body and 270 in an infant [11,12]. 
Bone contains an extracellular mineralized matrix and a number of different cell types, including osteoblasts, osteocytes, and osteoclasts.The bone matrix consists of a ground substance in which numerous collagen fibers are embedded, usually ordered in bone in parallel arrays.In mature bone, the matrix is moderately hydrated, and 10%-20% of its mass is water.Of its dry weight, 60%-70% is made up of inorganic, mineral salts (mainly microcrystalline, calcium and phosphate hydroxides, hydroxyapatite), 30%-40% is collagen and the remainder (c.5%) is noncollagenous protein and carbohydrate, mainly conjugated as glycoproteins.The proportions of these various components vary with age, location, and metabolic status [11]. Rauwolfia vomitoria has been used for centuries in India and Africa for the treatment of a variety of disorders including snake bites, insect bites and stings, insomnia, and maniac tendency, but the safety and efficacy of its use in pregnancy have not been established.Thus, this study was carried out to investigate the teratogenic effect of crude ethanolic root bark and leaf extracts of Rauwolfia vomitoria on the histology of developing femur bone to assess skeletal development. Materials and Methods Twenty-five adult female Wistar rats were bred in the Animal House of the Department of Human Anatomy, University of Calabar, after ethical approval was obtained from the Ethical Board of the University of Calabar.The animals were treated according to the International Guidelines for the Treatment of Animals and fed with normal rat chow, and water was provided ad libitum throughout the duration of the experiment.The rats were kept under standard room temperature of 25-27 ∘ C. The animals were divided into five groups designated as A, B, C, D, and E each consisting of five rats.Group A animals were the control and groups B, C, D, and E were the experimental animals. Preparation of the Herb Extract. The roots and leaves of Rauwolfia vomitoria tree were collected from Ekpene Obo, Esit Eket Local Government Area, Akwa Ibom State, Nigeria, and were identified and authenticated by the botanist in the Botanical Garden of the University of Calabar, Nigeria.The roots and the leaves were washed with water to remove the impurities.The root barks were defoliated and dried in carbolite moisture extraction drying oven (Grant Instruments, Cambridge, UK) at 40 ∘ C-50 ∘ C, and the leaves were defoliated and dried for 3 hours.The dried root bark and leaf were blended into powder using a Binatone kitchen blender and kept in glass containers with plastic covers.The extraction method involved cold ethanolic extraction, where a known weight of the blended sample was soaked in ethanol for 24 hours, and then the extract was filtered and evaporated to dryness at room temperature to obtain the crude extract. Experimental Protocol. 
The twenty-five virgin female Wistar rats were caged with sexually mature male rats of the same strain overnight after ascertaining the estrous phase of the estrous cycle. The presence of tailed structures in the vaginal smear the following morning confirmed coitus, and the sperm-positive day was designated as day zero of pregnancy. Oral doses of 150 mg/kg of the root bark and leaf extracts of Rauwolfia vomitoria were administered to pregnant rats in Groups B and C, respectively, while Groups D and E were treated with 250 mg/kg body weight of the ethanolic root bark and leaf extracts, respectively. The treatments spanned from the 7th through the 11th day of gestation and were given with the aid of an orogastric tube. The control, Group A, animals received corresponding volumes of distilled water on the corresponding days of gestation. The pregnancy was terminated on the 20th day of gestation by the chloroform inhalation method, and the fetuses were collected by uterectomy. The fetuses were blotted dry and examined for gross malformations. Fetuses were weighed with a Libror EB-330H sensitive balance. The fetal femurs were extracted with the aid of a sharp needle and a forceps. The femurs were then fixed in buffered formalin and decalcified using 4% formic acid. Complete decalcification was tested by using ammonium oxalate solution, which, when dropped into the decalcifying agent, turns cloudy if decalcification is not complete. Routine histological processing using the Haematoxylin and Eosin staining method was carried out. The stained tissues on slides were mounted on a light microscope at ×400 magnification and were quantified to determine the population of osteoblasts and osteocytes within the bone matrix using ImageJ. The slides were further processed into micrographs, and the photomicrographs were obtained at ×400 magnification.
Results
Histological study of the fetal femur bone using the Haematoxylin and Eosin staining method showed that, in Group A (control), the bone matrix contained numerous osteoblasts (Figure 1). Groups B and C, whose mothers received 150 mg/kg of the root bark and leaf extracts of Rauwolfia vomitoria, respectively, showed numerous osteoblasts and osteocytes in the histological sections (Figures 2 and 3). In Group D, which received 250 mg/kg of the root bark extract of Rauwolfia vomitoria, there was an increase in the bone matrix with scanty osteoblasts when compared with the control (Figure 4). Hypertrophy and hyperplasia of bone cells, osteoblasts, and osteocytes were seen in Group E animals, whose mothers received 250 mg/kg of the leaf extract of Rauwolfia vomitoria (Figure 5). The osteoblast population within the bone matrix of Group A (control) sections was significantly (p < 0.05) lower than those of Groups B and C treated with 150 mg/kg of the root bark and leaf extracts of Rauwolfia vomitoria. The control group osteoblasts were also significantly (p < 0.05) lower than those of Group E treated with 250 mg/kg of the leaf extract of Rauwolfia vomitoria. No difference was, however, observed with Group D treated with 250 mg/kg of the root bark extract of Rauwolfia vomitoria. Group C, treated with 150 mg/kg of the leaf extract of Rauwolfia vomitoria, was significantly (p < 0.05) higher than Groups B, D, and E, while Group D was significantly (p < 0.05) lower than Group E (Table 1).
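The excerpt above reports significance at p < 0.05, but the text shown here does not name the statistical test used for the cell-count comparisons; purely as an illustration, such group comparisons could be run along the following lines, with placeholder counts and an assumed one-way ANOVA followed by a pairwise test.

```python
# Illustrative only: hypothetical per-section osteoblast counts by group.
from scipy.stats import f_oneway, ttest_ind

group_a = [12, 14, 11, 13, 12]   # control
group_c = [21, 24, 22, 20, 23]   # 150 mg/kg leaf extract
group_d = [10, 11, 12, 10, 11]   # 250 mg/kg root bark extract

f_stat, p_overall = f_oneway(group_a, group_c, group_d)
t_stat, p_a_vs_c = ttest_ind(group_a, group_c)
print(p_overall < 0.05, p_a_vs_c < 0.05)
```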
The osteocyte population within the bone matrix of Group A (control) sections was significantly (p < 0.05) higher than in Groups B, D, and E, which were treated, respectively, with 150 mg/kg and 250 mg/kg of the root bark extract and 250 mg/kg of the leaf extract of Rauwolfia vomitoria. It was, however, significantly (p < 0.05) lower than in Group C, treated with 150 mg/kg of the leaf extract of Rauwolfia vomitoria. Group C, treated with 150 mg/kg of the leaf extract of Rauwolfia vomitoria, was significantly (p < 0.05) higher than Groups B, D, and E, while Group D was significantly (p < 0.05) lower than Group E (Table 1).
Discussion
Bone is a specialized connective tissue composed of intercellular calcified material, the bone matrix, and three cell types: osteocytes, which are found in the cavities (lacunae) within the matrix; osteoblasts, which synthesise the organic components of the matrix; and osteoclasts, which are multinucleated giant cells involved in the resorption and remodeling of bone tissue [12]. Hypertrophy and hyperplasia of osteoblasts were seen in Groups B, C, and E animals, whose mothers received, respectively, 150 mg/kg of the root bark extract and 150 mg/kg and 250 mg/kg of the leaf extract of Rauwolfia vomitoria. These findings correlate with the stereological quantification of the osteoblast and osteocyte populations. The results indicate that Rauwolfia vomitoria or its constituents crossed the placental barrier. The observed changes may reflect advanced skeletal development. The root bark and leaf extracts of Rauwolfia vomitoria may have crossed the placental barrier to affect the skeletal programming of femur bone development at the cellular level, thus exerting a direct effect on skeletal cells. The fetal bones of the animals whose mothers received 250 mg/kg of the root bark extract were severely affected, as the bone tissues were characterized by a matrix with few osteoblasts, which may result in delayed ossification. IL-1, a bone-regulating cytokine, has been shown to exert a direct effect on skeletal cells [13,14], and may affect cells such as chondrocytes, osteoblasts, and osteoclasts. The Rauwolfia vomitoria extracts may stimulate this cytokine, resulting in the findings reported in this study. Oreffo et al. [15] reported that IL-1 affects the offspring indirectly via the placenta. IL-1 exposure also results in decreased skeletal growth and a reduced amount of cortical bone in the adult offspring of rats [16]. The notion that skeletal growth may be programmed during intrauterine or early postnatal life is supported by a study by Cooper et al. [17] demonstrating a relationship between adult bone mass and weight at 1 year of age. In a related study, Snow and Keiver [18] reported that prenatal ethanol exposure affects the resting zone of the developing bone, indicating that earlier stages of bone development may also be disrupted. The decrease in the length of the resting zone and the enlarged hypertrophic zone were consistent with an effect of ethanol on the later stages of bone development. In conclusion, this study suggests that the high dose of the root bark extract of Rauwolfia vomitoria, rather than the leaf extract, had an adverse effect on the developing femur. Hence, the root bark extract may lead to skeletal retardation, while the leaf extract may advance skeletal development.
Figure 2: Photomicrograph of the fetal femur bone of Group B treated with 150 mg/kg of root bark extract showing the matrix (M) with enlarged osteoblasts (OS) and osteocytes (OST) (H & E ×400).
Figure 3: Photomicrograph of the fetal femur bone of Group C treated with 150 mg/kg of leaf extract showing the matrix (M) with numerous osteoblasts (OS) and osteocytes (OST) (H & E ×400).
Figure 4: Photomicrograph of the fetal femur bone of Group D treated with 250 mg/kg of root bark extract showing scanty osteoblasts (OS) within the bone matrix (H & E ×400).
Figure 5: Photomicrograph of the fetal femur bone of Group E treated with 250 mg/kg of leaf extract showing numerous osteoblasts (OS) and osteocytes (OST) (H & E ×400).
Table 1: Bone matrix population of osteoblasts and osteocytes.
2019-03-12T13:02:50.928Z
2013-07-11T00:00:00.000
{ "year": 2013, "sha1": "0b0cca59f1e9c197b919323128a49360a15592f7", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/archive/2013/363857.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a7c6155b3a23b9f4087016bb8cfa3480a40b2567", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
265089414
pes2o/s2orc
v3-fos-license
Time to Look for Ergonomically Viable Designs of Radiation Protection Aprons and Thyroid Shields in Orthopedic Surgery: A Survey of 416 Orthopedic Surgeons Introduction The advent of minimally invasive surgery has increased the use of C-arm among orthopedic surgeons. Their views on the ergonomicity of radiation protection aprons and thyroid shields need elucidation. To investigate, we deliberated a question-based survey. The primary aim of the survey was to find out the percentage of those not using these devices, the prevalence of back pain, and its relationship with the type of radiation protection aprons. Materials and methods This was a cross-sectional survey. A five-section Google Forms survey (Google, Inc., Mountain View, CA) was filled out, and responses from 416 orthopedic surgeons were included. Analysis was carried out using Statistical Package for the Social Sciences (SPSS) version 14.0 (SPSS Inc., Chicago, IL). Results Of the total number of orthopedic surgeons, 36.8% felt that apart from radiation exposure, wearing a radiation protection apron was the biggest problem in C-arm usage. Furthermore, 20.4% wore thyroid shields the majority of the time. The 31-40 years age group was the most comfortable wearing these devices, wore them more often, and suffered more often from back pain (all p<0.01). Conclusion The study concluded that the majority of orthopedic surgeons were not comfortable with the current designs of radiation protection aprons and thyroid shields. Thyroid shields are worn less than aprons. Lead apron weight and thyroid shield ergonomicity were the number one reason for being bare-bodied. Among those who regularly wore aprons, a large proportion suffered from back pain. Introduction In the last few decades, there have been leaps of improvement in surgical techniques and implants in the field of orthopedic surgery.Surgeons' constant drive for perfection led to numerous innovations.C-arm is one such technological advancement and is no longer a luxury but a basic necessity for bone and joint surgery.However, the radiation exposure involved in its usage is the cost that surgeons have to pay [1]. 
The current radiation protection equipment, the aprons and the thyroid shields, have been designed to shield surgeons from secondary radiation [1,2].Conventionally, these aprons were lead-based, but due to their weight, lead-free aprons have also been introduced [3,4].Despite different designs, there have been numerous reports of back pain and increased fatigability among interventionists using these aprons, and a lack of education is evident regarding their use [5].However, no large survey has been reported from orthopedic surgeons, who are conventionally treated as a low-risk group, as they use these aprons only for a limited time [2].The dawn of minimally invasive surgeries has amplified C-arm usage among orthopedic surgeons, and the literature is deficient in views of the orthopedic community on the ergonomicity of the design of radiation protection aprons and shields, ease of usage, back pain prevalence, and its relationship with the usage of radiation protection apron.To investigate the above deficiencies in the literature, we planned to do a question-based survey.The primary aim of the survey was to find out the number of surgeons not using radiation protection equipment, the reasons for not using it, the prevalence of back pain among orthopedic surgeons who use lead aprons, and its relationship with radiation protection aprons and shields.This survey would be able to clarify the reasons why surgeons avoid using aprons and thyroid shields and would help companies modify designs according to surgeons' comfort.Furthermore, the survey has also identified the characteristics of the "at-risk" population that avoid wearing these protective equipment, on whom hospital administration/radiation safety officers can focus and impart necessary knowledge to increase the use of radiation protection aprons and shields. Materials And Methods This was a cross-sectional survey study.At our institute, surveys that do not involve patients do not need ethical committee approval.A five-section Google Forms survey (https://docs.google.com/spreadsheets/d/16rjB8dkBOSbSXj53hjJQckI8h7S-e4VAiEgwDLRAWQU/edit)(Google, Inc., Mountain View, CA) was sent using e-mail and WhatsApp to over 700 orthopedic surgeons whose contact details were obtained from the national orthopedic registry.A waiting period of 11 days was allowed for all respondents to reply, following which the survey was closed.All surgeons who were actively practicing orthopedics and gave their consent to use their views for research purposes were included in the study.All those surgeons who did not give consent or submitted forms that were not complete were excluded from the study.A total of 441 responses were then studied. 
The survey had five sections.The questions were finalized after active discussion among stakeholders.The first section asked if the respondent was an orthopedic surgeon.In the case of a non-orthopedic surgeon, the survey was closed.The second section collected basic information about the surgeon, such as age, gender, country of origin, and involvement in C-arm surgeries.The third section aimed at quantifying C-arm usage and the number of hours for which the surgeon wears the radiation protection apron and shield.The fourth section was based on questions that would analyze the problems that the surgeons face while wearing these devices.The final section dealt with the problem of back pain and the number of days missed due to back pain.Available replies were tabulated in an Excel sheet (Microsoft Corp., Redmond, WA), which was then statistically analyzed. Statistical analysis Statistical analysis was performed using Statistical Package for the Social Sciences (SPSS) version 14.0 (SPSS Inc., Chicago, IL).All quantitative data were expressed as mean ± standard deviation.The chi-square test, Fisher's exact test, and one-way analysis of variance (ANOVA) were used to correlate between study variables such as the presence of back pain and the area of the practice with age, number of surgeries, and days missed due to back/neck pain.We also looked into factors for comfortable design.A p-value of less than 0.05 was considered statistically significant.All aspects of the statistical analysis were reviewed by a statistician.The goal of our analysis was to identify statistically significant differences between surgeons who wore radiation protection equipment, found it comfortable, and did not experience back pain and those who did not wear it, found it uncomfortable, and did experience back pain.Age, practice region, the number of C-arm surgeries each week, the number of hours per week spent wearing an apron, and the use of thyroid shields were used as criteria for grouping the data.These groups were compared based on characteristics such as how comfortable an apron and thyroid shield are to wear, whether surgeons wear aprons and thyroid shields, whether they experience back and neck pain, and how many days they missed work owing to these conditions. To determine the most comfortable and the most faulty designs, we further evaluated the data using the chisquare test.The subgroup that never used this equipment received extra attention, and statistical tests of significance were employed to identify the traits unique to this group. Results A total of 441 responses were received for our Google Forms survey, of which 416 (95.2%) were from orthopedic surgeons.Of the orthopedic surgeons, 100% used C-arm in their practice.Of these, 153 (36.8%) felt that apart from radiation exposure, wearing a lead apron was the biggest problem in C-arm use.A total of 379 (91.1%) surgeons wore aprons the majority of the time.The biggest problem reported by surgeons for apron use was the heaviness of lead aprons reported by 243 (58.4%) surgeons, followed by a decrease in movements during surgery reported by 58 (13.9%) surgeons.Other problems reported by surgeons were that aprons were not fitting (11%).Only 40 (9.6%) surgeons reported that they were satisfied with the current design (Figure 1). 
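As an illustration of the cross-tabulation analysis described above (the authors used SPSS), the sketch below runs a chi-square test on a hypothetical age-group-by-back-pain table; the counts are placeholders, not survey data.

```python
# Illustrative chi-square test of independence on a cross-tabulation.
import numpy as np
from scipy.stats import chi2_contingency

# rows: age groups, columns: [back/neck pain, no pain] -- hypothetical counts
table = np.array([
    [30, 25],   # 26-30 years
    [95, 40],   # 31-40 years
    [60, 45],   # 41-50 years
    [35, 40],   # >51 years
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")  # p < 0.05 -> association
```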
FIGURE 1: Problems while using C-arm (apart from radiation exposure) In contrast, only 20.4% (85) wore thyroid shields the majority of the time.Of the respondents, 142 (34.1%) found thyroid shields suffocating, and 114 (27.4%) found them to irritate the skin.Furthermore, 22% felt that it did not fit properly, and about 1% felt it affected their communication.Only 32 (7.7%) had no problems with the current design. Forty-two (10.1%) of our respondents often forgot to wear aprons, and 155 (37.3%) always forgot to wear thyroid shields. Another important problem reported with current aprons was that they make surgeons sweaty during surgery, with 326 (78.4%) surgeons complaining about this in our survey.Furthermore, it was interesting to note that surgeons in the government sector complained of sweatiness more often than in the private or mixed sector (p=0.000)(Table 1). Does wearing a radiation protection apron make you sweaty during surgery? No response marked Half the time Less than half the time Majority of the time Never Total What is your area of practice? 1: Cross tabulation between areas of practice and feeling of sweatiness We found that the 31-40 years age group was most comfortable in wearing radiation protection equipment (p<0.01),wore it more often than the other age groups (p<0.01), and suffered more often from back pain (p<0.01)(Figure 2). FIGURE 2: Bar diagram showing the relation of age to comfortability of wearing aprons, whether surgeons wore aprons, and the incidence of back pain The extremes of age, despite having more usage, suffered from less back pain as there were only three responses in the 20-25 years age group, which is not representative of the general population, so this age group was not considered, and although >51 years wore lead apron more often than other groups, they suffered from less back pain as the number of surgeries performed by this age group was less (41.3% performed less than three surgeries per week). The area of practice also influenced the above factors.Private practice surgeons were most comfortable with radiation protection equipment (p<0.01),donned it more frequently (p<0.01), and experienced more back pain (p<0.01)than government practice surgeons (Figure 3). FIGURE 3: Bar diagram showing the relation of the area of practice to comfortability of wearing aprons, whether surgeons wore aprons, and the incidence of back pain Furthermore, surgeons performing 7-10 surgeries per week were most comfortable with wearing radiation protection equipment (p=0.000),wore it more often (p=0.000),and suffered more often from back pain compared to surgeons performing 1-7 surgeries or over 10 surgeries (p=0.000)(Figure 3). 
It is also prudent to state that the above figures indicate a temporal relation between apron usage and back pain, which may point toward the fact that apron wearing is one of the factors that cause back pain in orthopedic surgeons.Another important finding was that the most common complaint of surgeons suffering from back pain was ill-fitting lead aprons rather than lead aprons being heavy (40% versus 34.2%) (p=0.043) ( TABLE 3: Cross tabulation between types of apron design and feeling of comfortability Of our respondents, 63 (15.1%) did not know the type of apron they were using (Table 4 and Figures 4, 5).In our survey, the single-sided coat type design had more frequency of back pain compared to the wrap-around type (p=0.05).We noticed that nine surgeons never wear radiation protection aprons and shields.Over 90% of these were from private practice (p<0.01),did less than two surgeries (p<0.01), and were aged >51 years (p<0.01). Discussion Our survey shows that a very small proportion of orthopedic surgeons are comfortable with the current designs of radiation protection equipment.Furthermore, it reveals that very few surgeons are actually using thyroid shields.The survey is unique in telling the problems of such a large number of surgeons from multiple countries, which can be used as the basis for newer surgeon-friendly designs by radiation protection equipment manufacturing companies.Another important finding of our survey is that it describes two important factors that may be modified by the hospital administration and can help increase the usability of radiation protection equipment, i.e., temperature control and inability to call to mind to wear a thyroid shield.Of our respondents, 78% feel sweaty, and proper maintenance of operation theater (OT) temperature and humidity would make them more comfortable in wearing radiation protection equipment.Also, the majority of surgeons forget to wear thyroid shields, for which hospital administration should take proper steps including pasting of notifications at scrubbing counters and reminding surgeons to wear thyroid shields.Furthermore, manufacturers may incorporate thyroid shields in the apron, which may eliminate the chances of forgetting. Concern with the fitting of aprons being more significantly linked to back pain than heaviness is a point worth mentioning.This factor is easily controllable, and back pain incidence may decrease substantially if hospital administration can provide better fitting aprons and manufacturers may increase modularity in designs.Lastly, our survey identifies high-risk groups of surgeons, those in private practice, doing less than two surgeries, and were more than 51 years old, that are not wearing radiation protection equipment; proper programs and workshops may be implemented by hospital administration targeting these groups and to sensitize them to use radiation protection equipment. Currently, radiation safety has been of significant importance as the number of minimally invasive procedures being performed in orthopedic operation theaters has been increasing [1,2].Surgeons are mainly affected by the scattered radiation.Aprons and thyroid shields have been designed to prevent the ill effects of this radiation [1].However, they have their own problems.The earliest study of musculoskeletal issues of interventionists was reported in 1992 when Moore et al. [6] surveyed 236 radiologists regarding back pain and the use of lead aprons.Ross et al. 
[2] noted that lead apron-wearing cardiologists "reported more neck and back pain, more subsequent time lost from work, and a higher incidence of cervical disc herniations, as well as multiple-level disc disease."The article noted that "interventionalist's disc disease" is a confirmed entity and that it is possibly a consequence of lead apron use.Since then, numerous reports have identified them to be responsible for back and shoulder pain.However, most of these reports are for radiologists, cardiologists, and spine surgeons, and none for orthopedic surgeons, who are believed to wear radiation protection equipment only for shorter spells [2,[6][7][8][9].Ross et al. [2] compared back pain among radiologists and orthopedic surgeons and reported that 82% of orthopedic surgeons wear lead aprons with an average of 2.9 orthopedic surgeries per week.Contrary to these findings, our survey shows that 100% of surgeons use C-arm, 91% wear lead aprons, and 74.4% do more than three surgeries per week.It shows that the time and frequency of apron use has increased among orthopedic surgeons, and they are now more prone to occupational back pain. Our survey showed that 63% of the respondents suffer from back pain or neck pain, which is similar to rates reported for interventional radiologists [7,8].Livingstone et al. [7], in a survey of 91 interventional radiologists, reported that 47% had some sort of back pain, while Morrison et al. [8] reported that out of 640 interventional radiologists who were surveyed, 61% had back and 59% had neck pain.Our data indicates almost the same incidence of back pain among orthopedic surgeons as seen in interventional radiologists. Contrary to Morrison et al. [8] who did not find age to influence back pain in interventional radiologists, we found that the 31-40 years age group is most prone to develop back pain.This may be because Morrison et al. [8] have used only two age groups (less than 50 and more than 50), and we used six different subgroups.Furthermore, this was the age group in our study that most often used lead aprons.Similar to the findings of Ross et al. [2] and Livingstone et al. [7], we also observed that back pain was proportionate to the usage of lead aprons.However, there are other studies in the literature that have shown no difference between the prevalence of back pain and the duration of lead apron use [5,6]. We also found that lead-free and lead vinyl designs were more comfortable compared to lead-based designs.Similar findings were reported by Morrison et al. [8].Contrary to the report by Livingstone et al. [7], who reported that 29% of radiation interventionists were unaware of the type of radiation apron they were using, we found that 15% of the surgeons in our study were unaware of the type of apron they were using.It has been published in the literature that although the users claim to store the lead aprons correctly after use, this has not been the case [5].Correct storage of lead aprons has been reported to be crucial in preventing cracks or holes from developing, which can reduce their integrity and hinder their protective ability [5,10]. The radiation safety officer of the hospital must impart knowledge to orthopedic surgeons in this aspect to further improve these numbers.Education improved the knowledge and understanding of lead apron use [5]. Livingstone et al. 
[7] reported that about 47% of their respondents have some kind of body aches due to wearing single-sided aprons.Similar to them, we also found single-sided aprons to be more often a cause of back pain compared to wrap-around type.We believe that this may be due to the lumbosacral support type belt provided in wrap-around aprons.A study showed adding a belt decreased spine load substantially and was associated with significantly less pain [11,12].Other studies have not shown a significant difference in pain or disability between one-and two-piece garments (e.g., apron versus skirt and vest) [12,13]. Bowman et al. [14], in their nationwide survey of orthopedic residents, observed that forgetting was the number one reason to not wear a lead apron.Our survey showed that only 10.1% forget to wear lead aprons.This may be because most respondents in our survey were older, with more experience than residents, and therefore had a lower tendency to forget.However, surprisingly, over 35% of these respondents forget to wear thyroid shields. Alexandre et al. [15] found that the use of lead aprons increases body temperature by 0.55°-0.95°.Similar to their findings, we also found that 78.8% of the respondents felt sweaty while wearing lead aprons.These two points must be kept in mind by the hospital administration.By providing proper operation theater (OT) temperature and humidity and pasting notification in scrub areas about wearing thyroid shields, we believe compliance to use radiation protection equipment may get better.Furthermore, surgeons in the government sector complained of sweatiness more often than those in the private or mixed sector, so it seems that OT temperature was more of a problem in the government sector, probably highlighting the need to maintain comfortable temperatures in the OT in government-owned hospitals. Our study also observed that surgeons aged >51 years mainly involved in private practice and doing less than two surgeries per week are at the highest risk of not wearing radiation protection equipment, and therefore, measures to educate this subgroup regarding occupational radiation hazards may be beneficial in increasing compliance in using radiation protection aprons. To the best of our knowledge, we did not find any survey of orthopedic surgeons explaining the problems they face while wearing radiation protection equipment.This survey showed that the heaviness of lead aprons and decreased movement during surgery were major causes of uncomfortability among surgeons.Similarly, thyroid shields were found suffocating by most surgeons, while in others, it irritated the skin or beard.These points may be kept in mind by manufacturing companies while designing radiation protection equipment. 
Being the largest survey of orthopedic surgeons described in the literature that has assessed the usage of radiation protection aprons and shields is one of the important strengths of the study.Our study had various limitations as well.Firstly, it being a cross-sectional survey, the level of evidence is low.Secondly, the survey opinion is predominated by males and may not be representative of all orthopedic surgeons.Thirdly, a number of problems are inherent in any epidemiological study of back pain as the symptom is subjective and multifactorial.Therefore, all confounding factors can never be accounted for, and therefore, establishing a causal relationship may be difficult.Lastly, there are also problems associated with studying the use of lead aprons.Not all lead aprons are alike.Variations in weight and structure could significantly alter stresses on the back.Another limitation of our study is the lack of validation of our questionnaire.However, it was devised through extensive research by the author group and was based on a previously published questionnaire by Moore et al. [6].Despite these shortcomings, this survey is one of the first attempts to highlight the problems orthopedic surgeons face while using radiation protection equipment. Conclusions The study assessed the practice of orthopedic surgeons using radiation-protective lead aprons and thyroid shields with the help of a simple questionnaire.The majority of the surgeons responded that they were not comfortable with the current designs.Thyroid shields are worn less than aprons by orthopedic surgeons.Lead apron weight and thyroid shield design ergonomicity were the number one reason for not wearing the radiation protection equipment.Among those who regularly wore aprons, a large proportion suffered from back or neck pain.
Significance estimation for large scale metabolomics annotations by spectral matching The annotation of small molecules in untargeted mass spectrometry relies on the matching of fragment spectra to reference library spectra. While various spectrum-spectrum match scores exist, the field lacks statistical methods for estimating the false discovery rates (FDR) of these annotations. We present empirical Bayes and target-decoy based methods to estimate the false discovery rate (FDR) for 70 public metabolomics data sets. We show that the spectral matching settings need to be adjusted for each project. By adjusting the scoring parameters and thresholds, the number of annotations rose, on average, by +139% (ranging from −92 up to +5705%) when compared with a default parameter set available at GNPS. The FDR estimation methods presented will enable a user to assess the scoring criteria for large scale analysis of mass spectrometry based metabolomics data that has been essential in the advancement of proteomics, transcriptomics, and genomics science. Matching fragment spectra to reference library spectra is an important procedure for annotating small molecules in untargeted mass spectrometry based metabolomics studies. Here, the authors develop strategies to estimate false discovery rates (FDR) by empirical Bayes and target-decoy based methods which enable a user to define the scoring criteria for spectral matching. U ntargeted mass spectrometric (MS) analysis of small molecules is important in our understanding of (bio) chemical processes in the environment, ocean, and individual organisms [1][2][3][4][5][6][7] . In untargeted mass spectrometry experiments, tandem MS (MS/MS) spectra are collected of molecules present in the analytical sample. To annotate these unknowns, the MS/MS spectra are compared against a library of reference MS/ MS spectra [8][9][10][11] . At present, spectrum-spectrum matches of unknown and library spectra are scored but this score alone provides no statement about statistical accuracy of that assignment. Without statistical techniques in place to estimate false discovery rates of identifications, researchers do not have a guide to set appropriate scoring criteria, unlike in proteomics, peptidic small molecule identification, transcriptomics, and genomics where statistical assessment and false discovery calculations for annotations are the norm [12][13][14][15] . In metabolomics, probabilistic assignments of molecular formulas has been done but this does not provide structure identifications which are critical to biological understanding 16,17 . This leads untargeted liquid chromatography tandem mass spectrometry (LC-MS/MS) based metabolomics or any other small molecule based untargeted mass spectrometry analysis to yield identification results where errors rates are uncontrolled. This can lead to a lack of sensitivity or worse: rampant false discoveries and ultimately incorrect interpretations. To compound the challenge, due to advances in instrumentation and the re-emergence of appreciation in the function of small molecules, the scientific community is producing more and more untargeted mass spectrometry data. These LC-MS/MS based experiments are now commonly applied in medicine, life science, agriculture toxicology, exposomics, ocean and forensic research to name a few. Modern instruments generated hundreds to thousands of MS/MS spectra from a single sample,and collectively tens of millions of MS/MS spectra for large scale projects. 
There are also a growing number of MS/MS spectra available in public spectral libraries 8,9,11,18,19 . To most in the scientific community, including mass spectrometrists and metabolomics investigators themselves, it often comes as a surprise that there is no significance estimation in metabolomics annotations yet, like it has been adopted, and thereby advanced, in the fields of proteomics, genomics and transcriptomics. While guidelines and rules have been established by the metabolomics standards initiative 20 , to report annotation of molecules from MS-based metabolomics data 20 , they are not commonly reported in the majority of metabolomics studies. Manual validation at the scale of tens of thousands to millions of spectra library matches is not realistic to do for each large scale experiment, and automated solutions for the annotations that enable downstream analysis such as pathway mapping, xenobiotic metabolism, chemical ecology, and ultimately prioritization for manual validation are needed; but this process starts with annotations 21 . Large scale non-targeted LC-MS/MS experiments result in hundreds to thousands of query spectra from a single chromatographic run. For molecular annotation, these MS/MS spectra are typically searched against a spectral library, which in turn, results in spectral library hits that are sorted by score. Using a decoy spectral library to estimate FDR is common in proteomics; there, the decoy database is often a (pseudo-)reverse peptide database or a shuffled database 12,22,23 . The reason why targetdecoy approaches for FDR estimation have not been applied so far to metabolomics, are the difficulties in generating decoy libraries; small molecules are diverse in structure, and shuffling or reversing a database is not possible. Therefore, alternative strategies needed to be developed for FDR estimation. Here we present and assessed four possible solutions.Ultimately we implemented one method, the re-rooted fragmentation tree, in the MS/MS data analysis platform Global Natural Product Social Molecular Networking (GNPS, http://gnps.ucsd.edu) 9 , to demonstrate that FDR estimations need to be used to guide scoring parameters. To validate the FDR approach and how it performs for spectral annotation with real large scale untargeted mass spectrometry data, we performed FDR controlled spectral library matching with 70 data sets from GNPS, consisting of thousands of LC-MS runs. This revealed that there is no universal scoring criteria that can control the FDR in all data sets. This adaptive approach shows promise to both, increase identifications and curb false positives in large scale metabolomics experiments. The FDR estimation has now been implemented as a tool called passatutto, named after a food mill used to remove unwanted particles commonly used in Italian kitchens. Ultimately, passatutto provides experimentalists with an high-throughput measure of confidence in MS/MS-based annotations by reporting an FDR, to guide the selection of scoring parameters for a project compatible with large scale MS/MS based untargeted metabolomics projects. Results Construction of FDR estimation approaches. Our first method that we assessed uses an empirical Bayes approach 24 whereas the second, third and fourth FDR estimation methods rely on the target-decoy approach, using different decoy databases ( Fig. 1a-d). 
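Query spectra are ranked against the library by a spectrum-spectrum match score; the MassBank and GNPS scores used throughout this work are modified cosine similarities (normalized dot products). As a concrete reference point, a bare-bones cosine score over greedily matched peaks might look like the sketch below. The fixed m/z tolerance, the greedy matching rule, and the example peak lists are simplifying assumptions for illustration, not either library's exact scoring.

```python
import math

def cosine_score(spec_a, spec_b, tol=0.01):
    """Plain cosine (normalized dot product) between two peak lists
    [(mz, intensity), ...], matching peaks greedily within `tol` Da."""
    used_b = set()
    dot = 0.0
    matched = 0
    for mz_a, int_a in spec_a:
        best, best_j = None, None
        for j, (mz_b, int_b) in enumerate(spec_b):
            if j in used_b or abs(mz_a - mz_b) > tol:
                continue
            if best is None or abs(mz_a - mz_b) < best:
                best, best_j = abs(mz_a - mz_b), j
        if best_j is not None:
            used_b.add(best_j)
            dot += int_a * spec_b[best_j][1]
            matched += 1
    norm_a = math.sqrt(sum(i * i for _, i in spec_a))
    norm_b = math.sqrt(sum(i * i for _, i in spec_b))
    if norm_a == 0 or norm_b == 0:
        return 0.0, 0
    return dot / (norm_a * norm_b), matched

# Illustrative peak lists (m/z, relative intensity); not real library spectra.
query = [(105.03, 0.4), (133.05, 1.0), (161.06, 0.7)]
reference = [(105.03, 0.5), (133.06, 1.0), (179.07, 0.2)]
score, n_matched = cosine_score(query, reference)
print(f"cosine = {score:.3f}, matched peaks = {n_matched}")
```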
Although the generation of "random" MS/MS spectra for small molecules is conceptually more challenging than for peptides 25 , it became possible with recent methodological advances 9, 26-28 . To estimate the FDR using a decoy database, three strategies were devised to create the decoy MS/MS library ( Fig. 1b-d), where the first two methods are spectrum-based while the third is fragmentation tree-based 23,24 . To show compatibility with different spectral matching scoring schemes, we present results for the MassBank scoring 11 and the GNPS scoring 9, 29 , both of which utilize modified versions of the cosine similarity (also known as normalized dot product). For the naive decoy spectral library, we use all possible fragment ions from the reference library of spectra and then randomly add these ions to the decoy spectral library, until each decoy spectrum reaches the desired number of fragment ions that mimics the corresponding library spectrum (Fig. 1b). This method is presented as a baseline evaluation of the other, more intricate methods. The second method is similar to the naive method, as we create the decoy spectral library through choosing fragment ions that co-appear in the spectra from the target spectral library (Fig. 1c): In this spectrum-based approach, we start with an empty set of fragment ion candidates. First, the precursor fragment ion of the target spectrum is added to the decoy spectrum. For each fragment ion added to the decoy spectrum, we choose all spectra from the target spectral library which contain this fragment ion, within a mass range of 5 p.p.m. From these spectra, we uniformly draw (all fragment ions have the same probability to be drawn) five fragment ions that are added to the fragment ion candidate set; we use all fragment ions in case there are fewer than five. We draw a fragment ion from the fragment ion candidate set and add it to the decoy spectrum, then proceed as described above until we reach the desired number of fragment ions that mimics the corresponding library spectrum. The two-step process of first drawing candidates, then drawing the actual decoy spectrum was introduced to better mimic fragmentation cascades and dependencies between fragments. Furthermore, it prevents that fragment-rich spectra dominate the process. Out of the five added candidate fragment ions, between zero and five end up in the final decoy spectrum. Fragment ions with mass close (5 p.p.m.) to a previously added In the target-decoy approach, query spectra are searched against a target and decoy spectral library, and FDRs are estimated from the merged and sorted list of spectrum matches. b-d To construct a decoy spectral library, we implemented three methods. b Naive method: randomly adding fragment ions from the reference library to the decoy spectrum. c Spectrum-based method: fragment ions are iteratively added to the decoy spectrum, conditional on fragment ions that have previously been added. d Fragmentation tree-based method: a fragmentation tree is computed from the target spectrum, its root is relocated. New formulas of fragments are calculated according to the losses in the tree. Fragments with invalid formulas are relocated fragment ion mass, or masses above the precursor fragment ion mass are discarded. If the precursor ion is absent from the MS/ MS spectrum, we use the selected ion mass to find matching compound masses. The third solution is a fragmentation treebased approach, where decoy spectra are generated using a rerooted fragmentation tree (Fig. 1d). 
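Before the fragmentation tree-based construction is described in detail below, the two spectrum-level constructions just outlined can be summarised in a minimal sketch. Spectra are treated as dictionaries with a precursor mass and a list of (m/z, intensity) peaks; the 5 p.p.m. tolerance, the five-candidate draw, and the discarding rules follow the text, but the data structures and helper names are illustrative assumptions, not the passatutto implementation, and the candidate pool here is refreshed only from the most recently added ion as a simplification.

```python
import random

PPM = 5e-6  # 5 p.p.m. mass tolerance used throughout

def close(m1, m2, ppm=PPM):
    """True if two masses agree within the relative tolerance."""
    return abs(m1 - m2) <= ppm * max(m1, m2)

def naive_decoy(target, library, rng=random):
    """Naive decoy: draw fragment masses at random from the whole library
    until the decoy has as many peaks as the target spectrum."""
    pool = [mz for spec in library for mz, _ in spec["peaks"]]
    intensities = [inten for _, inten in target["peaks"]]
    decoy = [rng.choice(pool) for _ in target["peaks"]]
    # intensities of the original fragment ions are kept
    return {"precursor": target["precursor"],
            "peaks": sorted(zip(decoy, intensities))}

def spectrum_based_decoy(target, library, rng=random):
    """Spectrum-based decoy: fragment ions are added iteratively,
    conditional on ions already added, to mimic fragmentation cascades."""
    n_peaks = len(target["peaks"])
    intensities = [inten for _, inten in target["peaks"]]
    decoy = [target["precursor"]]          # start from the precursor ion
    candidates, last_added, attempts = [], target["precursor"], 0
    while len(decoy) < n_peaks and attempts < 100 * n_peaks:
        attempts += 1
        # spectra that contain the last added ion (within 5 p.p.m.)
        matching = [s for s in library
                    if any(close(mz, last_added) for mz, _ in s["peaks"])]
        for spec in matching:
            frags = [mz for mz, _ in spec["peaks"]]
            candidates.extend(rng.sample(frags, min(5, len(frags))))
        if not candidates:
            break
        mz = candidates.pop(rng.randrange(len(candidates)))
        # discard ions close to an existing decoy ion or above the precursor
        if mz > target["precursor"] or any(close(mz, d) for d in decoy):
            continue
        decoy.append(mz)
        last_added = mz
    return {"precursor": target["precursor"],
            "peaks": sorted(zip(decoy, intensities[:len(decoy)]))}
```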
From the original fragmentation tree, its structure and all losses are kept, and some new internal node is selected as new root, with the molecular formula of the precursor ion. Molecular formulas of all fragment ions are calculated along the edges of the tree, subtracting losses. In case the tree rearrangement yields chemically impossible molecular formulas (that is, a negative number of atoms for some element), the corresponding loss and its subtree are placed to another branch of the tree (re-grafted), attaching it to a uniformly selected node. The new root node is not drawn uniformly: Instead, a node is chosen as new root with relative probability 1/(n+1), where n is the number of edges that we would have to re-graft. For all three methods, intensities of the original fragment ions are used. Assessing quality against spectral libraries. Assessing the quality of empirical Bayes, and the naive, spectrum-based and fragmentation tree-based target decoy databases was done by p-value estimation, and by testing q-value estimates against exact values using public MS/MS libraries. Evaluation can only be carried out when the true identity of all query compounds is known. To assess quality, we used high resolution reference spectra from the Agilent 30 , MassBank 11 , and GNPS libraries 9 . From GNPS and For searching in the unfiltered target spectral library a-d, p-values are estimated using the empirical Bayes approach. For searching the noise-filtered target spectral library, p-values are estimated using the fragmentation tree-based target-decoy approach e-h. Distributions contain p-values from ten decoy spectral libraries. p-value distribution for both, true and false hits a, e, p-value distribution for true hits only b, f, and for false hits only c, g. By definition, the distribution of p-values for false hits has to be uniform, corresponding to the main diagonal in the p-value quantile-quantile (qq) plots d, h. The qq plots for the other methods are provided as Supplementary Fig. 1. i, j q-value plots for Agilent data (q-value plots for MassBank are provided as Supplementary Fig. 2). Estimated (y-axis) vs. true q-values (x-axis) in the unfiltered i and noise-filtered j version of the GNPS library. The small red line indicates cosine of 0.7. For the fragmentation tree-based method, we searched against the noise-filtered GNPS only, since this approach applies noise-filtering by design. The naive target-decoy approach can be seen as baseline method for comparison. For target-decoy methods, results were averaged over ten decoy spectral libraries ( Supplementary Fig. 4) MassBank, only spectra that had the unfiltered spectrum in the public domain, that had SMILES or InChI structure annotations (line notations for describing chemical structure using short strings) and for which the precursor mass matched to the exact structure-based mass to within 10 p.p.m., were used for the assessment of the FDR estimations. As an initial test, we checked if p-values of false hits (false positive identifications) estimated by our methods are uniformly distributed 31 : The p-value of a spectrum match is the probability to randomly draw a result of this or better quality, under the null hypothesis for which a spectrum has been randomly generated. We observe a mostly uniform distribution of p-values, both for the empirical Bayes approach and the fragmentation tree-based target-decoy approach (Fig. 2a-f), corresponding to a quantile-quantile plot close to the main diagonal ( Fig. 2d, h, Supplementary Fig. 1). 
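The uniformity check just described can be made concrete with a short sketch: a p-value is estimated as the fraction of decoy (null) scores at least as good as the observed score, and calibration is assessed by comparing the empirical quantiles of the p-values of false hits against the diagonal, which is what the quantile-quantile plots in Fig. 2 display. The synthetic score distributions below are placeholders chosen only to make the example runnable.

```python
import numpy as np

def empirical_p_value(score, decoy_scores):
    """P(a random match scores >= the observed score), estimated from decoy hits."""
    decoy_scores = np.asarray(decoy_scores)
    return (decoy_scores >= score).sum() / len(decoy_scores)

rng = np.random.default_rng(0)
decoy_scores = rng.beta(2, 8, size=5000)        # stand-in decoy score distribution
false_hit_scores = rng.beta(2, 8, size=1000)    # false hits should follow the null

p_values = np.sort([empirical_p_value(s, decoy_scores) for s in false_hit_scores])
uniform_quantiles = (np.arange(1, len(p_values) + 1) - 0.5) / len(p_values)

# For a well-calibrated null model the two quantile vectors lie on the diagonal.
max_deviation = np.max(np.abs(p_values - uniform_quantiles))
print(f"max qq deviation from the diagonal: {max_deviation:.3f}")
```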
This agrees with the distribution of p-values under the null hypothesis, and shows that our decoy databases are indeed representative models of the null hypothesis. In the p-value distribution, we observe heightened peaks close to 0 and 1; the heightened peak close to 0 is discussed below, whereas the heightened peak close to 1 is negligible for significance estimation. To evaluate the quality of estimated FDRs, we compared q-values of the four methods presented here with true q-values. In addition, we also assessed the impact of noise filtering on the quality of FDR estimation: Noise-filtering by fragmentation trees is accomplished by calculating a fragmentation tree that annotates some of the hypothetical fragment ions with molecular formulas 32,33 ; only these annotated fragment ions are kept, resulting in a cleaned spectrum that only keeps fragment ions that are well-supported by the fragmentation process. For the unfiltered target spectral library, empirical Bayes approach resulted in good estimates, whereas spectrum-based target decoy did not work as accurately (Fig. 2g, h): the empirical Bayes approach represented a good fit of the bisecting line, while the spectrum-based approach did not. For the noise-filtered target spectral library, the target-decoy methods except the naive method allow for accurate q-value estimates, and perform roughly on par (Fig. 2c). The naive method never results in accurate q-value estimates: Even for true q-values around 0.15, estimates are already close to 0. All methods tend to overestimate significance (estimated q-values are smaller than true q-values); in particular, estimates are close to zero for true q-values below 0.05. For some query compounds, not contained in the target database, there is a structurally similar isomer with similar fragmentation spectrum present in the target database ( Supplementary Fig. 2). These wrong hits will receive relatively high scores and, hence, wrong hits in the target database are more frequent at top positions of the output list than hits in the decoy database, impeding accurate estimation for small q-values. Results in Fig. 2g, h are for Agilent queries; see Supplementary Figure 3 for MassBank queries. To further evaluate the robustness of our estimates, we generate 10 decoy spectral libraries for each decoy method. Because generating decoy spectral libraries is a random process, q-values vary slightly between the 10 decoy spectral libraries; we found these variations to be negligible ( Supplementary Fig. 4). Results in Fig. 2 present searching Q-TOF spectra using the MassBank scoring function. Results for the cosine similarity score were comparable ( Supplementary Fig. 5). Furthermore, using Orbitrap MassBank spectra as queries yielded similar results ( Supplementary Fig. 2). Evaluation of fragmentation tree decoy strategy against public data. We evaluated the fragmentation tree-based decoy FDR estimation method broadly across 70 data sets available on GNPS. We selected a decoy-based FDR estimation, as this does not rely on presupposed underlying (and potentially unrelated) probabilistic models 34 . We also choose the filtered decoy approach as this is compatible with the filtered data in GNPS. Data sets included high resolution Q-TOF or Orbitrap data from 6220 LC-MS runs encompassing human, microbe, plant and marineorganism derived samples. 
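For the benchmark comparisons above, where the identity of every query compound is known, true FDRs and q-values can be computed directly from the ranked hit list. The sketch below assumes each hit carries a score and a correct/incorrect label; it is a generic illustration of the definitions used in the evaluation, not the evaluation code itself.

```python
import numpy as np

def true_q_values(scores, is_correct):
    """True q-values for hits ranked by decreasing score.
    FDR at rank k = (# incorrect hits among the top k) / k;
    the q-value of a hit is the minimal FDR at which it is still reported."""
    order = np.argsort(scores)[::-1]
    incorrect = (~np.asarray(is_correct, dtype=bool))[order]
    ranks = np.arange(1, len(scores) + 1)
    fdr = np.cumsum(incorrect) / ranks
    # running minimum from the bottom of the list gives monotone q-values
    q = np.minimum.accumulate(fdr[::-1])[::-1]
    out = np.empty_like(q)
    out[order] = q
    return out

# Toy example: scores and ground-truth labels for six hits.
scores = np.array([0.95, 0.90, 0.82, 0.80, 0.64, 0.55])
labels = np.array([True, True, False, True, False, False])
print(true_q_values(scores, labels))
```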
To calculate both the 1% FDR and 5% FDR, the total running time for the FDR computation of the spectral library matches associated with all the projects took~48 h on the GNPS cluster, demonstrating the compatibility of the FDR approach with large-scale metabolomics experiments. At 1% FDR, the average gain in annotation for the 70 public data sets in comparison to default scoring cut-off value (cosine score of 0.7 and minimum of 6 ions to match) was 139% with a range of −92 up to 5705% (Fig. 3, Supplementary Figs. 6 and 7). At a score of 0.7, the annotations from continuous identification, as judged by the community via a four-star rating of the identifications, the GNPS community provided feedback that 91% of the annotations are correct, 4% possible isomers or correct, 4% not enough information to tell and 1% is incorrect 9 . When using 5% FDR, a mean gain annotation of 235% was obtained and had a range of −75% up to 6705% gain (Fig. 3). Further, we explore the impact of cosine scoring and the minimum number of fragment ions to match on the number of matches associated with 1% FDR using the fragmentation treebased decoy strategy. Over the 70 public metabolomics projects, the minimum matched peaks were modulated resulting in a cosine threshold ranging from 0.3 to 1 with the number of identifications represented in a histogram (Fig. 4a). The results reveal that the more ions were required to match, the more forgiving the spectral scoring could be. When 8 ions were required to match, the most common score to achieve 1% FDR was found to be between a cosine of 0.50-0.60, while when two ions were required to match, the most common score required was 0.85-0.95. For all the projects that require a cosine of 1 to achieve 1% FDR, not a single annotation was obtained (Fig. 4b). We observed that the most number of annotations was achieved with a minimum of 4 fragment ions matching, with 3 ions and 5 ions as close second and third in terms of the number of spectra that were annotated. As the number of fragment ions required to match the number of matches was increased to 7 and 8 or lowered to 2, the number of total matches decreased significantly. This comparison furthermore revealed that for most of the projects, an FDR of 1% was achieved at cosine of 0.6-0.65 (for 5% FDR, most of the projects dropped to a cosine of 0.5-0.55) and therefore the default cosine values for living data in GNPS are slightly more stringent. Three representative FDR vs. default score threshold plots are provided as Supplementary Fig. 8, and may guide the end user to make a decision on scoring criteria. At 1% FDR, it was observed that parameters of the GNPS living continuous annotation overestimates the number of annotations for 13% of the projects and that it underestimated the annotations in 82% of the projects and the remaining 5% of projects remained unchanged by the introduction of the FDR estimation. Discussion The four methods that were implemented and assessed for significance of annotations for untargeted mass spectrometry are using empirical Bayes approach, which implies a probabilistic model of score distributions, and three different target-decoy approaches. Different from in silico annotation of MS/MS spectra of peptidic small molecules 35 Fig. 4 The impact of number of matching fragment ions in a spectrum and cosine score at 1% FDR. a Frequency of data sets in relationship to number of MMP to match and cosine at 1% FDR estimation. 
b The number of MS/MS matches in relation to minimum matched fragment ions and cosine. An alternative 3D plot of these figures can be found in Supplementary Fig. 9 metabolite structures which are plausible but non-existing in nature, and the prediction of fragmentation spectra from metabolites structures are extremely challenging problems and therefore we did not pursue those routes for significance estimation 35,36 . To avoid this challenge, the decoy libraries are generated via the naive method, a spectrum-based and a fragmentation tree-based method. Using three test reference databases, Agilent 30 , MassBank 11 , and GNPS 9 , with thousands of MS/ MS spectra that have the structures of molecules associated with them, we show that all but the naive methods can estimate false discovery rates 37 (FDR, the proportion of false discoveries among the discoveries) and q-values (the minimal FDR thresholds at which given discoveries should be accepted) with high accuracy. The key considerations that went into the design of the decoy spectral libraries was to ensure that decoy spectra mimic real spectra as closely as possible, but at the same time, do not correspond to MS/MS spectra of any true metabolites present in the sample. This ensures that hits in the decoy database are equally likely as false hits in the spectral library (the target database). In addition, we assured that for any precursor mass range, the same number of target and decoy spectra were found. All methods circumvent generating decoy structures, as it is unsolved problem to generate molecular structures which are sufficiently similar to the structures in the target spectral library, but not present in the sample. Generating decoy MS/MS spectra completely at random, i.e., randomly drawing both masses and intensities of the fragment ions, will not result in an adequate decoy spectral library, as there are ion masses that can be generated but will never be found in a real MS/MS spectrum. Addition of adducts to the spectra that are not encountered would be a solution to create a decoy spectral library, as it was recently done for precursor mass FDR calculations in imaging mass spectrometry annotation 32 ; but these adducts would not look like spectra that we would encounter in an MS/MS spectrum from a biological sample and therefore this solution is not appropriate for the annotation of MS/MS spectra. We further showed that the approach could be applied to different dot product like scoring methods ( Supplementary Fig. 5) and therefore we anticipate that these methods can be used for other commonly used scoring schemes for spectral matching, such as cosine similarity itself 28,38 , scorings based on the number of matching fragment ions and the sum of intensity differences 39 or scorings which incorporate mass differences 40 . It is evident that the methods are versatile but there are some limitations associated with significance estimations of spectral matches, some of these have been well documented for proteomics. All methods tend to overestimate significance (estimated q-values are smaller than true q-values); in particular, estimates are close to zero for true q-values below 0.05. For some query compounds not contained in the target database, there is a structurally similar isomer with similar fragmentation spectrum present in the target database ( Supplementary Fig. 2). 
These wrong hits will receive relatively high scores and, hence, wrong hits in the target database are more frequent at top positions of the output list than hits in the decoy database, impeding accurate estimation for small q-values. This situation is different from shotgun proteomics for a single organism but similar to metaproteomics, where estimation of accurate q-values close to zero is equally challenging 41 . One of the key reasons is that some small molecules, in particular isomeric structures ( Supplementary Fig. 2), have nearly identical MS/MS spectra. Under such circumstances the end user would have to consider all isomers and perform follow up experiments to differentiate among the possibilities. Because of the probabilistic model requirements for Empirical Bayes and the observation the fragmentation tree decoy strategy performed best of all the decoy strategies with filtered data, the type of data that is most readily accessible in GNPS. In addition, empirical Bayes allows us to estimate Posterior Error Probabilities; but the method relies on presupposed probabilistic models, whereas target-decoy approaches make no further modeling assumptions 34 . To avoid making assumptions of a model required by Empirical Bayes and because the fragmentation tree decoy strategy was the most compatible with the data type in GNPS, it was implemented as FDR controlled spectrum matching workflow into GNPS. Passatutto was tested against 70 mass spectrometry projects in the public domain The results show that the same spectrum matching score can contribute to a highly variable FDR and that the FDR can be drastically different for each project. This means that the spectral scoring for annotations needs to be adjusted on a per project basis and based on the false discovery rates the end user is willing to accept. With the 70 projects analyzed there were no trends with respect to the instrument type observed, in agreement with our benchmark results (Fig. 2). However, notable trends were observed in relation to number of fragment ions used to match MS/MS spectra. As the number of fragment ions required to match the number of matches dropped to 3 and 2, the number of total matches decreased significantly. At these scores, there is not enough spectral information to differentiate a match to the library from the decoy library and therefore drives up the FDR more quickly. However, there is a clear optimum at 4, 5, 6 ions because as it requires 7 and 8 ions to match, a decrease in the number of annotations can be observed. This was explained by the fact that fewer spectra havinge a minimum of seven or eight fragment ions were able to match. We can further compare those results to the results from the default GNPS scoring value of cosine of >0.7 and a minimum of 6 fragment ions to match 9 (If multiple reference spectra exist that satisfy these criteria, only the best-scoring reference is used as a "hit"). The GNPS community assessed matches are the only direct comparisons that can be used to asses how the FDR estimation impacts results of large scale spectrum library matching. However, a key observation is that for each data set the cosine scoring needs to be adjusted when compared to the default GNPS parameters of cosine of 0.7 or greater and minimum 6 fragment ions. 
In other words, GNPS living data enabled through continuous annotation 9 that uses just one specific scoring value not only underestimates the annotations for most projects but perhaps more importantly, as this affects the interpretation of the results, living data also overestimates the number of annotations for some projects. The reason why most GNPS projects analyzed with living data parameters underestimate the number of matches is because there are many molecules that do not provide 6 ions when fragmented. These are currently missed by living data in GNPS. Thus, FDR calculations enable an informed decision in terms of the analysis parameters that a researcher can use in terms of deciding what the level of acceptable incorrect annotations that can be expected with such parameters. These results demonstrated why the introduction of significance estimation and FDR assessments are critical for the field of untargeted small molecule mass spectrometry and that significance estimations needs to become a routine part of this field. In summary, all tested methods but the naive allow for FDR estimation. The decoy-based methods do not rely on presupposed underlying probabilistic models, but require that target mass spectra are noise-filtered to reach accurate estimates. We use fragmentation trees 26,27,33 to separate signal fragment ions from noise fragment ions to "clean" target spectra for the generation of decoy libraries. We demonstrated that this approach can be used for providing confidence measures in large scale metabolomics project, where it is becoming more and more impossible to inspect each annotation by hand, which is the current norm in metabolomics. It revealed that the spectrum scoring parameters need to be adjusted on a per-project-basis, which requires a form of confidence measures associated with the results. As such evaluations have been critically important for advancing other fields such as proteomics, genomics and other fields, we anticipate that this will play a similarly critical role with mass spectrometric analysis of small molecules in the future. In that perspective, we integrated passatutto into GNPS web platform to ensure that the community can readily search spectral libraries in highthroughput manner while reporting a significance of the annotation. Our methods constitute the first step towards FDR estimation of annotations in untargeted metabolomics, but we anticipate that additional advances will be made in the years to come. We further envision that robust accuracy estimations, including FDR, will also enhance the analysis of spectral matches for in silico generated reference libraries or in silico annotations 4,[42][43][44][45][46][47] , that 48 Spectral libraries and processing. We use three reference libraries for evaluating our FDR estimations: Agilent, MassBank and GNPS. The requirements for a MS/ MS spectrum of a compound to be included in the analysis are that they had to (a) have a SMILES or InChI associated with it; (b) to remove low resolution reference data, the exact precursor mass must be within 10 p.p.m. of the observed mass; (c) the unfiltered MS/MS spectrum has to be available in the public domain. To ensure maximal homogeneity, we keep only (d) spectra in positive ion mode, (e) compounds below 1000 Da, and we discard (f) spectra with <5 peaks with relative intensity above 2%. Spectra recorded at different collision energies are merged. In total, MS/MS spectra of 6716 compounds (4138 GNPS, 2120 Agilent, 458 Mass-Bank) fit these criteria. 
Most GNPS and all Agilent spectra were measured on Q-TOF instruments, all MassBank spectra were recorded on Orbitrap instruments. Not all peaks/signals in an MS/MS spectrum can be explained as fragment ions 32 , but we will stick with the term "fragment ion" instead of "hypothetical fragment ion", "peak" or "signal" for the sake of readability. Similarly, we will speak of the "mass" of a fragment ion when we refer to the observed m/z value, and of its 'intensity' when we refer to the peak intensity. Noise filtering. For each target MS/MS spectrum, we calculate a fragmentation tree that annotates a subset of hypothetical fragment ions with molecular formulas 27,32,33 ; only annotated fragment ions are kept, using the original peak intensities. We set mass deviation parameters 10 p.p.m. (relative) and 2 mDa (absolute). This procedure is more sensitive than simply using a hard or soft intensity cutoff, as it ensures that fragment ions can be explained in principle by some sensible fragmentation cascade. After noise filtering, 104 spectra were empty or consisted only of the precursor ion peak, and were discarded. Software and creation of decoy databases. Passatutto has been implemented as a Java v1.6 program. It reads and writes spectra in MassBank file format and fragmentation trees in the SIRIUS DOT file format. Source code is available from https://github.com/kaibioinfo/passatuto, Java executables (JAR files) are available from https://bio.informatik.uni-jena.de/passatutto/. Passatutto contains modules for (a) generating a decoy database, (b) database searching in locally stored data sets, and (c) estimating q-values either by means of the target-decoy approach or by empirical Bayes estimation. For generating a decoy database using the fragmentation tree-based method, SIRIUS can be used for the computation of fragmentation trees, which is available from https://bio.informatik.uni-jena.de/sirius/. We ran the software on an Intel XEON 6 Core E5-2630 at 2.30 GHz with 4 GB memory. FDR estimation. Given a decoy database, FDRs and q-values can be estimated using target-decoy competition 25 , separated target-decoy search 23 , or the more sophisticated mix-max approach 52 . Here, we use a simple separated target-decoy search 23 where the proportion of incorrectly annotated spectra 31 is estimated from the empirical Bayes distribution. Only the best-scoring reference spectrum is referred to as the "hit" in the target database, as this represents the most likely interpretation of the query data. We merge lists of hits from target and decoy database, and sort by score. For any score threshold, we only report hits above this threshold in the target database, and estimate the number of false hits there using the number of hits in the decoy database above the score threshold. We estimate the FDR as percentage of incorrect targets (PIT) times the number of decoy hits above score threshold, divided by the number of target hits above score threshold 23 . The PIT is the percentage of hits in the target database that are incorrect, when we do not apply a score threshold but consider the complete batch of queries. We estimate PIT as the area under curve for false identifications, using the empirical Bayes approach described in Supplementary Information. The q-values of a hit is the minimal FDR at which this hit is present in the output list, varying over all possible score thresholds. 
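The separated target-decoy estimate described above, FDR = PIT times the number of decoy hits above the score threshold divided by the number of target hits above the threshold, with q-values obtained as the running minimum over thresholds, can be sketched as follows. The PIT value, the synthetic score distributions, and the final selection of a per-project threshold at 1% FDR are placeholders used only to illustrate the calculation; in the paper the PIT comes from the empirical Bayes fit.

```python
import numpy as np

def target_decoy_q_values(target_scores, decoy_scores, pit=1.0):
    """Estimate FDRs and q-values for target hits from a separated
    target-decoy search. `pit` is the estimated proportion of incorrect
    targets (placeholder here; estimated by empirical Bayes in the paper)."""
    target_scores = np.sort(np.asarray(target_scores, dtype=float))[::-1]
    decoy_scores = np.asarray(decoy_scores, dtype=float)
    fdr = np.empty_like(target_scores)
    for i, threshold in enumerate(target_scores):
        n_target = i + 1                          # targets with score >= threshold
        n_decoy = (decoy_scores >= threshold).sum()
        fdr[i] = pit * n_decoy / n_target
    q = np.minimum.accumulate(fdr[::-1])[::-1]    # q-value = minimal FDR attainable
    return target_scores, np.clip(q, 0.0, 1.0)

rng = np.random.default_rng(1)
targets = np.concatenate([rng.beta(8, 2, 300), rng.beta(2, 8, 200)])  # mixed true/false
decoys = rng.beta(2, 8, 500)
thresholds, q = target_decoy_q_values(targets, decoys, pit=0.4)

# Hypothetical per-project calibration: lowest score threshold reaching 1% FDR.
accepted = thresholds[q <= 0.01]
if accepted.size:
    print(f"score threshold for 1% FDR: {accepted.min():.3f}, "
          f"annotations kept: {accepted.size}")
```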
For the empirical Bayes approach 26, 52 , we model database search scores as a two-component mixture of distributions representing true hits and false hits (true positive and false positive identifications). Scores of true hits are modeled using a mirrored Gamma distribution, a mirrored Gumbel distribution, or a mirrored Weibull distribution, whereas scores of false hits are modeled using a Gamma distribution. Both the actual distribution for true hits and the distributions' parameters are chosen based on the observed data, where Expectation Maximization is used to simultaneously find the parameters of the mixture distribution. FDR and q-values are estimated using the average Posterior Error Probability of all hits with score above the threshold; the Posterior Error probability for a given score, in turn, is estimated as the proportion of incorrect hits among all hits with this score. See Supplementary Information. Quality assessment of FDR estimation. The two smaller data sets, MassBank and Agilent, are used as query spectra, whereas the larger GNPS dataset is searched in. The estimated p-value is the ratio of decoy hits with score above the threshold. As we know the true identity of all queries, we can calculate the true FDR (ratio of false hits among all hits) for any score threshold; by definition, the q-value of a hit is the smallest FDR for which it is reported. FDR based annotations for metabolomics. Passatutto produced decoy spectra for GNPS's spectral library search workflows. These workflows were altered to enable FDR estimation utilizing these decoys to estimate provide q-values for all identifications in a search. The number of identifications were reported at 1% and 5% FDR for each of the 70 data sets analyzed in GNPS and were compared against the default scoring thresholds recommended at GNPS (0.7 cosine, 6 minimum matched peaks). Impact of scoring parameters that achieve 1% FDR. To evaluate how scoring settings such as cosine score and number of minimum ions to match affected the number of annotations with an FDR of 1%, we ran passatutto on the same 70 public projects but varies the number of minimum matched ions from 2 to 8. We then reported the number of data sets that achieved 1% FDR at each cosine value and minimum number of ions that matched. Finally, we reported the number of spectra matched for all of the different projects. Empirical Bayes Approach. For details of the empirical Bayes approach, please see Supplementary Methods.
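A minimal sketch of the two-component mixture idea behind the empirical Bayes approach is given below: scores of false hits are modeled with one parametric component and scores of true hits with another, the mixture is fit by Expectation Maximization, and the Posterior Error Probability of a score is the fitted false-hit fraction at that score; the FDR above a threshold is then the average PEP of the accepted hits. For simplicity this sketch uses two Gaussian components and synthetic scores, rather than the Gamma and mirrored-Gamma/Gumbel/Weibull family described in the text.

```python
import numpy as np
from scipy import stats

def fit_two_component_em(scores, n_iter=200):
    """EM for a two-Gaussian mixture over match scores (illustrative stand-in
    for the parametric mixture used in the paper)."""
    scores = np.asarray(scores, dtype=float)
    mu = np.array([scores.min(), scores.max()])   # component 0 ~ false, 1 ~ true
    sigma = np.array([scores.std(), scores.std()]) + 1e-6
    weight = np.array([0.5, 0.5])
    for _ in range(n_iter):
        dens = np.vstack([w * stats.norm.pdf(scores, m, s)
                          for w, m, s in zip(weight, mu, sigma)])
        resp = dens / dens.sum(axis=0)             # E-step: responsibilities
        weight = resp.mean(axis=1)                 # M-step: weights, means, spreads
        mu = (resp * scores).sum(axis=1) / resp.sum(axis=1)
        sigma = np.sqrt((resp * (scores - mu[:, None]) ** 2).sum(axis=1)
                        / resp.sum(axis=1)) + 1e-6
    return weight, mu, sigma

def posterior_error_prob(score, weight, mu, sigma):
    """PEP: probability that a hit with this score is a false hit."""
    f0 = weight[0] * stats.norm.pdf(score, mu[0], sigma[0])
    f1 = weight[1] * stats.norm.pdf(score, mu[1], sigma[1])
    return f0 / (f0 + f1)

rng = np.random.default_rng(2)
scores = np.concatenate([rng.normal(0.45, 0.08, 400),   # synthetic false hits
                         rng.normal(0.80, 0.06, 250)])  # synthetic true hits
w, m, s = fit_two_component_em(scores)
threshold = 0.7
above = scores[scores >= threshold]
fdr = posterior_error_prob(above, w, m, s).mean()        # FDR = average PEP above cut
print(f"estimated FDR at score >= {threshold}: {fdr:.3f}")
```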
Kinase Suppressor of Ras 2 (KSR2) expression in the brain regulates energy balance and glucose homeostasis Objective Kinase Suppressor of Ras 2 (KSR2) is a molecular scaffold coordinating Raf/MEK/ERK signaling that is expressed at high levels in the brain. KSR2 disruption in humans and mice causes obesity and insulin resistance. Understanding the anatomical location and mechanism of KSR2 function should lead to a better understanding of physiological regulation over energy balance. Methods Mice bearing floxed alleles of KSR2 (KSR2fl/fl) were crossed with mice expressing the Cre recombinase expressed by the Nestin promoter (Nes-Cre) to produce Nes-CreKSR2fl/fl mice. Growth, body composition, food consumption, cold tolerance, insulin and free fatty acid levels, glucose, and AICAR tolerance were measured in gender and age matched KSR2−/− mice Results Nes-CreKSR2fl/fl mice lack detectable levels of KSR2 in the brain. The growth and onset of obesity of Nes-CreKSR2fl/fl mice parallel those observed in KSR2−/− mice. As in KSR2−/− mice, Nes-CreKSR2fl/fl are glucose intolerant with elevated fasting and cold intolerance. Male Nes-CreKSR2fl/fl mice are hyperphagic, but female Nes-CreKSR2fl/fl mice are not. Unlike KSR2−/− mice, Nes-CreKSR2fl/fl mice respond normally to leptin and AICAR, which may explain why the degree of obesity of adult Nes-CreKSR2fl/fl mice is not as severe as that observed in KSR2−/− animals. Conclusions These observations suggest that, in the brain, KSR2 regulates energy balance via control of feeding behavior and adaptive thermogenesis, while a second KSR2-dependent mechanism, functioning through one or more other tissues, modulates sensitivity to leptin and activators of the energy sensor AMPK. INTRODUCTION The brain plays a critical role in sensing energy demands and regulating fuel storage to maintain body weight within a tight range. Extensive analysis has identified key conserved genes and neural pathways critical in regulating energy balance [1,2]. At the core of this homeostatic pathway is the central melanocortin system, which is composed of the melanocortin 4 receptor (Mc4r), its agonist a-melanocyte-stimulating hormone (a-MSH), which is derived from cleavage of the precursor polypeptide proopiomelanocortin (POMC), and the Mc4r inverse agonist, Agouti gene-related peptide (AgRP). Orexigenic Neuropeptide Y (NPY) is co-expressed with AgRP. The anorexigenic peptide leptin feeds back on the melanocortin system, activating POMC neurons to stimulate the generation and release of á-MSH. Coincident with this stimulatory action, leptin also limits the inhibitory signals in this pathway by inhibiting NPY/AgRP neurons and suppressing the production and release of NPY and AgRP. Genetic manipulation of these pathways in preclinical models and the identification of melanocortin pathway mutations in humans has led to strategies for therapeutic intervention that may modulate energy balance in humans to ameliorate obesity and its associated co-morbidities [3,4]. However, additional targets that play limited and narrowly defined roles in affecting energy balance may provide therapeutically tractable targets with reduced off-target effects. Kinase Suppressor of Ras 2 (KSR2) is a molecular scaffold coordinating Raf/MEK/ERK signaling that potently regulates energy consumption and expenditure [5e7]. 
Like its paralog Kinase Suppressor of Ras 1 (KSR1) [8e10], KSR2 coordinates the interaction of Raf/MEK/ERK signaling to facilitate signal transduction and regulate the intensity and duration of ERK signaling [6]. KSR2 also promotes activation of the primary regulator of cellular energy homeostasis, 5 0 -adenosine monophosphate-activated protein kinase (AMPK) [5,7]. KSR2 was found to interact directly with AMPK [5], and ectopic expression of KSR2 enhanced AMPK activation and signaling in a cell autonomous manner [7]. However, defective AMPK activation was also observed in the adipose tissue of mice even though KSR2 mRNA is not significantly expressed there. These observations suggest that KSR2 may have cell autonomous and cell non-autonomous effects on this key energy sensor. KSR2 À/À mice develop normally but grow slowly immediately after birth. Increased adiposity is evident after weaning at 8e9 weeks of age [5]. In the DBA1/LacJ mouse strain, KSR2 disruption causes hyperactivity without hyperphagia, revealing that increased adiposity results from a defect in energy expenditure [5]. Disruption of KSR2 in C57BL/6 mice caused obesity and hyperphagia, which led some to conclude that KSR2-dependent regulation of food consumption was the sole cause of obesity in KSR2 À/À mice [11]. Doubt was cast on this contention by pair-feeding experiments showing that hyperphagia exacerbates, but does not cause, obesity in C57BL/6 mice KSR2 À/À mice. KSR2 À/À mice are also profoundly insulinresistant in liver and adipose tissue [12]. Insulin resistance appears to be secondary to the obesity, as dietary restriction after weaning prevents obesity and glucose intolerance in KSR2 À/À mice. Insulin resistance returns when KSR2 À/À mice are fed ad libitum [13]. In KSR2 À/À mice, decreased AMPK activation may impair the oxidation of fatty acids and increase their storage as triglycerides, contributing to obesity and insulin resistance [5]. Some KSR2 mutations in individuals with early-onset obesity disrupt ERK activation or impair interaction of the scaffold with AMPK [12]. These data identify KSR2 as a key effector in whole-body energy regulation in mice and humans. The recent identification of KSR2 mutations in humans, in combination with the observation that humans bearing these mutations have phenotypic characteristics found in KSR2 À/À mice [5,11,12] suggests that KSR2 and KSR2-regulated pathways may be potential targets for therapeutic intervention for type 2 diabetes and obesity. KSR2 is expressed abundantly in many areas of the brain, but in relatively low levels in muscle, liver, and adipose tissue [5,14]. Within the brain, KSR2 expression is highest in the cortex and cerebellum and somewhat lower in the hippocampus, hypothalamus, amygdala, substantia nigra, and various areas of basal ganglia [1A]. We recently reported that growth hormone (GH) signaling is altered in the liver of KSR2 À/À mice and that some of the phenotypic changes observed in these mice, especially decreased body length, can be rescued by administration of IGF-1 during the neonatal period [15]. Nevertheless, hepatocytes isolated from KSR2 À/À mice exhibited normal GHinduced signaling under in vitro culture conditions. These findings suggested that the systemic effects of KSR2 knockout might be mediated in part by a cell non-autonomous mechanism. We hypothesized that the source of this regulation is the brain. 
Here we show that brain-specific disruption of KSR2 is sufficient to reduce body temperature, promote cold intolerance, cause obesity, and impair glucose homeostasis, while elevating fasting insulin and free fatty acid levels in blood. Disruption of KSR2 selectively in the brain causes hyperphagia in male, but not female, mice. Though still obese, adiposity in female mice lacking KSR2 in the brain is correspondingly reduced. These data demonstrate that KSR2 functions centrally to regulate energy balance through effects on feeding behavior and adaptive thermogenesis. MATERIALS AND METHODS 2.1. Animals KSR2 À/À mice were generated as previously described [5,13]. Nes-CreKSR2 fl/fl mice were generated by crossing B6.Cg-Tg(Nes-cre)1Kln/ J, (Jackson Labs; hereafter referred to as 'Nes-Cre') to mice in which LoxP sites had been inserted flanking exon 3 of the KSR2 locus (KSR2 fl/ fl , inGenius Targeting Laboratory, Ronkonkoma, NY, Figure 1). The Institutional Animal Care and Use Committee (University of Nebraska Medical Center, Omaha, NE) approved all studies. Animals were maintained on a 12-h light/dark schedule and had free access to laboratory chow (Harlan Teklad LM 485) and water, except as described below. 2.2. Dual energy X-ray absorptiometry (DEXA) Mice were weighed weekly on a digital scale. Lean mass and fat mass were quantified every two weeks by dual-energy X-ray absorptiometry (DEXA) with a Lunar PIXImusÔ densitometer (GE Medical-Lunar, Madison, WI). Mice were anesthetized using a mixture of inhaled isoflurane and oxygen (anesthetization using 3% isoflurane and 1 L/ min oxygen; maintenance using 1e2% isoflurane and 1 L/min oxygen) and placed prone on the imaging positioning tray. Adipose mass was determined by excising and weighing each fat depot after euthanasia. Glucose tolerance test (GTT) and 5-aminoimidazole-4carboxamide-1-b-D-ribofuranoside (AICAR) tolerance test (ATT) To determine the role KSR2 plays in glucose homeostasis, mice were assessed by GTT and ATT. Each GTT was performed after a 10-h overnight fast; each ATT was performed following a 4-h morning fast. Mice were injected intraperitoneally (IP) with D-glucose (20% solution, 2 g/kg of body weight) for GTT, or with AICAR (0.25 g/kg of body weight) for ATT. Glucose levels were determined in blood collected from the tail vein at the indicated times following injection. Metabolite assays Blood was collected by tail bleeds of live animals or via cardiac puncture of euthanized animals. Animals were fasted overnight for 10e12 h prior to collection for blood glucose and serum insulin measurements. Blood glucose was measured with a Bayer Contour Glucometer. For serum analysis, blood was allowed to clot at 4 C for 8e24 h, and the serum was separated by centrifugation for 10 min at 10,000 rpm. Serum was transferred to a new tube and stored at À80 C until assayed. Serum insulin was measured with the Ultra Sensitive Mouse Insulin ELISA Kit (Chrystal Chem, Downers Grove, IL) using mouse standards. Serum leptin was measured with a Mouse Leptin ELISA kit (Millipore, Billerica, MA). Serum free fatty acid (FFA) levels were quantified using a Free Fatty Acids Half Micro Test (11383175001, Roche, Indianapolis, IN). Measurement of food intake Food intake was calculated by single-housing mice for 4e5 days, and taking a daily average of grams of food consumed during that time period. The effect of chronic leptin treatment on food intake was measured in mice that were allowed to acclimate 2 days prior to starting the experiment. 
On subsequent days, mice were given phosphate-buffered saline (PBS) or leptin (2.5 mg/kg in PBS) intraperitoneally, 2 h prior to the onset of the dark cycle, and food intake was measured over a 24-hour period (dose 1). The above experiment was repeated twice more (dose 2 and dose 3), allowing food intake to normalize between doses. All mice served as their own controls, receiving PBS first, followed by leptin.

Histology
Lipid accumulation in mice was visualized by hematoxylin and eosin staining of sections of white adipose tissue (WAT) and brown adipose tissue (BAT). The tissues were fixed overnight in 4% paraformaldehyde and transferred to 70% ethanol until paraffin embedding. Sections were 4–6 μm.

Rectal temperature and cold tolerance study
Five-month-old mice were individually housed and rectal temperatures were taken using a MicroTherma 2T handheld thermometer (ThermoWorks, Lindon, UT) during resting (2 pm) and active (9 pm) time periods. For the cold tolerance study, six-month-old male mice were fasted overnight for 10 h. Mice were then housed individually and placed for an additional 2.5 h in micro-isolator cages that had been acclimated to 4 °C. Rectal temperatures were monitored at the times indicated.

2.9. Quantitative PCR
Tissues were removed from mice after sacrifice, immediately frozen on dry ice or liquid nitrogen, and stored at −80 °C. RNA was extracted using TRI Reagent (Molecular Research Center) and an RNeasy kit (Qiagen). After treatment with DNase I (Ambion, AM1906), cDNA was synthesized using iScript RT Supermix (BioRad) according to the manufacturer's instructions. Quantitative real-time PCR was performed in a 20-μl reaction volume using SsoAdvanced SYBR Green Supermix (BioRad). All reactions were performed in duplicate on a Stratagene MxPro3000p detection system, and relative RNA levels were calculated using 18S rRNA as the internal control. The data were processed using an R script based on the qBase relative quantification framework [16] with the following modifications: the square root of (h−1) replaces (h−1) in Formula 4, and the term −1/slope replaces 1/slope as the exponential component.

To determine the extent to which disruption of KSR2 affected the indicated response variable at a single time point, a Student's t-test was used, applying a Bonferroni adjustment when multiple pairwise comparisons were made [17]. To determine the effect of KSR2 disruption on the indicated response variable over time or under various treatments, a two-way analysis of variance (ANOVA) was applied with genotype and time/age as factors [17]. When multiple data points were drawn sequentially from the same animal, pseudoreplication was avoided by performing a repeated measures two-way ANOVA. When results from the two-way ANOVA indicated that genotype had a significant effect, we performed, as appropriate, Dunnett's post-test or an additional series of Bonferroni-adjusted Student's t-tests to identify the time points at which, or treatments under which, the effect of KSR2 disruption became significant. All data are shown as the mean ± standard error of measurement (SEM). Significance was accepted at p < 0.05. Unless indicated otherwise for clarity, significant comparisons are represented as follows: *p < 0.05, **p < 0.01, ***p < 0.001, and ****p < 0.0001.
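As a concrete illustration of the single-time-point comparisons described above, the short Python sketch below runs Student's t-tests against a reference genotype and applies a Bonferroni adjustment for the number of pairwise comparisons. It is not the authors' analysis script; the group names and body-mass values are hypothetical.

```python
# Illustrative sketch (not the authors' script): Bonferroni-adjusted Student's
# t-tests comparing genotypes against a reference group at one time point.
import numpy as np
from scipy import stats

def bonferroni_t_tests(groups: dict, reference: str, alpha: float = 0.05):
    """Compare each group against `reference` with Student's t-tests and
    Bonferroni-adjust the p-values for the number of pairwise comparisons."""
    others = [g for g in groups if g != reference]
    n_tests = len(others)
    results = {}
    for g in others:
        t, p = stats.ttest_ind(groups[g], groups[reference])
        p_adj = min(p * n_tests, 1.0)           # Bonferroni adjustment
        results[g] = {"t": float(t), "p_adj": float(p_adj),
                      "significant": p_adj < alpha}
    return results

# Hypothetical body-mass values (grams) at one time point
example = {
    "KSR2_fl_fl":        np.array([28.1, 27.5, 29.0, 28.4, 27.9]),
    "Nes-Cre":           np.array([27.8, 28.3, 28.9, 27.2, 28.6]),
    "Nes-CreKSR2_fl_fl": np.array([33.5, 34.1, 32.8, 35.0, 33.9]),
}
print(bonferroni_t_tests(example, reference="KSR2_fl_fl"))
```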
Selective disruption of KSR2 in the brain causes obesity
Nes-Cre mice were crossed to KSR2fl/fl mice to generate Nes-CreKSR2fl/fl mice deleted in exon 3 of the KSR2 gene (Figure 1; schematic of strategy in panel A and PCR analysis indicating deletion of exon 3 in panel B). Western blotting revealed strong expression of KSR2 in brains from WT, Nes-Cre, and KSR2fl/fl mice but no detectable KSR2 in brains from Nes-CreKSR2fl/fl mice (Figure 1C). qPCR from selected tissues showed that KSR2 mRNA was undetectable in brains from Nes-CreKSR2fl/fl mice; however, KSR2 mRNA was detectable and not significantly changed, relative to control mice, in the pituitary, liver, white adipose tissue (WAT), quadriceps muscle (Quad), and kidney of Nes-CreKSR2fl/fl mice (Figure 1D). As we observed previously [5], KSR2 was undetectable by qPCR in brown adipose tissue from mice of any genotype (not shown). The GTEx portal indicates that KSR2 expression is high in the human pituitary gland [1A]. Of importance, our data confirm similar high-level expression of KSR2 mRNA in the pituitary of C57BL/6 mice (Figure 1D), and the nestin promoter is clearly not active in the gland, as KSR2 expression was not different between Nes-CreKSR2fl/fl mice and WT controls. Male and female Nes-CreKSR2fl/fl mice showed a significant increase in body mass relative to control mice beginning at 12 and 16 weeks of age, respectively (Figure 2A). Despite the absence of detectable KSR2 in brain, the rate and degree of growth in Nes-CreKSR2fl/fl mice were notably less than those observed in KSR2−/− mice (Figure 2A,B; compare right-hand panels to left). To determine the extent to which KSR2 expression in brain contributes to adiposity, lean mass and fat mass were measured in Nes-CreKSR2fl/fl mice from 5 to 20 weeks of age (Figure 3). Strikingly, lean mass of KSR2−/− mice was elevated relative to WT mice, while Nes-CreKSR2fl/fl mice exhibited no significant difference in lean mass in comparison to controls (Figure 3B). Relative to Nes-Cre and KSR2fl/fl mice, male Nes-CreKSR2fl/fl mice showed a significant increase in total fat mass by 8 weeks of age. In female Nes-CreKSR2fl/fl mice, total adipose mass was not significantly different from controls until 16 weeks of age. Fat mass accumulated in female Nes-CreKSR2fl/fl mice at about half the rate of male Nes-CreKSR2fl/fl mice (Figure 3A), which likely explains why total body mass in female Nes-CreKSR2fl/fl mice was not significantly different from control mice until that time (Figure 2A). Female KSR2−/− mice were markedly more obese than female Nes-CreKSR2fl/fl mice, which still had twice the body fat of control mice at 20 weeks of age. We recently observed that whole-body disruption of KSR2 results in a significant decrease in nose-to-anus length, accompanied by a decrease in bone mineral content and density. The skeletal deficit appears to result from a cell non-autonomous decline of hepatic IGF-1 expression and can be rescued by infection of KSR2−/− neonates with an adenovirus encoding an IGF-1 transgene [15]. This decrease in body length is recapitulated in the Nes-CreKSR2fl/fl mice. At five months of age, Nes-CreKSR2fl/fl males and females are significantly shorter than control KSR2fl/fl mice [for males: 9.1 ± 0.05 cm for Nes-CreKSR2fl/fl (n = 10) vs 9.6 ± 0.07 cm for KSR2fl/fl (n = 11), p < 0.0001; for females: 8.8 ± 0.07 cm for Nes-CreKSR2fl/fl (n = 9) vs 9.2 ± 0.08 cm for KSR2fl/fl (n = 11), p < 0.001].
As with KSR2−/− mice [5], the adiposity of Nes-CreKSR2fl/fl mice results from a general increase in the mass of all adipose depots, including brown adipose tissue (Figure 4A). Similar to the enlarged adipocytes observed in KSR2−/− mice [5], disruption of KSR2 selectively in the brain also increases adipocyte size (Figure 4B). In contrast, KSR2−/− mice show a greater degree of hyperphagia than Nes-CreKSR2fl/fl mice, which likely contributes to their greater adiposity. Indeed, female Nes-CreKSR2fl/fl mice do not eat significantly more than control Nes-Cre or KSR2fl/fl mice (Figure 4C). KSR2−/− mice are reported to be resistant to leptin after being pair-fed for 14 days [11]. However, in KSR2−/− mice fed ad libitum, leptin inhibits overnight food consumption, though not to the degree observed in WT mice [5]. We examined the effect of leptin on Nes-CreKSR2fl/fl mice and KSR2fl/fl controls. In contrast to the global disruption of KSR2 (Figure 4D, lower panel), which diminished the ability of leptin to inhibit food consumption in mice fed ad libitum, leptin inhibited food intake to the same degree in Nes-CreKSR2fl/fl mice as it did in control animals. This was true even when mice were injected successively with multiple leptin doses (Figure 4D, upper panel). Thus, the selective disruption of KSR2 in the brain has no effect on leptin responsiveness, which likely contributes to the more modest obesity observed with disruption of KSR2 in the brain alone.

3.2. Brain KSR2 is required for normal metabolism of glucose and lipids
KSR2−/− mice show marked defects in glucose metabolism [5,13]. Glucose tolerance tests revealed glucose intolerance in Nes-CreKSR2fl/fl mice at five months of age (Figure 5A). Fasting insulin was elevated in male and, to a lesser degree, female Nes-CreKSR2fl/fl mice (Figure 5B). Insulin levels in male Nes-CreKSR2fl/fl mice were increased relative to their controls, which is comparable to the effect seen in KSR2−/− mice vs. the WT control. Serum free fatty acids (FFA) were similarly increased in Nes-CreKSR2fl/fl mice relative to control animals (Figure 5C). FFA levels were comparable to those observed in KSR2−/− mice, despite a relatively lower level of adiposity. Defects in glucose tolerance, insulin, and FFA were not evident until after the onset of obesity. At 5–7 weeks of age, lean KSR2−/− and Nes-CreKSR2fl/fl mice showed no difference in their ability to handle a glucose load and no significant elevation in fasting insulin or FFA (Figure S1). These data suggest that defects in glucose homeostasis may be secondary to the obesity caused by disruption of KSR2 in the brain. KSR2−/− mice show a marked decrease in response to AICAR treatment, a measure of whole-body response to the activation of AMPK after injected AICAR is metabolized into the allosteric activator, ZMP [5,13]. Selective disruption of KSR2 in the brains of Nes-CreKSR2fl/fl mice did not affect AICAR tolerance before or after the onset of obesity (Figure S2A, Figure 6), suggesting that KSR2 functions in a tissue other than the brain to alter AMPK effects on whole-body glucose metabolism.

KSR2 in the brain affects thermogenesis
In the DBA/1LacJ background, global disruption of KSR2 reduces rectal and core body temperatures during both quiescent (day) and active (night) periods relative to WT mice. This difference is evident even at thermoneutrality [5].
In contrast, in the C57BL/6J background, no difference in body temperature is evident during the active period in KSR2−/− mice compared to WT mice or between mice with brain-specific disruption of KSR2 and their controls (Figure 7A). However, the typical drop in temperature that occurs during the quiescent period (2 pm) is exacerbated by the disruption of KSR2 throughout the body, or selectively in the brain (Figure 7A). This reduced thermogenic capacity is apparent at 5–7 weeks of age and may be a key contributor to the obesity and insulin-resistant phenotype seen in mature animals. While UCP1 mRNA is significantly decreased in brown adipose tissue (BAT) of KSR2−/− mice, it only trends lower, but not significantly so, in Nes-CreKSR2fl/fl mice (Figure 7B). However, the marked intolerance of the mice to acute cold exposure at 4 °C when fasted (Figure 7C) and the overt accumulation of lipid in the BAT of Nes-CreKSR2fl/fl and KSR2−/− mice (Figure 7D) indicate a distinct defect in energy metabolism resulting from the loss of KSR2 in brain that compromises thermogenesis.

DISCUSSION
Here we show that brain-specific disruption of the molecular scaffold KSR2 phenocopies the obesity and glucose intolerance of whole-body KSR2 knockout, although the degree of adiposity and glucose intolerance and the increased circulating insulin in fasting Nes-CreKSR2fl/fl mice is less than that observed with whole-body knockout of KSR2. KSR2−/− and Nes-CreKSR2fl/fl brain-specific knockout mice show comparable defects in body temperature maintenance and cold tolerance. However, there are also deficits in the KSR2−/− mice that are not present when KSR2 is selectively disrupted in the brain. Though KSR2−/− mice are insensitive to AICAR treatment, Nes-CreKSR2fl/fl mice do not differ from their controls in sensitivity to AICAR (Figure 6). Further, leptin demonstrates its full acute anorexigenic effect in Nes-CreKSR2fl/fl mice, while its actions are blunted in KSR2−/− mice (Figure 4D). Thus, although a large component of the effects of KSR2 to regulate energy intake and expenditure leading to obesity operates through KSR2-modulated signals from the brain, at least one other cell non-autonomous mechanism must contribute significantly to KSR2-dependent energy balance. The precise brain region(s) in which KSR2 functions have yet to be identified, but circumstantial evidence supports the notion that KSR2 expression within discrete nuclei of the hypothalamus is critical for normal energy balance. Mice lacking KSR2 in the brain and whole body share notable similarities to, but also striking differences from, mice lacking the Mc4r receptor. Similar to the Nes-CreKSR2fl/fl and KSR2−/− mice, mice mutant or nullizygous for expression of Mc4r become obese around 5–7 weeks of age and exhibit hyperphagia [18,19]. Like KSR2 [5,12], Mc4r also regulates energy balance, at least in part by promoting energy expenditure [18]. However, the hyperphagic, obese and glucose-intolerant phenotype of KSR2−/− mice on the C57BL/6 background is markedly more severe than that of the Mc4r−/− mice [11]. Treatment with the Mc4r agonist MTII attenuated the hyperphagia of KSR2−/− mice [11], suggesting that KSR2 functions upstream of Mc4r.
(Figure 6 legend: AICAR tolerance is impaired in KSR2−/− mice but not in Nes-CreKSR2fl/fl mice. AICAR tolerance was tested in five-month-old male (n = 7–14) and female (n = 9–14) Nes-CreKSR2fl/fl and KSR2−/− mice and analyzed relative to controls.)
Opposing effects on body length also distinguish the physiology of KSR2 from Mc4r. Mc4r disruption causes an increase in nose-to-anus length [18,19]. In contrast, mice lacking KSR2 have a significantly decreased nose-to-anus length that can be rescued by correcting the neonatal defect in hepatic IGF-1 expression with adenovirus-mediated introduction of an IGF-1 transgene [15]. These data indicate that KSR2 does not mediate the actions of Mc4r on energy balance. α-MSH stimulation of Mc4r within the paraventricular nucleus (PVN) of the hypothalamus suppresses food intake and increases energy expenditure [20]. In contrast, the Mc4r inverse agonist, Agouti gene-related peptide (AgRP), is produced by an opposing set of neurons in the arcuate, which are activated during caloric insufficiency. The orexigenic neuropeptide Y (NPY) is co-expressed with AgRP, and, when activated, these anabolic neurons promote feeding behavior, energy storage, and weight gain. Leptin impinges upon the melanocortin system, activating POMC neurons and inhibiting NPY/AgRP neurons [3]. The possibility that KSR2 is involved in Mc4r signaling is supported by reports that AgRP, NPY and POMC are significantly reduced in 6-week-old KSR2−/− mice [11]. However, by 8–9 months of age, no differences in these neuropeptides were observed between WT and KSR2−/− mice on the DBA/1LacJ background [5]. These seemingly conflicting data may reveal a role for KSR2 in regulating the rate at which neuronal connections are formed in the brain, which can have a potent impact on the development of neural control over energy balance [21]. At least some connections between hypothalamic nuclei controlling energy balance appear to form developmentally in response to a transient, postnatal surge in leptin [22]. KSR2 may play a critical role in their formation, affecting the ability of these nuclei to sense and respond to nutrient status. KSR2−/− mice on the C57BL/6 background [11–13], but not the DBA/1LacJ background [5], develop hyperphagic behavior leading to obesity. These data reveal the presence of strain-specific genetic modifiers that affect the expression of the KSR2-dependent phenotype. Food restriction of C57BL/6 KSR2−/− mice after weaning prevents them from becoming obese and alleviates defects in lipid and glucose metabolism [13]. However, this effect is temporary; once KSR2−/− mice resume ad libitum feeding, hyperphagia and obesity return. Insensitivity to the anorexigenic adipokine leptin may account for the hyperphagia. Since Nes-CreKSR2fl/fl mice do not display the insensitivity to leptin shown by KSR2−/− mice, what is the molecular basis for this insensitivity? One possibility is that the poor response to leptin observed in KSR2−/− mice reflects hypothalamic inflammation. Hypothalamic inflammation can occur in response to a high-fat diet (HFD)-induced downregulation of PGC1α-mediated expression of ERRα [23]. Leptin insensitivity is evident in both male and female KSR2−/− mice (Figure 4D and unpublished data). However, HFD-induced effects are evident only in male C57BL/6 mice [23]. Thus, if KSR2 expression regulates hypothalamic inflammation, it likely does so via a mechanism that is not gender-specific.
In our initial report of an effect of KSR2 on energy balance [5], we proposed that the reduced responsiveness to AICAR observed in KSR2−/− mice reflected an increase in triglyceride storage and a deficit in mitochondrial import and oxidation of fatty acids resulting from the diminished ability of AMPK to inhibit acetyl CoA carboxylase and impede malonyl CoA synthesis [5]. In vitro studies with KSR2 and the closely related protein KSR1 show that these molecular scaffolds can interact directly with AMPK and promote phosphorylation of AMPK substrates [5,7,24]. Furthermore, while global KSR2 disruption markedly reduces AMPK activation in white adipose tissue, little, if any, KSR2 is expressed in fat (Figure 1D), and no KSR2 is detected in BAT [5]. These data strongly suggest that the KSR2-dependent effects on AMPK-regulated fatty acid metabolism can be both cell autonomous and cell non-autonomous. These observations, in combination with the effect of KSR2 on body temperature, suggest that BAT may be a key target of KSR2 action. Cell non-autonomous regulation of AMPK and its substrate acetyl CoA carboxylase (ACC) in BAT by KSR2 may be essential for normal fatty acid metabolism in BAT. KSR2 disruption may affect body weight by interrupting AMPK-mediated inhibition of ACC activity in BAT, impairing fat metabolism and heat generation while promoting fatty acid storage as triglycerides. Cell autonomous effects of KSR2 on AMPK function may occur in the brain and contribute to altered energy balance. Disruption of KSR2 enhances the phosphorylation and activation of mTOR [11], and sustained mTOR activation in the hypothalamus promotes obesity [25]. Phosphorylation of TSC2 and Raptor by AMPK inhibits mTOR anabolic activity [26–30]. Thus, it is possible that KSR2 disruption promotes mTOR-mediated effects on energy balance via loss of AMPK-mediated phosphorylation of key mTOR regulators. However, it should be noted that hypothalamic mTOR-mediated obesity was obtained by disruption of TSC1 and was accompanied by hyperphagia [25], which is only evident in male Nes-CreKSR2fl/fl mice (Figure 4C). Thus, extensive analysis is still required to understand the role(s) of these effectors in the action of brain KSR2. The normal response to AICAR observed in Nes-CreKSR2fl/fl mice indicates that this cell non-autonomous effect of KSR2 on AMPK is not mediated by the brain. Based on tissue distribution (Figure 1D and ref. 1A), the pituitary is a strong candidate for mediating KSR2-dependent control of AMPK. The maintenance of robust KSR2 expression in the pituitary of Nes-CreKSR2fl/fl mice may explain why their obesity is not as severe as that observed in KSR2−/− mice. If correct, this may indicate that KSR2 regulates the timing or extent of secretion of specific hormones from the anterior pituitary that affect the daily control of tissue responsiveness to AMPK-dependent energy sensing or influence the acquisition, during development, of normal AMPK responsiveness to energy stress. In this regard, thyroid hormones are well-established regulators of energy expenditure and thermogenesis [31], and a reduction in TSH secretion might account for some of the effects of KSR2 knockout. However, thyroxine levels in KSR2−/− mice are not significantly different from controls [5]. KSR2−/− mice have reduced fertility (unpublished data).
Initial analysis of reproductive capability indicates that male and female Nes-CreKSR2fl/fl mice have reduced fertility relative to controls, suggesting the possibility that gonadotropin secretion or responsiveness may require KSR2. Given the fact that obese humans with KSR2 mutations phenocopy the disruption of KSR2 in mice [12], future work detailing the mechanism of KSR2 function in the brain and identifying other KSR2-expressing tissues that control energy balance is likely to reveal novel mechanisms controlling peripheral metabolism. R.E.L. is the guarantor of the work, had access to all the data, and takes responsibility for the integrity of the data and the accuracy of its analysis.
2017-04-27T08:35:35.871Z
2016-12-18T00:00:00.000
{ "year": 2016, "sha1": "4d9b1032691e0fa90aca17eaf9d8bceb0158de32", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.molmet.2016.12.004", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3ee71df2bdde5c76f5ed03aa13eb2509de0c642a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
256298950
pes2o/s2orc
v3-fos-license
Contemporary progress in the green synthesis of spiro-thiazolidines and their medicinal significance: a review

The development of new strategies for the production of nitrogen- and sulfur-containing heterocycles remains an extremely alluring but challenging proposition. Among these heterocyclic compounds, spiro-thiazolidines are a distinct class of heterocyclic motifs with an all-encompassing range of pharmaceutical activities, such as anti-histaminic, anti-proliferative, anesthetic, hypnotic, anti-fungal, anti-inflammatory, anti-HIV, anthelmintic, CNS stimulant, and anti-viral potentials. Consequently, investigators have produced these heterocycles through diversified intricate pathways as target structures for medicinal studies. Notwithstanding their innumerable synthetic applications, there is as yet no dedicated review on MCRs concerning spiro-thiazolidines via green synthesis. Thus, this in-depth review encompasses the progress of MCRs towards spiro-thiazolidines, including the environment-friendly synthetic approaches, reaction conditions, rationale behind the optimal selection of catalyst, scope, anticipated mechanisms, and biological activities. In this review, we have focused on the most current developments in spiro-thiazolidine synthesis under different conditions, such as ionic liquid-assisted, microwave-assisted, on-water, solid-supported acid-catalyzed, asymmetric, and nanocatalyst-assisted syntheses, developed over the last 8 years. This study details work regarding the total synthesis of spiro-thiazolidines within the class of N- and S-containing heterocycles. Furthermore, this article summarizes the development of synthetically and pharmaceutically important spiro-thiazolidine candidates.

Introduction
N- and S-containing heterocyclic motifs are universal structural units that extensively exist in naturally occurring molecules and medicinal agents. 1,2 Thiazolidines occupy an advantageous position in both organic synthesis and medicinal chemistry. Thiazolidine derivatives, which are found as fundamental structural components in a variety of medically active compounds, represent a major subdivision of heterocycles. Thiazolidinones are thiazolidine derivatives that include carbonyl groups at positions 2, 4, or 5. With a carbonyl group on the fourth carbon, thiazolidinone is a saturated version of thiazole and is known as the "mystic moiety" since it exhibits practically all known types of biological activity. A cyclic structure was reported based on the hydrolysis of 3-phenyl-2-phenylamino-4-thiazolidinone, with mercaptoacetic acid being the main byproduct. The structures of thiazolidinones are given in Fig. 1. Although substitutions can occur at the 2, 3, and 5 positions of the thiazolidinone ring, the carbon atom at the second position of the ring has the maximum impact on the structure and characteristics of thiazolidinones. The structure-activity relationship of thiazolidinone, as an important family of N- and S-containing heterocycles, makes it a particularly alluring target for combinatorial library synthesis. Thiazolidinone is widely employed as a main building block in the areas of pharmaceuticals and pharmacological agents. 3 The thiazolidinone scaffold has been linked to a variety of biological activities, including anti-bacterial, anti-fungal, anti-tubercular, anti-convulsant, anti-thyroid, and anti-histaminic effects. 4,5 For more than 40 years, the synthesis of thiazolidines has been made possible by the reaction of 3-mercaptoalkylamines with aldehydes and ketones. 6
Numerous investigations have been published to create innovative lead or hit compounds that include thiazolidines. 7-13 The thiazolidine-2,4-dione heterocyclic ring system has several uses; for example, it prevents mild steel from corroding in acidic solution. These compounds can also be utilized as very sensitive reagents for heavy metals in analytical chemistry 14 and as brighteners in the electroplating industry. 15 Spiro compounds generally have cyclic structures bearing one shared sp3-hybridized carbon atom common to two rings, and they present fascinating synthetic challenges because of their significant structural inflexibility and complexity. 16,17 Spiro heterocyclic compounds encompass nitrogen, sulfur, and oxygen atoms, possess the unique features of several natural and artificial products with notable roles in biological processes, and have exhibited significant pharmacological activities. 18 Additionally, they have various uses as photochromic materials 19 and asymmetric catalysts. 20 Diverse synthetic methodologies concerning spiro compounds have been offered, in which the quaternary core was constructed by rearrangements, ring expansions/contractions, alkylations, transition metal-promoted reactions, cycloadditions, etc. 21-23 Spiro-thiazolidines are versatile frameworks and a mesmerizing class of heterocyclic compounds that have advanced significantly in recent years owing to their influential role in the fields of materials science, catalysis, drug discovery, pharmacology, and combinatorial chemistry. Because of their extensive significance, spirocyclic derivatives have been synthesized by both chemists and biologists in recent decades. Simultaneously, many heterocycles that comprise a thiazolidine and its analogues are also connected with various pharmaceutical effects, like anti-HIV, antiviral, antifungal, antibacterial, diuretic, antihistaminic, anticonvulsant, anti-inflammatory, tuberculostatic, anticancer, and analgesic actions. 24-27 Here, some bioactive spiro-thiazolidine scaffolds are shown in Fig. 2. 28,29 Regardless of their innumerable synthetic applications, there is as yet no dedicated review on the MCRs regarding the green synthesis of spiro-thiazolidine-based derivatives. This critical review presents the most current developments in spiro-thiazolidine production under various conditions, like ionic liquid-assisted synthesis, microwave-assisted synthesis, solid-supported acid-catalyzed synthesis, on-water synthesis, nanocatalyst-assisted synthesis, asymmetric synthesis, miscellaneous approaches, and so forth, developed over the last 8 years. Hence, this review covers recent advances in the passage of MCRs to spiro-thiazolidines, including the synthetic tactics, possibilities, reaction conditions, rationale behind the catalyst selection, and the anticipated mechanisms. Moreover, this article will undeniably contribute to the scientific domain for developing synthetically and medicinally imperative spiro-thiazolidine analogs. We have also presented a reasoned comparison of numerous synthetic strategies for spiro-thiazolidines with gains (+) and drawbacks (−) (Fig. 3); systematic insights are enclosed in the subsequent sections. A comparative study of various techniques of spiro-thiazolidine synthesis is given in Table 1.
In this instance, the necessary diyne precursor of thiazolidinedione was synthesized from N-methylthiazolidine-2,4-dione (13) and propargyl bromide (14) in DMF in the presence of K2CO3 to obtain the dipropargylated intermediate thiazolidinedione (15) in 85% yield. The co-trimerized spiro derivative (16) was obtained by reacting diyne (15) with propargyl bromide (14) in acetonitrile in the presence of Mo(CO)6 at 90 °C under microwave irradiation (MWI).

Ionic liquid-assisted synthesis
Ionic liquids (ILs) have attracted widespread research interest as environment-friendly solvents owing to their favorable characteristics, for instance negligible vapor pressure, non-flammability, high thermal stability, and reusability. 34-37 They have also been called "designer solvents" because their physical and chemical characteristics may be tweaked by carefully selecting the right cation and anion. Combining these unique features, ionic liquids are developing as a 'green reaction medium' (catalyst as well as solvent). Ionic liquids as a reaction medium offer a practical solution to both catalyst recycling issues and solvent emission. 38-40 K. Arya and researchers 41 reported Brønsted acidic ILs, including the N-based organic cations 1-butyl-3-methylimidazolium and 1-methylimidazolium with inorganic anions like PF6, BF4, and PTSA, as catalysts and reaction media for the production of fluorinated spiro[3H-indole-3,2′-tetrahydro-1,3-thiazine]. Fluorinated spiro[3H-indole-3,2′-thiazolidine]-2,4′(1H)-diones (19) were produced using a one-pot, environment-friendly, microwave-assisted process (Scheme 4). Significant substrate conversion and product selectivity were achieved in the synthesis of physiologically potent fluorinated spiro-indole[thiazine/thiazolidinone] derivatives with the use of catalytic quantities of ionic liquids. The creation of spiro derivatives using ILs as the reaction medium proceeded smoothly, and the products were easily decanted from the ionic liquid. The study concluded that such a reaction medium facilitated production, separation, and reuse of the catalyst, and that it was inexpensive and advantageous to the environment. The aforementioned substances also shielded guinea pig ileum against histamine induction. The assessment of pA2 levels confirmed the existence of H1-antagonism.

Solid-supported acid-catalysed synthesis
E. M. Hussein and colleagues 42 used sulfonated mesoporous silica (MCM-SO3H) as a heterogeneous and reusable acidic catalyst to establish a simple and effective one-pot production of polyfunctionalized spirothiazolidin-4-ones (23) (Scheme 5). This approach avoided the use of harmful solvents, resulting in greater yields under benign conditions as compared to the previously described synthetic procedures for spirothiazolidin-4-one molecules. Product (23) was produced under optimal conditions by reacting indoline-2,3-dione derivatives (20) with aromatic amines (21) to produce the corresponding imine derivative, followed by a reaction with thioglycolic acid (22). This approach improved yields while employing reasonably non-hazardous solvents. The primary features of this protocol included environment-friendly conditions, ease of reaction, quick reaction times, high yields, ease of work-up, as well as catalyst reusability. The putative mechanism proposed a viable process for the manufacture of spirothiazolidin-4-ones (23) by applying MCM-SO3H as a catalyst.
MCM-SO3H worked as a Brønsted acid catalyst, activating the cyclic ketone's carbonyl group, followed by a nucleophilic attack by the aromatic amine's –NH2 group. Imine intermediate I was formed after further dehydration.

On-water synthesis
Water is regarded as the ideal solvent for carrying out chemical reactions in green chemistry since it is inexpensive, non-toxic, safe, and causes no environmental harm. 43-47 Furthermore, because typical organic molecules are poorly soluble in water, using water as a solvent makes product purification by simple filtration or extraction quite straightforward. R. Singh and colleagues 48 described a water-facilitated, thiamine hydrochloride-promoted, efficient and eco-compatible process for the production of spiro[acenaphthylene-1,2′-[1,3]thiazolidine]-2,4′(1H)-diones (27) via the MCR of substituted anilines, α-mercaptocarboxylic acid, and acenaphthylene-1,2-dione at 80 °C (Scheme 7). One C–S bond and two C–N bonds were formed during this conversion, which resulted in the one-pot assembly of a five-membered ring. The process was very attractive, sustainable, and economical because the catalyst was easily accessible and recoverable, the product yield was excellent, the work-up was rapid, catalyst handling was simple, and no toxic organic solvents were used. The suggested reaction mechanism showed that condensation between acenaphthoquinone and aniline, accompanied by the removal of a water molecule, resulted in the creation of imine derivative A. Intermediate B was formed by the nucleophilic addition of the thiol group of α-mercaptocarboxylic acid (25) to imine A.
(Scheme 6: feasible mechanism for the production of spiro-thiazolidin-4-ones by MCM-SO3H. Scheme 8: plausible mechanism for the creation of the spiro[acenaphthylene-thiazolidine] derivatives.)
M. Nath and group 49 developed an energy-efficient, environmentally acceptable synthetic approach for the production of a range of pharmaceutically potent spiro[indoline-3,2′-thiazolidinone] derivatives (30) (Scheme 9). These derivatives were prepared by reacting primary amines (28) with different isatins (29) and thioglycolic acid in the presence of p-dodecylbenzenesulfonic acid (DBSA) as an effective Brønsted acid–surfactant combined catalyst in an aqueous medium at 25 °C. This synthetic technique had the advantages of operational simplicity, energy efficiency, high to remarkable isolated yields, and utilization of a favored green solvent system for product manufacturing. Scheme 11 depicts a possible mechanism for the synthesis of compounds 35/35′: hydrogen atoms could be abstracted from the hydroxy group in intermediate I, together with a proton from the H-base, to produce resonance structures II, III, and IV. The product was formed when III lost an electron in the intramolecular cyclization process. At the same time, one electron was accepted by the rGO cation to restore its original structure. Due to their high volatility, toxicity, corrosive character, and lack of recovery and reuse, homogeneous catalysts pose certain challenges in their use. As a result, following the principles of green chemistry, it is strongly advised that eco-friendly and reusable heterogeneous solid acids be used as a replacement for traditional, poisonous, and polluting homogeneous acid catalysts.
(Scheme 10: synthesis of spiro-thiazolidinethione from naphthol Mannich bases. Scheme 11: plausible mechanism for the synthesis of spiro-thiazolidinethiones.)
R. Singh and his co-workers 74 described a convenient and environmentally friendly method for the
synthesis of new hybrid spiro[indeno[1,2-b]quinoxaline-11,2′-thiazolidin]-4′-ones (39) via a multi-component reaction involving indeno[1,2-b]quinoxalinone (36), mercaptocarboxylic acids (37) and various types of amines (38), using urea–choline chloride as a green deep eutectic solvent and carbon-SO3H as a solid acid catalyst (Scheme 12). This procedure had the benefits of avoiding hazardous solvents and catalysts as well as high to outstanding product yields. Furthermore, both the catalyst and the DES were quantitatively recovered from the reaction mixture and utilized many times. A plausible mechanism was proposed for the production of the spiro[indeno[1,2-b]quinoxaline-11,2′-thiazolidin]-4′-ones (39). The carbon-SO3H increased the electrophilic nature of the carbonyl carbon (36), which was simultaneously attacked by the –NH2 group of the aniline (38).

An L-proline MNR-supported, catalyzed, effectual, one-pot, green, three-component approach was investigated by A. Bekhradnia and his group 75 for the stereoselective construction of a novel class of spirothiazolidine derivatives. The interaction of 5-arylidene thiazolidine-2,4-diones (43), isatin (40), and secondary amino acids (41a-c) with MCCFe2O4@L-proline (MnCoCuFe2O4@L-proline) magnetic nanorods as a new nanocatalyst yielded a series of spiro-heterocycle derivatives (44a-c) stereoselectively and in high yields (Scheme 14). Thermal stability, magnetic characteristics, and other physicochemical features of the produced catalyst were all thoroughly investigated using a variety of methodologies. This catalyst was demonstrated to be an effective and reusable catalyst when it was used to produce endo-isomers of spirocyclic pyrrolidine/pyrrolizidine/pyrrolothiazolidine derivatives. The presented method's primary appealing features included its high yield, high degree of diastereoselectivity, avoidance of the production of undesirable side products, and simplicity of catalyst recovery without suffering a significant loss of catalytic activity.

Asymmetric synthesis
D. M. Du et al. 76 developed a bifunctional squaramide-catalyzed asymmetric cascade aza-Michael/Michael addition process to synthesize chiral spirothiazolidinone tetrahydroquinolines with three contiguous stereocenters. To generate the spirothiazolidinone tetrahydroquinolines (47), numerous functionalized rhodanine derivatives (45) and 2-tosylaminochalcones (46) were reacted with a squaramide catalyst in CHCl3 solvent and stirred at 35 °C for 48 h (Scheme 15). Under moderate conditions, this cascade reaction yielded the required products in good to exceptional yields (up to >99% yield) with outstanding diastereoselectivity (>25:1 dr) and high enantioselectivity (up to 96% ee). More significantly, the stereoselectivity was unaffected by the scale-up and derivatization procedures.

Miscellaneous synthesis
R. Raghavachary et al. 77 synthesized diverse sugar-fused chromanono-thiolizidine derivatives (51) by the reaction of several 3-arylidenechroman-4-ones as dipolarophiles (49) with thiazolidine-4-carboxylic acid (50) and a sugar aldehyde (48) in refluxing toluene for 8 h in good yields (Scheme 16). The cycloaddition was found to be extremely regioselective, and the O-benzyl group was found to be removed. High regioselectivity, formation of mainly a single product, and broad functional group tolerance were significant features of this reaction. B. S. Kumar and his colleagues 79 designed a multistep synthetic strategy for some new spiro compounds comprising a thiazolidinone (56) and carried out the corresponding antimicrobial evaluation (Scheme 18).
The antimicrobial activity of the recently produced compounds was assessed by the cup plate method. The spiro derivatives with –OCH3, –Cl, and –Br substituents displayed promising in vitro antimicrobial activity. The hypothesized mechanism for the production of the final product 61 was depicted (Scheme 20). The planned route was divided into two sections; the first stage comprised a Cu(I)-catalyzed [3 + 2] azide–alkyne cycloaddition to produce intermediate A. Cu(I) was created in situ by sodium ascorbate's well-known reduction of Cu(II) to Cu(I). 32

Biological activity of spiro-thiazolidine derivatives
A broad overview of the pharmaceutical activities, such as antimicrobial, anticancer, antidiabetic, antioxidant, and antitubercular activities, displayed by spiro-thiazolidine derivatives, with the best candidates reported in the past eight years, is given in this review, explaining the importance of spiro-thiazolidines in medicinal chemistry (Fig. 4 and Table 2).

Antimicrobial activity
P. N. Patel and Y. S. Patel 83 synthesized various spiro-thiazolidinone heterocyclic compounds and tested them for antibacterial (MIC/MZI) and antifungal (MIC/MZI) activities against Gram-positive bacteria such as Bacillus subtilis, Bacillus sphaericus, and Staphylococcus aureus, and Gram-negative bacteria such as Pseudomonas aeruginosa, Klebsiella aerogenes, and Chromobacterium violaceum by disc diffusion and microdilution/turbidimetric methods. Agar diffusion and broth dilution methods were used to test the antifungal activity against Candida albicans, Aspergillus fumigatus, Trichophyton rubrum, and Trichophyton mentagrophytes in DMSO. The antimicrobial screening results validated that practically all the candidates were active and had moderate to good antibacterial activity compared to standard drugs. At the studied doses, compound (75), having a large heterocyclic system on the thiazolidin-4-one ring, exhibited a robust inhibitory effect compared to the reference medications (antibacterial: streptomycin; antifungal: amphotericin B), whereas other compounds showed considerable antibacterial activity. According to the findings, an increase in the number of heterocyclic rings combined with the thiazolidin-4-one ring might be the cause of the considerable increase in inhibitory action. The inclusion of a chlorophenyl group in the molecules also boosted the inhibitory effect (Fig. 5). G. N. Kandile and co-researchers 84 synthesized new Schiff bases containing thiazolidine motifs and further checked them for in vitro antimicrobial activity using the broth dilution procedure against two Gram-positive bacterial strains (Staphylococcus aureus and Staphylococcus epidermidis), two Gram-negative bacterial strains (Escherichia coli and Klebsiella pneumoniae) and two fungal strains (Aspergillus fumigatus and Candida albicans), in terms of minimal inhibitory concentration (MIC), minimal bactericidal concentration (MBC), and minimum fungicidal concentration (MFC). The standard drug used for antibacterial activity was sulfamethoxazole, and for antifungal activity fluconazole. Compound (76) displayed the best antimicrobial activity among the thiazolidine derivatives (Fig. 6). E. M. Hussein and team 81 reported the synthesis of thiazolidine moiety-based spiro heterocycles, and their in vitro antimicrobial activity was studied using the cup plate procedure.
All the synthesized derivatives were assessed for antibacterial activity at a concentration of 50 mg mL−1 against Gram-positive (Staphylococcus aureus) and Gram-negative (Escherichia coli, Pseudomonas aeruginosa) bacteria using the standard antibacterial reference ciprofloxacin. The compound 2-(4-methoxybenzylidene)-4-(morpholinomethyl)-1-thia-4-azaspiro[4.5]decan-3-one (85) showed excellent activity against both Gram-positive and Gram-negative bacterial strains. The results specified that the type of aromatic and aliphatic substituents was the governing factor for the activity of the synthesized compounds. Based on the structure-activity relationships (SAR), a phenyl ring bearing an electron-donating group (MeO–), as demonstrated in compound (85), enhanced activity, whereas a phenyl ring with electron-withdrawing groups (–Cl, –NO2) showed less antibacterial potential. Polycyclic heterocycles containing a spiro thioxothiazolidin-4-one were produced and tested for antimicrobial activity by A. Barakat and co-authors 87 via the agar diffusion well process against bacterial strains (Streptococcus pneumoniae, Bacillus subtilis, Pseudomonas aeruginosa, and Escherichia coli) and fungal strains (Aspergillus fumigatus, Syncephalastrum racemosum, Geotrichum candidum, and Candida albicans). In terms of antibacterial activity, compounds such as (88) outperformed the chosen benchmarks ampicillin and gentamicin, and in terms of antifungal activity, amphotericin B and fluconazole. The phenyl group played an essential role in determining drug interaction inside the receptors, according to a molecular docking investigation of the produced compounds (Fig. 13).

Conclusions
The current review has highlighted the tremendous impact of green, effective and eco-benign synthetic strategies for the generation of bioactive spiro-thiazolidines via several steps, different reactants, catalysts, and various conditions. Spiro-thiazolidines have been, and will continue to be, cherished synthetic targets among heterocycles due to their high potential importance as pharmaceuticals and materials. The protocols for their synthesis are highly demanding and will remain an important endeavor in the years to come. In this review, we have concentrated on the most recent advancements in spiro-thiazolidine synthesis under varied conditions, such as microwave-assisted synthesis, ionic liquid-assisted synthesis, solid-supported acid-catalyzed synthesis, on-water synthesis, nanocatalyst-assisted synthesis, and asymmetric synthesis, developed over the last 8 years. The antiviral, antifungal, antibacterial, diuretic, antihistaminic, tuberculostatic, anticancer, anticonvulsant, analgesic, and anti-inflammatory actions were significantly influenced by the heterocyclic framework of these compounds. As a result, we hope that this review will assist researchers in delving further into the many aspects of this subject matter to uncover hidden prospects and serve as a roadmap for the development of many more unique, innovative, and environmentally friendly synthetic techniques. These synthetic techniques will continue to gain in popularity, with a wide range of applications in chemical synthesis, pharmacology, and medicine. The goal of this review is to emphasize the importance of spiro-thiazolidines in synthetic, pharmaceutical, and other disciplines, making them a useful topic in organic chemistry.

Conflicts of interest
There are no conflicts to declare.
2023-01-27T16:09:03.014Z
2023-01-24T00:00:00.000
{ "year": 2023, "sha1": "a158e8c7d1bcc61176a58e64aa8b86f23c33e426", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1039/d2ra07474e", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "769a97a649e98d9aa654092cf8bbbf08d3cf5c47", "s2fieldsofstudy": [ "Chemistry", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
260202044
pes2o/s2orc
v3-fos-license
A genome-wide genomic score added to standard recommended stratification tools does not improve the identification of patients with very low bone mineral density

Summary
The role of integrating genomic scores (GSs) needs to be assessed. Adding a GS to recommended stratification tools does not improve the prediction of very low bone mineral density. However, we noticed that the GS performed equally to or above individual risk factors in discrimination.

Purpose
We aimed to investigate whether adding a genomic score (GS) to recommended stratification tools improves the discrimination of participants with very low bone mineral density (BMD).

Methods
BMD was measured in three thoracic vertebrae using CT. All participants provided information on standard osteoporosis risk factors. GSs and FRAX scores were calculated. Participants were grouped according to mean BMD into very low (<80 mg/cm3), low (80–120 mg/cm3), and normal (>120 mg/cm3) and according to the Bone Health and Osteoporosis Foundation recommendations for BMD testing into an "indication for BMD testing" and a "no indication for BMD testing" group. Different models were assessed using the area under the receiver operating characteristic curve (AUC) and reclassification analyses.

Results
In the total cohort (n=1421), the AUC for the GS was 0.57 (95% CI 0.52–0.61), corresponding to the AUCs for osteoporosis risk factors. In participants without indication for BMD testing, the AUC was 0.60 (95% CI 0.52–0.69), above or equal to the AUCs for osteoporosis risk factors. Adding the GS to a clinical risk factor (CRF) model resulted in AUCs not statistically significantly different from the CRF model. Using probability cutoff values of 6, 12, and 24%, we found no improved reclassification or risk discrimination using the CRF-GS model compared to the CRF model.

Conclusion
Our results suggest that adding a GS to a CRF model does not improve prediction. However, we noticed that the GS performed equally to or above individual risk factors in discrimination. Clinical risk factors combined showed superior discrimination to individual risk factors and the GS, underlining the value of combined CRFs in routine clinics as a stratification tool.

Supplementary Information
The online version contains supplementary material available at 10.1007/s00198-023-06857-w.

Introduction
Osteoporosis is a major health problem due to low diagnostic rates and disease-related consequences, such as osteoporotic fractures and increased mortality. Although osteoporosis is a frequent and preventable disease, screening tests and referral to diagnostic tests have not increased diagnostic rates sufficiently [1,2]. Quantitative computed tomography (CT) is a relatively new, promising method for identifying high-risk individuals using CT scans to quantify bone mineral density (BMD). BMD is a well-known surrogate marker of osteoporosis [3]. Opportunistic screening using CT scans has the benefit of using pre-existing scans made for various clinical indications. We have recently demonstrated that spine BMD measured from routine cardiac CT scans is associated with fracture rate in a large group of individuals referred to cardiac CT [4]. Furthermore, we have documented that a substantial proportion of individuals (179/1487; 12%) referred to routine cardiac CT have very low BMD (thoracic spine BMD < 80 mg/cm3) [5]. These findings suggest the feasibility of using CT for opportunistic screening to increase diagnostic rates and identify high-risk osteoporotic individuals.
Current screening guidelines differ between countries, and in the absence of validated screening strategies, individual risk stratification is often based on identifying osteoporosis risk factors [6,7]. The Fracture Risk Assessment tool (FRAX) can be used to assess an individual's absolute 10-year fracture probability based on clinical risk factors with or without BMD measurements [8]. Recently, a randomized controlled trial has shown the benefits of using FRAX to screen women for further BMD assessment and pharmacological treatment by reducing hip fractures [9], underlining the clinical value of screening using FRAX. However, the ability of FRAX to discriminate between men with fractures and men without fractures is limited [10]. Screening using either a model including risk factors for osteoporosis or FRAX could possibly be improved by adding the individual's inherent risk of osteoporosis, especially in individuals otherwise not identified using standard screening tools. Osteoporosis is a polygenic disease associated with a strong heritability (twin study-based H2 estimates range between 0.77 and 0.89 for lumbar and femoral neck BMD [11]); hence, family predisposition is a strong independent risk factor for osteoporosis [12,13]. Genome-wide association studies (GWASs) have identified numerous genetic variants associated with BMD and/or osteoporotic fractures [14-17], from which a genomic score (GS) can be constructed [18]. The GS, also known as a polygenic score, is the sum of the common alleles an individual carries, weighted by the single nucleotide polymorphism (SNP) effect sizes estimated from GWAS. A GS could be a promising tool to identify patients with very low BMD who are otherwise invisible in the daily clinic. BMD measurements are accessible and relatively cheap; however, patients with osteoporosis often go undiagnosed. Research in genetic risk profiling and focus on personalized medicine have increased in recent years and could be a valuable tool for assessing the risk of very low BMD in the future. However, the clinical role of integrating GSs in medical clinics to improve the selection of individuals for BMD testing needs to be further assessed. We hypothesize that a GS adds value to the identification of patients with very low BMD based on a clinical risk factor model. Therefore, the aim of this study was to investigate whether adding a GS to recommended stratification tools improves the discrimination of participants with very low BMD.

Study design and study participants
This cross-sectional sub-study of the multicenter Danish Study of Non-Invasive Diagnostic Testing in Coronary Artery Disease (Dan-NICAD) included participants (n=1675) referred to a cardiac CT between September 2014 and March 2016 at two centers. All participants had an indication for a coronary CT angiography, a low to intermediate risk profile of coronary artery disease (CAD), and did not have a history of CAD. Inclusion and exclusion criteria have previously been reported [19]. All participants gave written, informed consent before enrolment. The Central Denmark Region Committee on Health Research Ethics approved the study.

Image acquisition and BMD analyses
All participants were referred for routine cardiac CT on the clinical indication of CAD. Using the non-enhanced cardiac CT, a mean BMD was analyzed for every participant by an experienced reader (Josephine Therkildsen (JT)), blinded to participant data, based on three consecutive thoracic vertebrae beginning from the left main coronary artery.
The selected thoracic vertebrae have previously been demonstrated to include Th6 (11%), Th7 (48%), Th8 (37%), and Th9 (4%) [20]. The Mindways QCT Pro software (Mindways Software Inc, TX) was used to measure BMD together with the calibration phantom Mindways Solid 3 (Mindways Software Inc, TX), as previously described [5]. The mean BMD was grouped as very low (<80 mg/cm3), low (80–120 mg/cm3), and normal BMD (>120 mg/cm3), as described by the American College of Radiology (ACR) for lumbar spine QCT [21].

Osteoporosis risk factors
During interviews at the baseline visit and from electronic patient records, trained study nurses obtained information on self-reported osteoporosis risk factors including age, sex, smoking status, alcohol history, and family predisposition for osteoporosis (first-degree relatives having osteoporosis or a history of hip fractures). Anthropometric measurements, such as weight and height, were also obtained. Danish patient registries were used to collect the history of previous fractures, both any type of fracture and osteoporosis-related fractures (hip, spine, forearm, and proximal humerus fracture), use of glucocorticoids and anti-osteoporotic treatment, and known diseases to calculate the Charlson comorbidity index, as previously reported [4]. Using the Bone Health and Osteoporosis Foundation (BHOF; formerly known as the National Osteoporosis Foundation) recommendations for BMD testing, participants were divided into an "indication for BMD testing" and a "no indication for BMD testing" group. The categorization was based on (1) age + sex (age cutoff in women ≥ 65 years and men ≥ 70 years irrespective of osteoporosis risk factors) and (2) age + sex + clinical risk factors (postmenopausal women and men older than age 50 years with clinical risk factors, which here included body mass index < 20 kg/m2, previous osteoporosis-related fracture, family history of osteoporosis, active smoking, glucocorticoid treatment within the previous 6 months, and secondary osteoporosis (here defined as diabetes mellitus type 1, early menopause < 45 years of age, or Charlson comorbidity index ≥ 2)) [22].

FRAX calculation
The FRAX tool estimates the 10-year probability of fracture (hip and major osteoporotic fracture) (https://www.sheffield.ac.uk/FRAX/tool.aspx?country=29). A previous osteoporosis-related fracture was defined as a previous fracture at the hip, spine, proximal humerus, or forearm registered in the Danish National Patient Registry using the 10th edition of the International Classification of Diseases [23], as previously reported [4]. Participants were asked if they were predisposed to osteoporosis, including first-degree relatives having known osteoporosis or a history of hip fractures. Current systemic glucocorticoid use was defined as a redeemed prescription within 6 months prior to the baseline visit. All participants with self-reported or known rheumatoid arthritis, Crohn's disease, colitis ulcerosa, osteogenesis imperfecta, severe scoliosis, Scheuermann's disease, lymphoma, leukemia, or myelomatosis were excluded from BMD analyses; thus, these risk factors were considered as not present for the entire cohort. Secondary osteoporosis was defined as diseases strongly associated with osteoporosis (type 1 diabetes mellitus, early menopause (<45 years), or Charlson comorbidity index ≥ 2).
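To make the two groupings described above concrete, the following Python sketch encodes the ACR BMD categories and the BHOF "indication for BMD testing" rule. This is an illustrative reading of the criteria, not study code; the function and field names are hypothetical.

```python
# Minimal sketch (assumptions: hypothetical field names, one participant at a time).
def acr_bmd_category(mean_bmd_mg_cm3: float) -> str:
    """ACR categories for QCT spine BMD: very low (<80), low (80-120), normal (>120)."""
    if mean_bmd_mg_cm3 < 80:
        return "very low"
    if mean_bmd_mg_cm3 <= 120:
        return "low"
    return "normal"

def bhof_indication(sex: str, age: float, postmenopausal: bool, bmi: float,
                    prior_osteoporotic_fracture: bool, family_history: bool,
                    active_smoker: bool, recent_glucocorticoids: bool,
                    secondary_osteoporosis: bool) -> bool:
    # (1) Age + sex alone: women >= 65 years, men >= 70 years
    if (sex == "F" and age >= 65) or (sex == "M" and age >= 70):
        return True
    # (2) Postmenopausal women / men older than 50 years with >= 1 clinical risk factor
    at_risk_age = (sex == "F" and postmenopausal) or (sex == "M" and age > 50)
    risk_factor = any([bmi < 20, prior_osteoporotic_fracture, family_history,
                       active_smoker, recent_glucocorticoids, secondary_osteoporosis])
    return at_risk_age and risk_factor

print(acr_bmd_category(74.3))                                  # -> "very low"
print(bhof_indication("F", 58, True, 19.2, False, False,
                      False, False, False))                    # -> True (low BMI)
```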
Consumption of alcohol in units/day was not recorded; however, participants with chronic alcoholism were excluded from BMD analyses, and so the risk factor of alcohol consumption ≥ 3 units/day was considered as not present for the entire cohort. In general, there were a few missing values for osteoporosis risk factors (Table 1 legend). If a participant had a missing risk factor, the FRAX score was recorded as missing (n=32). Seven participants had a weight above 125 kg, and as FRAX only allows a maximum weight of 125 kg to be included, these participants were regarded as having a weight of 125 kg.

Genomic scores
All participants had blood samples collected (4 mL EDTA whole blood) at baseline for genotyping. DNA was extracted, and only high-quality DNA was used for genotyping with the Illumina Global Screening Array (Illumina, Inc., 5200 Illumina Way, San Diego, CA) at deCODE genetics (deCODE genetics, Inc., Reykjavik, Iceland). Quality control was performed with Plink 1.9 [24], removing samples and SNPs with missingness above 0.02, sex discrepancy, minor allele frequency <0.01, and deviation from Hardy-Weinberg equilibrium (p<1 × 10−6), which resulted in a total of 310,531 autosomal SNPs. Furthermore, first- and second-degree relatives were identified based on identity by descent (π̂ > 0.2), and ancestral outliers were identified by multidimensional scaling, which in total removed 27 samples. Genotypes were imputed using the Michigan Imputation Server [25] (reference panel HRC r1.1 2016, GRCh37).
(Table 1 legend: Participants were grouped into low GS (0–20th percentile), average GS (20–80th percentile), and high GS (80–100th percentile) based on the GS_TB_BMD (P<0.001). Data are presented as the number with the percentage in parentheses, as a mean with a standard deviation, or as a median with an interquartile range. P values are from one-way ANOVA, the Kruskal-Wallis test, or Fisher's exact test comparing the three GS groups; P values < 0.05 are highlighted in bold. Overall, a few missing values were registered: nine participants had missing BMI; 5 had missing smoking status; 7 had missing history of family disposition; 80 women had missing postmenopausal status; 17 had missing data on previous fracture, comorbidity, and glucocorticoid use; 10 had missing vitamin D and/or calcium intake data; 7 had missing data regarding known osteoporosis at baseline; and 32 had missing FRAX. Abbreviations: BMD, bone mineral density; BMI, body mass index; FRAX, Fracture Risk Assessment tool; BHOF, Bone Health and Osteoporosis Foundation; GS, genomic score. *Data from the Danish National Patient and Prescription Registries; medication use was defined as a redeemed prescription within 6 months prior to the baseline visit. °The Bone Health and Osteoporosis Foundation recommends BMD testing in women aged ≥ 65 years and men aged ≥ 70 years irrespective of osteoporotic risk factors [22]. °°The Bone Health and Osteoporosis Foundation also recommends BMD testing in postmenopausal women and men older than age 50 years with clinical risk factors [22]; in this participant cohort, risk factors included body mass index < 20 kg/m2, previous osteoporosis-related fracture, family history of osteoporosis, active smoking, glucocorticoid treatment within the previous 6 months before baseline, and secondary osteoporosis (diabetes mellitus type 1, early menopause < 45 years of age, and Charlson comorbidity index ≥ 2).)
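A minimal sketch of the GS grouping used in the Table 1 legend above (low = 0–20th percentile, average = 20–80th, high = 80–100th) is shown below. It is illustrative only, assumes standardised scores are already available, and is not the study's own analysis code.

```python
# Minimal sketch (illustrative, simulated scores): percentile-based GS groups.
import numpy as np

def gs_groups(gs: np.ndarray) -> np.ndarray:
    """Label each standardised GS value as 'low', 'average', or 'high'."""
    p20, p80 = np.percentile(gs, [20, 80])
    return np.where(gs < p20, "low", np.where(gs <= p80, "average", "high"))

z = np.random.default_rng(2).normal(size=1421)        # toy standardised GS values
labels, counts = np.unique(gs_groups(z), return_counts=True)
print(dict(zip(labels, counts)))                      # roughly 20% / 60% / 20%
```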
SNPs with a minor allele frequency below 0.01, missing genotype rate >5%, deviation from Hardy-Weinberg equilibrium (p < 1 × 10⁻⁶), imputation quality below 0.9, and SNPs located within the major histocompatibility complex (MHC; 6p22.1-21.3: chr6:28,477,797-33,448,354, GRCh37) were removed, resulting in a total of 6,565,335 autosomal genetic variants, with positions annotated to genome build GRCh37. The GS was computed based on a GWAS meta-analysis assessing total body (TB) BMD (GS TB_BMD) as measured by dual-energy X-ray absorptiometry (DXA) in 66,628 individuals, comprising 86% European ancestry, 2% African American, and 14% from an admixed background [17]. The GS TB_BMD was predefined as the primary GS based on an available full GWAS summary statistic. Two other GSs (GS FN_BMD and GS HQ_SOS) were constructed and compared with the primary GS TB_BMD (Supplementary Table S1). The summary statistics for the GS FN_BMD were based on a GWAS meta-analysis assessing femoral neck (FN) BMD measured by DXA in 32,961 individuals with populations from Europe, America, East Asia, and Australia [14]. Only 54 SNPs associated with BMD were publicly available. Finally, the GS HQ_SOS was based on 11,709 SNPs from a study assessing heel quantitative ultrasound speed of sound (HQ_SOS) in 341,449 individuals mainly of white British ancestry [15]. The GSs were computed based on genetic markers pruned for linkage disequilibrium (r² < 0.9) as GS = Σ_{i=1}^{m} X_i·β̂_i, where X_i is the ith genotype encoded as 0, 1, or 2, counting the number of the first allele in the bim file, and β̂_i is the marker effect of the ith marker. Each GS was computed using different SNP P value thresholds (0.001, 0.01, 0.1, 0.5, 0.7, 0.9) to find the optimal GS performance. Processing of the genetic data and computing the GS were performed with the R package qgg [26,27].
Statistical analysis
All GSs were standardized to a mean of zero and a standard deviation (SD) of one. Demographics at baseline were compared by GS groups (low, average, and high) using a one-way analysis of variance, Kruskal-Wallis test, or Fisher's exact test. Data are presented as a number with a percentage for categorical variables, as a mean with SD for normally distributed continuous variables, or as a median with an interquartile range for non-normally distributed continuous variables. The correlations between the GS and mean BMD, together with the GS and osteoporosis risk factors, were estimated using Pearson correlation, Spearman correlation, or logistic regression coefficients. Four different logistic regression models were used to estimate odds ratios (OR) (and 95% confidence intervals) for having very low BMD (yBMD, defined as mean BMD <80 mg/cm³ as described by ACR [21]). Models 2 and 3 were based on a combined model for clinical risk factors (CRF), which included age, sex, BMI, active smoking, osteoporosis disposition, previous osteoporosis-related fracture, glucocorticoid treatment, and secondary osteoporosis. The GS and BMI were multiplied by −1, as an inverse association with mean BMD was present and statistical tests assume higher values indicating higher risk. The ORs presented in Table 2 are per 1 SD in GS standardized to the current dataset using z scores. Using logistic regression, the CRF and CRF+GS models were compared using the likelihood ratio test. We evaluated each model's ability to discriminate participants having very low BMD using the area under the receiver operating characteristic curve (AUC).
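To make the scoring formula concrete, the sketch below computes a genomic score of this form in NumPy: allele counts (0/1/2) are multiplied by published marker effects, restricted to SNPs passing a chosen GWAS P value threshold, and the score is then standardized. The study itself used the R package qgg; this Python version and its variable names are purely illustrative.

```python
import numpy as np


def genomic_score(genotypes, effects, pvalues, p_threshold=0.001):
    """GS_j = sum_i X_ji * beta_hat_i over SNPs with GWAS P < p_threshold.

    genotypes : (n_individuals, n_snps) array of allele counts (0, 1, 2)
    effects   : (n_snps,) marker effects (beta_hat) from the GWAS summary statistics
    pvalues   : (n_snps,) GWAS P values used to select SNPs
    """
    keep = pvalues < p_threshold
    gs = genotypes[:, keep] @ effects[keep]
    return (gs - gs.mean()) / gs.std()  # standardize to mean 0, SD 1


# Toy example: 3 individuals, 4 SNPs, two of which pass the 0.001 threshold
X = np.array([[0, 1, 2, 1], [1, 0, 1, 2], [2, 2, 0, 0]])
beta = np.array([0.05, -0.02, 0.03, 0.01])
p = np.array([1e-5, 0.2, 1e-4, 0.5])
print(genomic_score(X, beta, p))
```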
Furthermore, we evaluated the ability of reclassifying participants using the CRF model vs. the CRF+GS model using pre-test probability cutoff values of 6%, 12%, and 24% [29]. These cutoff values were based on the prevalence (p) of having very low BMD (12%) in the total cohort, by calculating p/2, p, and 2p, as recommended when clinical cutoff values do not exist [30]. The net reclassification index (NRI) and integrated discrimination improvement (IDI) were calculated. In order to internally validate the CRF and CRF+GS models, we compared AUCs with the FRAX and FRAX+GS models (not fitted to the current study [31]) and tested AUCs using 5-fold cross-validation [32]. A P value < 0.05 was considered statistically significant, and in multiple comparisons, P values were adjusted according to the Bonferroni adjustment (P value/n tests) when relevant. The sample size calculation has previously been published [19]. All analyses were performed using STATA/IC 17.0 (StataCorp, College Station, Tex).
Study population
In total, 1421 participants (672 men, 749 women) had BMD measurements and GS available. The participant flowchart is displayed in Fig. 1. The mean age for the total population was 57 ± 9 years (range 40-79 years). Baseline characteristics stratified by GS groups are shown in Table 1. Assessing three different genomic scores, GS TB_BMD and GS HQ_SOS showed significant associations with mean BMD, but the GS FN_BMD did not (Supplementary Table S1). Overall, a greater GS TB_BMD was associated with higher BMD in both men and women (Fig. 2), with 3% of the variance in the mean BMD explained by the GS. The GS was not statistically significantly correlated with osteoporosis risk factors, except for family predisposition in women, likely capturing the inherited risk (Supplementary Table S3). GS and mean BMD were overall correlated across age and BMD groups, suggesting its predictive value irrespective of age and BMD groups (Supplementary Table S4).
Genomic score - discrimination of participants with very low BMD
The GS showed a similar ability in identifying participants having very low BMD compared to standard clinical risk factors, as displayed by the AUC in the total cohort and in participants with indication for BMD testing (Fig. 3A and 3B). Per one SD decrease in GS, the odds of having very low BMD were increased by 34% (Table 2). Subgroup analyses including women vs men showed similar results (Table 2). In participants without indication for BMD testing, the OR for very low BMD per SD decrease was 1.56 (95% CI 1.18-2.07, P<0.01) compared to 1.24 (95% CI 1.02-1.50, P<0.05) in participants with an indication for BMD testing.
Table 2 legend: Data are the number with percentages in parentheses or odds ratio (OR) with 95% confidence intervals (CI) and P values. The GS and BMI were computed as continuous variables multiplied by −1, and the mean BMD was grouped as very low BMD (<80 mg/cm³) or not (≥80 mg/cm³). The OR is presented per 1 standard deviation in GS standardized to the current dataset using z scores. In the sex-stratified analyses, sex was omitted from the models. Models 2 and 3 are fitted to the current study, thus tending to an optimistic estimate of the predictive performance compared to Models 1 and 4. Model 4 was used as a reference platform, well knowing that FRAX is only validated for assessing fracture risk and not very low BMD.
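For illustration, the function below computes a categorical net reclassification index at the study's pre-test probability cutoffs (6%, 12% and 24%, i.e. p/2, p and 2p for a 12% prevalence) from the predicted risks of two models. It is a generic, textbook-style NRI sketch with invented example data, not the code used for Table 3.

```python
import numpy as np


def categorical_nri(risk_old, risk_new, events, cutoffs=(0.06, 0.12, 0.24)):
    """Categorical NRI = (P(up) - P(down) | events) + (P(down) - P(up) | non-events)."""
    risk_old, risk_new = np.asarray(risk_old), np.asarray(risk_new)
    events = np.asarray(events, dtype=bool)
    cat_old = np.digitize(risk_old, cutoffs)   # risk category 0..3 per participant
    cat_new = np.digitize(risk_new, cutoffs)
    up, down = cat_new > cat_old, cat_new < cat_old
    nri_events = up[events].mean() - down[events].mean()
    nri_nonevents = down[~events].mean() - up[~events].mean()
    return nri_events + nri_nonevents


# Toy example: the new model moves one event up and one non-event up a category
print(categorical_nri([0.05, 0.10, 0.20], [0.15, 0.10, 0.30], [1, 0, 0]))  # 0.5
```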
No improvements in net reclassification or integrated discrimination were detected when adding the GS to the CRF model using the predefined probability cutoff values of 6%, 12%, and 24% in participants without an indication for BMD testing (Table 3). We observed a few individuals without very low BMD being misclassified using the CRF+GS model (n=3) (Table 3). Furthermore, no improvements were detected in participants with an indication for BMD testing or for the total cohort (Supplementary Tables S5 and S6). To assess the possible benefits of using GS for risk stratification, we assessed the effect of including the GS in the stratification algorithm as recommended by BHOF (Supplementary Figure S4). In total, 841 individuals did not have an indication for BMD testing based on standard stratification tools; however, 54 cases with very low BMD (6%) fell into this category. The number of cases that would be screened if incorporating the low GS in risk stratification is n=150, with a possibility to identify 17 more cases with very low BMD.
Fig. 1 caption: Study participant flowchart. In seven participants, the non-enhanced cardiac CT (used for BMD measurements) was not performed based on a pre-test individual clinical risk assessment. In 181 participants, BMD measurements were excluded due to technical or participant-related factors known to affect BMD measurements. In 66 participants, no GSs could be made, as genotyping from blood samples was not performed. The final study population consisted of 1421 individuals with both BMD and GS available. Abbr.: CT computed tomography; BMD bone mineral density; GS genomic score.
Prediction models - discrimination of very low BMD
The likelihood-ratio test showed a significant improvement of adding the GS to clinical risk factors (P<0.001). Subsequently, a CRF model including clinical risk factors (age, sex, BMI, active smoking, family history of osteoporosis, previous osteoporosis-related fracture, glucocorticoid treatment, and secondary osteoporosis) was developed to predict very low BMD using all available participants. Analysis of the model's ability to discriminate very low BMD showed an AUC of 0.78 (95% CI 0.74-0.81). Next, a CRF+GS model was developed by adding the GS to the clinical risk factor model, resulting in an AUC of 0.78 (95% CI 0.75-0.81). Directly comparing AUCs did not show a statistically significant improvement over the AUC for the CRF model alone (Supplementary Figure S3). In participants without indication for BMD testing, adding the GS to a clinical risk factor (CRF) model resulted in an AUC of 0.80 (95% CI 0.75-0.85), not statistically significantly different from the CRF model alone. AUCs stratified by indication for BMD testing are shown in Fig. 3. Internal validation of the CRF and CRF+GS models is shown in Supplementary Figure S5. The models overall showed good calibration, although the CRF+GS model showed a tendency of better calibration at higher risks compared to the CRF model alone. We computed the AUCs for the FRAX MOF model (Supplementary Figure S3). A five-fold cross-validation revealed AUCs for the CRF and CRF+GS models of 0.77 (95% CI 0.73-0.81) and 0.77 (95% CI 0.73-0.81), suggesting a small over-fitting but without affecting the difference in AUCs between the models.
Discussion
In the present study, the GS was weakly associated with mean BMD as measured from cardiac CT scans (Supplementary Table S1). Nonetheless, the GS showed equal or greater discrimination than standard clinical risk factors, especially in participants without an indication for BMD testing.
In participants without an indication for BMD testing, the odds for having very low BMD per SD decrease in GS were 1.56 (95% CI 1.18-2.07, P<0.01) compared to 1.24 (95% CI 1.02-1.50, P<0.05) in participants with an indication for BMD testing. However, when performing reclassification analyses, no significant improvements in prediction were detected when adding the GS to the CRF model, in the total cohort or in subgroup analyses, which is inconsistent with our stated hypothesis. Despite not detecting any statistically significant improvement in the combined model's AUC, a new predictor may still add valuable improvement in prediction, as the AUC is not sufficient to fully assess model performance. Often, despite improving the performance of a model, the increase in AUC is likely very small [29]. Furthermore, despite not adding statistically significant value, the combination of several risk factors can add value to prediction [33]. Therefore, we performed a reclassification analysis to further assess the contribution of the GS as a novel variable in risk assessment for identifying participants with very low mean BMD. To our knowledge, no other studies have assessed the clinical utility of adding a GS in a stratification algorithm to identify participants having very low BMD as measured from CT. There is a need to assess the clinical relevance of genetic profiling, as recent developments have resulted in low costs and fast routine assessment of genetic profiles being available, making personalized disease-preventive measures possible. If GS becomes a routine assessment, the genetic risk of very low BMD could be evaluated if indicated and requested by the clinician and the patient. Our primary outcome was quantitative CT-measured BMD; however, GWASs have primarily focused on DXA- or ultrasound-measured BMD and fracture risk [14,16,17], and to our knowledge, no GWAS has been conducted on CT-measured BMD. As differences in BMD measurements exist regarding units and measurement sites, this could reduce the discriminative performance of the GS and GS-combined models in this study. BMD of the femoral neck by DXA is used to diagnose osteoporosis, but total body BMD and BMD measured by ultrasound are not recommended for diagnosis according to the World Health Organization [34]. However, we compared different genomic scores derived from these measurement sites (GS TB_BMD, GS FN_BMD, and GS HQ_SOS), including femoral neck BMD, in order to assess this in our study population and in a clinical context.
Fig. 3 caption: Discrimination of very low BMD stratified by indication for BMD testing. The area under the receiver operating curves (AUC) using osteoporosis risk factors, the GS, and FRAX for discriminating participants having very low BMD, stratified by indication for BMD testing as recommended by the Bone Health and Osteoporosis Foundation based on age, sex, and clinical risk factors [22]. In total, 580 participants had an indication for BMD testing (A) and 841 participants did not (B). The AUC between the individual osteoporosis risk factors (including GS) was not statistically different in participants with an indication for BMD testing (P=0.47). However, in participants without indication for BMD testing, the AUC for GS alone performed above some individual clinical risk factors (sex, active smoking, and secondary osteoporosis, all P<0.0063) and equally to previous osteoporosis-related fracture, family predisposition, BMI, and glucocorticoid treatment (all P>0.0063). Bonferroni correction (0.05/8 = P<0.0063).
Adding the GS to the FRAX and CRF models did improve the AUC, except for adding the GS to the FRAX in the group without indication for BMD testing; however, this was not statistically significant (all P>0.05). The GS and BMI were included as continuous variables multiplied by −1, higher values thus indicating a higher risk. Abbr.: AUC area under the receiver operating curves; BMD bone mineral density; BMI body mass index; CI confidence intervals; CRF clinical risk factors; FRAX (MOF) the 10-year probability of major osteoporotic fractures calculated by the Fracture Risk Assessment tool (without BMD); GS genomic score.
We predefined the GS TB_BMD as our primary GS as it was based on a fully available GWAS summary statistic, and the GS TB_BMD (P<0.001) was selected for further analyses based on its strongest association with mean BMD in comparison to the other P value thresholds (Supplementary Table S1). There are numerous ways to calculate GSs [35], and our results are limited to this specific GS, limiting generalizability to other GSs. The overall effect size of identified genetic variants is small, suggesting at present that GSs for the prediction of fracture or low BMD alone have limited predictive power [36]. However, a GS may still be of clinical relevance combined with other clinically relevant predictors to fully capture the polygenic nature of osteoporosis. Furthermore, when GWASs for BMD further expand, the predictive power for discriminating disease likely increases. It is important to assess both BMD-specific GSs (based on GWASs for BMD) and fracture-specific GSs (based on GWASs for fracture) to assess the relevance in a clinical context and for patient care. Individuals with a BMD T score > −2.5 and no osteoporosis risk factors do not generally have an indication for specific anti-osteoporosis treatment, although most fragility fractures occur in individuals without osteoporosis [37,38]. This fact highlights the need to optimize the identification of individuals at high fracture risk, who are otherwise overlooked. A BMD-specific GS can potentially help identify inherited very low BMD, and thus high fracture risk, in younger individuals without clinical risk factors. Despite limited research in the management of osteoporosis in younger adults, it seems likely that these individuals represent a patient group where diagnosis and early treatment may positively impact quality of life [39]. Furthermore, these young patients are likely to be missed using standard clinical stratification tools. Further research in screening, treatment, and prediction of fractures in younger adults is warranted. Recent studies suggest adding a GS in osteoporosis screening could reduce the number of patients who need BMD testing and improve overall fracture risk prediction [15,16,40]. Due to differences in clinical outcomes and designs, it is challenging to directly compare our results with other studies. Ho-Le et al. (2017) assessed the predictive value of a BMD-associated genetic score for the incidence of radiographically verified fragility fractures. They documented that the genetic score improved the accuracy of fracture prediction above clinical risk factors [40]. Our primary outcome was CT-measured BMD, a surrogate marker of fracture risk, limiting direct comparison of outcomes.
However, we have previously shown that CT-measured BMD in our cohort is associated with a greater fracture rate, both incident osteoporosis-related fracture and any fracture, documenting the clinical feasibility of CT-measured BMD in predicting fractures across a broad age range [4]. Lu et al. (2021) assessed a genetic score for heel quantitative speed of sound, which they found to improve fracture risk prediction above traditional clinical risk factors when adding it to FRAX. The clinical cutoff was set at 20% and 3% for the prediction of major osteoporotic fractures and hip fractures, based on the indication for treatment as recommended by the BHOF [16]. Our primary GS was based on total body BMD as previously described, and we assessed a CRF model vs a CRF+GS model using cutoff values at 6%, 12%, and 24%. As the reclassification analysis is highly dependent on the cutoff values used, this limits direct comparison. We used FRAX as a reference platform to identify individuals at risk, as this model is validated for fracture probability, included in clinical guidelines, and not fitted to the current study data [41]. However, FRAX does not identify very low BMD but assesses fracture risk, and so our primary focus was a clinical risk factor model including known osteoporosis risk factors. Having differences in outcome and designs in mind, we could not confirm previous findings suggesting that adding a GS to recommended stratification tools could be of use in osteoporosis screening for personalized risk assessment. The aim of adding a GS is to identify patients who will benefit from interventions, including medical treatment, that will reduce fracture risk. Preventive interventions are especially cost-effective in high-risk individuals [42]. Based on our data, we found no benefit of adding a GS to a CRF model. Although DXA will likely always be favorable in availability and costs, future GSs might increase awareness of many silent diseases by identifying high-risk individuals. We noticed that the GS model performed equally to or above individual clinical risk factors in discriminating individuals with very low BMD. This suggests that if general genetic profiling of the population or subgroups of the population becomes more common, a GS might be of value in identifying individuals at risk who are otherwise overlooked. The combined clinical risk factor model (and FRAX) showed superior discrimination to individual risk factors and the GS, underlining the value of combined CRFs in routine clinics as a stratification tool. Our results highlight the need to carefully evaluate the benefits of new predictors in different cohorts using different study designs before implementation.
Strengths and limitations
The strengths of this study include the consecutive inclusion of a homogeneous study population of both men and women at two centers. Participants had free access to medical attention and were referred to cardiac CT by experienced cardiologists, limiting possible selection bias. All participants had relevant and extensive osteoporosis risk factors collected at their baseline visit, and a trained reader (JT) performed BMD analyses and calculated FRAX for all participants. BHOF releases comprehensive recommendations to guide clinicians to prevent and treat osteoporosis worldwide, including guidelines for BMD testing [22], and despite country-specific guidelines, we used these recommendations for a stratification algorithm to assess the GS in a clinical setting.
Limitations include the following: (1) Only participants with European ancestry referred to cardiac CT were included; thus, the results from this study may not be extrapolated to other cohorts. (2) FRAX includes previous fractures (defined as spontaneous or low-energy fractures in adult life). However, we defined a previous osteoporosis-related fracture based on diagnostic codes from Danish registries recorded at Danish hospitals since 1995. These registries do not differentiate low- from high-energy fractures, nor could we include non-clinical fractures, such as vertebral fractures only detected on imaging or vertebral fractures without imaging. Participants were asked if they were predisposed to osteoporosis, including first-degree relatives having osteoporosis or a history of hip fractures, and this was entered into the FRAX tool. Thus, a history alone describing parental hip fracture was not recorded despite being used in the FRAX tool. Current systemic glucocorticoid use was defined as a redeemed prescription recorded 6 months prior to the baseline visit; however, the FRAX tool defines use as either currently using or having been exposed for a minimum of 3 months to at least 5 mg daily. Secondary osteoporosis was here defined as type 1 diabetes mellitus, early menopause (<45 years), or a Charlson comorbidity index ≥ 2. Although the FRAX tool does not include a Charlson comorbidity index in its users' guide, this was chosen to include diseases with a known increased fracture risk [41], comparable to the clinical risk factor model. (3) Lumbar and thoracic spine BMD values show high correlations [20], but currently, only lumbar QCT BMD is recommended [43]. CT-measured thoracic BMD is a surrogate marker of fracture risk, currently not included in clinical guidelines to diagnose osteoporosis [20,43]. Despite BMD being the strongest individual predictor of fracture risk, the developed models' ability to predict future fractures needs to be assessed. In the absence of validated thoracic BMD cutoff values, we chose ACR's recommended lumbar spine BMD cutoff values for very low and low BMD. This likely underestimates the very low and low BMD groups due to physiological differences in BMD values between the thoracic and lumbar spine [44]. (4) In the absence of relevant clinical cutoff values, predefined cutoff values of 6%, 12%, and 24% were chosen according to the calculated p/2, p, and 2p as described by Cook et al. (2011) [30]. The results of the reclassification analyses were dependent on these cutoff values. We performed cross-validation, as a model tested in the dataset in which it was developed tends to give an optimistic estimate of the predictive performance [45]. Our analysis showed a more realistic estimate with a tendency of slight over-fitting, however not affecting the overall results. As our models were developed and assessed in the same study population, there is a need to externally validate the results of this study in an independent cohort to fully assess the generalizability of the results. In conclusion, adding a GS to a clinical risk factor model did not improve the overall prediction of very low BMD. However, we noticed that the GS performed equally to or above individual risk factors in discrimination. Clinical risk factors combined showed superior discrimination to individual risk factors and the GS, underlining the value of combined CRFs in routine clinics as a stratification tool.
Profile of Emotional-Social Competence of Quarantine Participants of the International Junior Science Olympiad (IJSO)
Participants of the International Junior Science Olympiad (IJSO) are required not only to excel academically but are also expected to have the emotional-social competence needed for conflict resolution during the quarantine process. This study aims to describe the emotional-social competence of IJSO participants. The research design used was qualitative with a phenomenological model. The subjects of the study were the 2014-2018 IJSO participants. Data were obtained from observations, interviews, and field notes. The results of the study describe the emotional-social competence of IJSO participants, who have not been able to express their feelings appropriately and are picky when making friends, especially in group work processes. Therefore, it is important that guidance and counseling services are provided as a facilitator for the development of superior characteristics.
I. INTRODUCTION
Participants of the International Junior Science Olympiad (IJSO) are Indonesia's representatives at the junior high school level who have gone through a rigorous selection process at the national level in the fields of Physics, Chemistry, and Biology. Before the IJSO event takes place, IJSO participants must follow a quarantine process for reselection, which lasts for 44 days. Quarantine activities are intended to strengthen the ability of participants. The intensive quarantine program requires participants to follow the learning process for 13 hours in class, so participants' self-development focuses more on developing cognitive abilities than on developing their character. In the quarantine process, IJSO participants do not escape conflict. IJSO participants tend to find it difficult to express feelings of discomfort towards their friends, for example, when friends stay in the bathroom for a long time, do not maintain cleanliness, or do not want to budge when arguing about the material. This behavior influences their emotional-social competence. Students who are furious because they do not like the behavior of their friends affect teamwork; participants tend to be subjective, less open to accepting the shortcomings of others, and unwilling to budge with their friends. IJSO participants are identified as a gifted group; the high intellectual abilities of gifted adolescents enable them to present themselves normatively, or even with superior psychosocial adjustment, compared to non-gifted groups. The level of emotional-social problems faced by gifted learners is not only seen from the individual's perspective of the conditions faced, but is also related to the environment around gifted individuals; this is connected to the level of individual self-awareness, which can bias data collection because of the individual's highly valued competence or ability. Conflicting statements have been found: on the one hand, gifted students are considered to have good emotional abilities and to be able to adjust themselves normatively, but on the other hand, some studies find that gifted students are very vulnerable when confronted with conflicts in their social environment. Participants who are in their early teens are naturally faced with physiological changes that create the need to adjust to many changes in the process of maturing into an adult.
Failure of participants to carry out developmental tasks such as adjusting and building self-image during adolescence can have an impact on dissatisfaction, low self-esteem, and feelings of worthlessness, in extreme cases it can trigger adolescents to commit suicide. In addition, IJSO quarantine participants are faced with pressure, hope, increased intellectual capacity and also psychological needs to fulfill development tasks during adolescence, conditions faced by participants in adolescent groups can face disturbances in the form of both disturbance of thoughts, feelings and behavioral disorders. Stress, sadness, anxiety, loneliness, doubt in adolescents make them take risks by doing delinquency. Based on the issue of the conditions faced by the participants, the emotional -social competence of the IJSO participants is an important key for participants to be able to face the stressful quarantine process. Daniel Goleman argues that the success of one's life will be more determined by his emotional social abilities than by intellectual abilities. Social emotional abilities become the foundation for individuals to interact with the wider environment. Social emotional abilities are also included in the ability to control themselves well. The inability of individuals to control themselves can cause various social emotional problems with others, so that it can place individuals in difficulties and obstacles in the development process. The quarantine process as a social system is a place that should be conducive to supporting the learning process. Learning will work well if the physical and psychological environment is conducive, conversely if there is pressure, anxiety that is difficult to overcome, and the lack of facilities to vent negative emotions during the quarantine process can have a negative impact on the learning process of IJSO participants. Therefore, guidance and counseling services become an integral part of the quarantine process in order to occupy a strategic position in developing the emotional-social competence of IJSO participants. Shertzer & Stone, views guidance as a process of helping and individual to understand himself and his word'. According to Kartadinata "guidance is defined as the process of assistance to individuals in achieving optimum levels of self-development". The guidance service function as a preventive measure in this research is focused on developing emotional-social competence. America School Counselor Association (ASCA) suggests that counseling is a face-to-face relationship that is confidential, full of acceptance and giving opportunities from the counselor to the counselee, the counselor uses his knowledge and skills to help the counselee overcome his problems. The function of counseling services as a curative effort in solving problems related to emotionalsocial competence. It is important for IJSO participants to obtain guidance and counseling services to achieve personal character by facilitating or helping students get out of problems that can hamper their development both physically and psychologically in the personal, social, academic and career fields. One of the most difficult developmental tasks for adolescents is related to social adjustment [9]. As mentioned in the independent competency standard for students [10], one of the tasks of development is the maturity of peer relations. II. METHOD The research method uses qualitative research with a phenomenological model. 
The phenomenological model is used to express the meaning of each phenomenon that occurs around IJSO participants. Hanurawan explains the phenomenological model is a qualitative study that explores in detail a person's personal life experience with the results of a description of how a person gives meaning about phenomena related to the personal world and its solutions. This study aims to find the meaning of emotional-social competence in IJSO participants. The research subjects were carried out through approaches to groups that had characteristics in accordance with the research topic, personal contacts. The subjects of the study were the 2014-2018 IJSO participants. Data collection is done by using interviews and observations. Through interviews, researchers try to reduce the many statements made by research participants on a topic to the main propositions or prominent as research. Meanwhile, through observation, researchers make intuitive conclusions obtained by researchers when and after observing the reactions that appear to participants when interacting with topics or phenomena. The data analysis technique used is the phenomenological analysis technique. Significant statements can be found in words, sentences, or several sentences in order to obtain a special meaning about a phenomenon. Johnson & Christensen explains that significant statements can be found in words, sentences, or phenomena. III. RESULT AND DISCUSSION Field findings through interviews and observations conducted during the IJSO activities are used to describe emotional-social competencies in dealing with problems or challenges that must be faced by IJSO participants as an impact of self-adjustment to the ability possessed (intelligence above average) during quarantine process. Based on observations and interviews with IJSO quarantine participants related to the emotional condition of students and social conditions when they have to mingle and are required to be able to cooperate with other participants. Emotional competence that students must have in the aspect of self-awareness is the ability of individuals to build emotional awareness that has an impact on work. Emotional awareness is recognizing the emotion that is felt. Based on observations and opportunities given to participants to express their feelings and needs, there are only two participants who express clearly their feelings and needs to be able to regulate themselves. While there are also participants who tend to be indifferent to the behavior of friends who are less pleasant when hanging out, but show displeasure when in teamwork cannot work together, such as "Can you not work according to our agreement?" Or "I don't want a group with him sis, because he is selfish ". The self-management aspects shown in the sub aspects are the ability to control self-emotion, achievement orientation, positive thinking, selfadjustment. Participants found it difficult to control emotions during pressures and demands through tests and parental demands, this condition was reported by six participants. A statement that is often made by IJSO participants in this condition is "I don't want to work in groups if he behaves like that". Characteristics of adolescents include trying to defy the rules given by parents. Two of the six participants showed resistance from the rules given by their parents, such as: reading story books, eating food that was not allowed by parents. 
This condition according to Yusuf is influenced by a flood of growth hormones in the sexual organs, causing new feelings that have not been experienced before. In early adolescence (students in junior high school), their emotional development shows a very sensitive and reactive (critical) nature towards various social events or situations. Positive thinking possessed by participants is more faced by participants who feel inferior conditions. Problems or challenges that made the participants reason to feel inferior were bronze medalists while other friends got silver and gold medals. This was seen in a number of participants who seemed unenthusiastic when attending quarantine and complained about the statement "why was I chosen? Even though I just got a bronze ". According to Parrot [13] states that sometimes teenagers make statements with minimal facts in reality such as "I'm not smart"; "I'm not attractive". The statement about self that is maintained will hire adolescents into a condition of majority. The process of positive thinking in quarantine settings in achieving goals influences the participant's achievement orientation. The competence of students in adjusting quickly is demanded in the quarantine process, the challenges faced include when they have to adjust to roommates who come from different regions and when they have to be ready to be paired in practicum groups that are continually adjusted so that changes in group members occur very frequently (group changes occur four to five times). Changing roommates and teammates sometimes becomes a participant's complaint like "I don't want to share a room with him sis, because it's dirty" or "I don't want a group with him sis, because it's not from the same district". In addition, the sub-aspect of competency that must be possessed by participants is managing emotions, reportedly there are participants who cry hysterically, due to lack of ability to manage emotions when failing to enter the advanced selection stage in quarantine. Information obtained, participants felt they should be able and can enter the next stage. Unrealistic expectations will also develop low self-esteem. Bruce Narramore states that the three common expectations as sources of problems in the process of self-acceptance are 1) I must be able to meet the expectations of others towards me, so that I can be accepted and loved; whenever I fail in achieving my goals, I deserve to be punished, and 3) I must rule the world. Every point of those expectations is irrational and can damage self-esteem. In addition to the competencies that must be possessed to be intrapersonal, other competencies that demand social competencies include: (1) social awareness shown in the ability to empathize, and organizational awareness (the ability to read the group's emotional conditions and understand the power of relationships, identify networks and group dynamics); and (2) management of the relationships shown in their ability to influence, conflict management, inspirational leaders, and teamwork. The aspect of social awareness, in the sub-aspect of empathy, is faced when participants are in a background of competence and an atmosphere of competition is built so that they are not excluded from quarantine, the condition of participants who do not want to understand the difficulties of other participants is shown when they do not want to share information or knowledge they have, in one this side of the condition became boomerang because participants were required to develop dynamic groups. 
In addition, challenges in developing organizational awareness were more likely to be faced at the beginning of the quarantine process, while participants were building their own group dynamics. This was seen when the participants were given assignments by the lecturer, some participants showed no desire to share information such as trying to avoid saying "I'm not done yet" or "wait a minute, I want to do this part first". In essence the process of building a dynamic group will go through the stages of forming (storming), storming, norming, performing. However, the process of dynamics within the group through each phase will be necessary and inevitable so that the group can grow, face challenges, overcome problems, find solutions, plan work, and deliver results. In the aspect of relationship management, the challenges faced are especially when working in groups. Each participant is required to give each other room to each group of friends to develop each task they carry, trust, share assignments effectively, and encourage each other is a challenge for participants to be able to influence, mentor, manage conflict, become an inspirational leader, and show dynamic teamwork. If participants feel that they are compatible with the group formed, they will say "Sis, please, the team should not be changed anymore" or if the participants do not agree with the team formed, the statement is "Sis, it still doesn't fix the formation of the group?". Adolescent social competence in practice influences one another. The vital role of emotions for individuals is emotional information that involves understanding self-emotions and group emotions will help teens to motivate and direct attention, and facilitate group relationships. Basically, the high intellectual ability possessed by IJSO participants, helps them to develop ethically sensitive attitudes and better moral development, besides that in some cases teenagers are also identified as having social and emotional development more mature. Appropriate handling in dealing with gifted adolescents has an impact on the perception of gifted adolescents on their environment, because after all according to LJ. Coleman & Cross argues that perceptions and forms of behavior of people around to students about their abilities have a real effect on their social interactions. As an effort to facilitate the development of IJSO participants in fulfilling development tasks so that they develop naturally in various life settings. Guidance and counseling services in group settings can be provided as direct and indirect interventions. Direct intervention can be given in individual settings or group settings. Service components that can be provided are basic services and responsive services. While the form of indirect intervention services will be more focused on efforts to develop a development environment (in reach-out reach) in the interests of facilitating the development of IJSO participants, this activity will involve many parties in it. The parties involved were IJSO participants, the Ministry of Education and Culture as the organizer of the quarantine process, parents and schools as a support system for the participants. IV. CONCLUSION The social emotional competence of IJSO participants shows that they have not been able to consciously recognize emotions, manage emotions, feel inferior, have empathy and be able to work with anyone. Important guidance and counseling services are provided to IJSO participants as a facilitator in developing superior characteristics. 
In addition, with the presence of guidance and counseling services IJSO participants are expected to be able to resolve conflicts in the quarantine process.
Opuntia ficus-indica as a supplement for gilts in late gestation and lactation: effects on biochemical parameters and voluntary feed intake ABSTRACT This study evaluated the effect of Opuntia ficus-indica L. supplementation on gilts during late gestation and lactation, particularly on their biochemical parameters and voluntary feed intake (VFI) at lactation. Thirty-two gilts were randomly distributed into two groups: control group (CG), gilts fed conventionally and experimental group (EG), gilts fed commercial feed plus O. ficus-indica. Glucose concentration was lower (P < 0.05) in the EG. Insulin concentration was higher in the gilts that consumed O. Ficus-indica. Triglyceride’s concentration was lower (P < 0.05) at gestation, farrowing and lactation in the EG. Total cholesterol was higher (P < 0.05) in gestation and lactation in the CG. HDL present higher concentration in lactation in the gilts that consumed O. Ficus-indica. LDL concentration was higher (P < 0.05) in gestation and lactation in the CG. Leptin concentration was higher (P < 0.05) at lactation in the CG. The gilts of the EG had higher VFI (22.6% more) and presented less body weight loss (3.7% less) at weaning (P < 0.05). These findings suggest that O. Ficus-indica favourably regulates the biochemical indicators involved in the development of insulin resistance, and this is reflected in the higher VFI and lesser body weight loss at weaning. Introduction Insulin resistance (IR) is a metabolic disorder that involves decreases in cellular sensitivity towards insulin and predisposes to hyperglycemia and dyslipidemia (Unger et al. 2014). However, in sows, like most female mammals IR is a metabolic adaptation at late gestation and lactation that allows to direct a higher input of nutrients to the uterus (for the higher exponential growth of the fetus) and the udder (to start milk production) Etienne 2007, 2018). The importance of IR in sows is that it has an effect on the sow's productivity, specifically on feed intake in lactation (Père and Etienne 2007;Manu et al. 2020). Because the effect of IR was observed in lower feed intake, during late gestation this effect is not perceived, since, during this period, feeding schemes are based on sow's restricted feeding, in contrast to ad libitum feeding at lactation (Solà-Oriol and Gasa 2017). Also, leptin has no effect on feed intake in gestation since there is resistance to leptin (hyperleptinemia) for a higher input of nutrients to the uterus (Tessier et al. 2013). However, in lactation, sows do not experience leptin resistance (Cools et al. 2014). Some studies show that there is an increase in the serum leptin concentration gradually until the middle third of the gestation, reaching a high concentration and remaining high until farrowing in swine females (Saleri et al. 2015). It was reported that backfat depths were positively associated with leptin concentrations. However, no direct relationship between levels of circulating leptin and sow fertility, if leptin is involved in the control of reproduction, its role is merely permissive. E.g. through the association that leptin has on reduced feed intake in lactating sows. The lower feed intake promotes catabolism of the sow and modifies its metabolic state. This puts at risk the synthesis of hormones involved in reproduction: LH, FSH, IGF-I, estrogens, etc. (De Rensis et al. 2005;Solé et al. 2021). For the reasons described above, strategies that maximize feed intake in lactating sows should be sought. 
It has been reported (Serena et al. 2007;Quesnel et al. 2009;Jha and Berrocoso 2015;Li et al. 2021) that the addition of dietary fibre to the diet of gestating sows promotes gastric health and increases feed intake during lactation. This is because dietary fibre favours the metabolic profile minimizing insulin resistance and dyslipidemia (Serena et al. 2007;Li et al. 2021). In sheep, rabbits, mice, and pigs, the dietary fibre of various foods, including Opuntia spp., has been linked to an improvement in glucose and lipid metabolism (Brahim et al. 2012;Halmi et al. 2013;Ordaz et al. 2017). The lower plasma concentrations of glucose, cholesterol, and triglycerides related to the consumption of O. ficus-indica are associated with: (i) stimulation of insulin secretion and best glucose reabsorption by different tissues and, (ii) modifies lipid biosynthesis by binding to bile acids, this improves cholesterol catabolism (Kritchevsky et al. 1988;Fernández et al. 1992;Pari and Latha 2005;Gouws et al. 2019). Consequently, our hypothesis focuses on O. ficus-indica effect on the regulation of biochemical indicators (insulin and leptin mainly), minimizing resistance to insulin and its effects on sow productivity during late gestation and lactation. Hence, this study aimed to evaluate the effect of O. ficus-indica supplementation on gilts during late gestation and lactation, and particularly on their biochemical parameters and voluntary feed intake at lactation. Materials and methods This research was carried out at the Swine Unit of 'Posta Zootécnica' belonging to the Veterinary Medicine and Husbandry Faculty of Universidad Michoacana de San Nicolás de Hidalgo (FMVZ-UMSNH), Tarímbaro, Michoacán, México (Road; 19°4 6 ′ N, 101°08 ′ W, and altitude of 1855 m). The animals used in this research were bred in accordance with the regulations of the zoo-technical and zoo-sanitary legislation of Mexico for the humanitarian care and use of animals in research SAGARPA-SENASICA. Animal diets and husbandry Forty gilts were directly exposed to mature boars (eight gilts: one boar) for at least 15 min daily from 160 days of age until the last gilt had displayed pubertal estrus. Gilts that presented the second estrus at 21.0 ± 1.0 days (n = 35 gilts) were inseminated (182 ± 1.3 days; 137 ± 8.2 kg) with semen of boar genotype Yorkshire × Pietrain and were housed in groups (n = 7 gilts) in 16 m 2 pens during the 110 days of gestation. Eight gilts were removed from the investigation: five during the selection process to be inseminated (sows with a lagged reproductive cycle), one gilt due to abortion and two gilts due to problems related to lameness. Therefore, thirty-two sows were used to carry out the research. All gilts were fed 2.5 kg −1 of commercial feed per day (divided into two portions, at 8:00 and 15:00 h), until day 84 of gestation (Table 1). At day 85 of gestation, the gilts were housed in individual pens of 2.0 × 2.0 m 2 . According to a completely random design, the animals (n = 32 gilts) were divided into two groups (n = 16 gilts·group −1 ): control group (CG), gilts fed commercial feed only (2.5 kg·day −1 ) and experimental group (EG), gilts fed commercial feed (2.5 kg·day −1 ) plus O. ficus-indica (as a source of dietary fibre). O. ficus-indica (fresh base) supplementation was 1.0% with respect of the gilts' body weight; the body weight of the gilts corresponded to the stage at which the gilts were found (gestation or lactating). 
Immediately after farrowing, all gilts were fed ad libitum with a commercial diet for lactation (Table 1). The only variation in the gilts' feeding at lactation was the addition of fresh O. ficus-indica to the diet of EG (Table 1). Cladodes of O. ficus-indica were offered to the gilts at approximately 90 days of age; their chemical composition is shown in Table 1. Cladodes were manually cut (41.0 kg per week) and stored at 4.0°C until they were fed to the gilts. For feeding, cladodes were manually fragmented into pieces of approximately 3.0 × 2.0 cm and were mixed with the commercial feed corresponding to the first meal of the day (8:00 am), a procedure performed in both phases, gestation and lactation. For farrowing and lactation, sows were lodged in stainless steel cages with plastic slatted floors until the moment of weaning (21 d post-farrowing). During farrowing and lactation, there was artificial light between 8:00 and 15:00 h, and the environmental temperature was 20-24°C. Each cage was provided with a heat source for the piglets. Farrowing occurred naturally on day 115 ± 0.12 of gestation (day 0 of lactation). The sows at farrowing had a litter size of 10.9 ± 1.3 piglets, 9.5 ± 0.8 piglets born alive, 1.3 ± 0.2 stillbirths, and 0.1 ± 0.01 mummies. Litters were balanced to eight piglets within the first 48 h post-farrowing. Piglets that died during lactation were not replaced. During lactation, seven casualties were recorded. In the CG, three died because of crushing, and in the EG, four casualties occurred (three because of crushing and one from diarrhea). The casualties caused by crushing occurred during the first week of lactation, and the casualty from diarrhea occurred in the second week of lactation.
Blood sampling
At days 85, 100, and 110 of gestation and at days 1, 3, 7, 14, and 21 of lactation, eight gilts per group were selected for preprandial (12 h fasting) blood sampling. A 10 mL blood sample was taken from the vena jugularis between 7:00 and 7:30 (1 h before the start of the morning meal). Immediately after sampling, each blood sample was divided into two subsamples: 6 mL in tubes with serum clot activator (for analysis of glucose, triglycerides, cholesterol, HDL, and LDL) and 4 mL in tubes with lithium heparin (for analysis of insulin and leptin). The subsamples were stored at 4°C until centrifugation. Subsequently, the tubes were centrifuged (1000 × g for 10 min), and plasma and serum samples were stored at −20°C until further analysis.
Gilts' productive performance
The feed intake, energy balance, loss of bodyweight of the gilts, and piglet development were evaluated; for feed intake, the feed supplied and rejected was weighed daily with a digital scale (Dibatec®; capacity of 40 kg and accuracy of ±5 g). The energy imbalance was obtained through the methodology established by Noblet et al. (1990). Gilts were weighed pre-farrowing (day 110 of gestation) and at weaning (day 21 of lactation) using a fixed electronic scale (STG-1500-T1500SL, OCONY®; with a capacity of 1-1500 kg). To estimate the loss in body weight at lactation, the weight of the post-farrowing gilts was estimated using the prediction equation of Mallmann et al. (2018). Piglets were weighed at birth and on days 7, 14, and 21 (weaning) of lactation.
Statistical analysis
Data were analyzed by ANOVA through repeated measurements with PROC MIXED [SAS Inst. Inc., Cary, NC, USA] (Littell et al. 1998). The gilt represented the experimental unit in the model.
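The repeated-measures analysis was run in SAS (PROC MIXED); as a rough, purely illustrative Python analogue of that kind of model, the sketch below fits group, day and their interaction as fixed effects with a random intercept per gilt, using invented toy data. Column names and values are hypothetical, not the study data or the SAS code actually used.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy data standing in for repeated glucose measurements: 16 gilts x 3 gestation days
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "gilt": np.repeat(np.arange(16), 3),
    "group": np.repeat(["CG", "EG"], 24),
    "day": np.tile([85, 100, 110], 16),
})
df["glucose"] = 80 + 5 * (df["group"] == "CG") + rng.normal(0, 3, len(df))

# Random intercept per gilt approximates the repeated-measures correlation;
# group, day and their interaction enter as fixed effects.
fit = smf.mixedlm("glucose ~ C(group) * C(day)", data=df, groups=df["gilt"]).fit()
print(fit.summary())
```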
The effects of the group, day, and their interaction were evaluated in terms of the gilts' body weight, piglets' weight, plasma glucose, insulin, triglycerides, cholesterol, HDL, LDL, leptin, and HOMA-IR index. The model used was: Y_ijkl = µ + G_i + g(G)_j(i) + D_k + (G*D)_ik + ε_ijkl, where Y_ijkl = response variable (gilt body weight, piglets' weight, plasma glucose, insulin, triglycerides, cholesterol, HDL, LDL, leptin, and HOMA-IR index); µ = constant common to the population; G_i = fixed effect of the i-th group, with i = CG, EG; g(G)_j(i) = random effect of the j-th gilt, nested within the i-th group; D_k = fixed effect of the k-th day; (G*D)_ik = fixed effect of the interaction of the i-th group with the k-th day; ε_ijkl = random effect associated with each observation (~NID(0, σ²e)). The data on feed intake, energy intake, energy balance, body weight loss of the gilts, and piglets weaned per gilt were evaluated through ANOVA in PROC GLM (SAS 9.4 Inst. Inc., Cary, NC, USA). The effects of the group, week, and their interaction were evaluated. The model used was: Y_ijk = µ + G_i + W_j + (G*W)_ij + ε_ijk, where Y_ijk = response variable (feed intake, energy intake, energy balance, body weight loss of the gilts, and piglets weaned per gilt); µ = constant common to the population; G_i = fixed effect of the i-th group, with i = CG, EG; W_j = fixed effect of the j-th week of lactation; (G*W)_ij = fixed effect of the interaction of the i-th group with the j-th week of lactation; ε_ijk = random effect associated with each observation (~NID(0, σ²e)). Significant differences among groups were considered at P < 0.05. Normality of distribution and homogeneity of variance for residuals were tested using PROC UNIVARIATE [SAS Inst. Inc., Cary, NC, USA]. In the case of non-normality, parameters were normalized by log transformation prior to analysis to generate a normal distribution. Insulin resistance was indirectly estimated using HOMA-IR according to the equation proposed by Matthews et al. (1985): HOMA-IR = [fasting insulin (µU·mL⁻¹) × fasting glucose (mmol·L⁻¹)] / 22.5. The values in tables and figures are presented as least-squares means ± SEM.
Metabolic indicators
The effects of the supplementation with O. ficus-indica on the biochemical indicators of gilts were estimated per group, sampling day, and group per day interaction. According to the group per day interaction, CG showed higher (P < 0.05) plasma glucose concentrations on each evaluation day (Figure 1). A plasma glucose peak was found at farrowing day in both groups; however, it was lower in EG gilts (P < 0.05): 94.7 vs. 112.9 ± 3.4 mg·dL⁻¹ in CG. Plasma insulin, according to the group per day interaction (P = 0.0034), was higher (P < 0.05) at farrowing day and at days 3 and 7 of lactation in both groups (Figure 1); however, EG showed a higher concentration (Figure 1). The HOMA-IR index was higher (P < 0.05) in gilts that consumed O. ficus-indica on each evaluated day (Figure 1). According to the HOMA-IR index, both groups presented IR from day 100 of gestation to day 7 of lactation (Figure 1). However, the degree of IR was higher (P < 0.05) in EG gilts (10.8% more from day 100 of gestation to farrowing and 30.6% more in the first week of lactation) than in CG (Figure 1). The group per day interaction effect (P < 0.001) on the lipid indicators was evaluated. Plasma triglycerides were lower (P < 0.05) at days 100 and 110 of gestation in EG (Figure 2). At lactation, from the second week, the gilts that consumed O. ficus-indica presented higher (P < 0.05) plasma triglycerides, a pattern that remained consistent until the end of lactation (Figure 2).
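Because glucose is reported in mg·dL⁻¹ in this paper, the Matthews HOMA-IR can equivalently be computed with the usual factor of 405 (glucose in mg·dL⁻¹) instead of 22.5 (glucose in mmol·L⁻¹). The snippet below is an illustrative sketch only; the insulin value in the example is hypothetical, not a measured concentration from the study.

```python
def homa_ir(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    """HOMA-IR = fasting insulin (uU/mL) x fasting glucose (mg/dL) / 405
    (equivalent to insulin x glucose [mmol/L] / 22.5)."""
    return insulin_uU_ml * glucose_mg_dl / 405.0


# Example: the CG farrowing-day glucose peak (112.9 mg/dL) with a hypothetical
# fasting insulin of 12 uU/mL would exceed the >= 3.0 threshold used to flag IR.
print(round(homa_ir(112.9, 12.0), 2))  # 3.35
```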
Plasma total cholesterol showed a similar trend as triglycerides at gestation, that is, lower (P < 0.05) concentrations in EG (Figure 2). Day three post-farrowing was the only day that showed significant differences in total cholesterol between groups, with its concentration being higher (P < 0.05) in the CG gilts (Figure 2). HDL in gestation (day 100 and 110) was lower (P < 0.05) in gilts that consumed O. ficus-indica. At lactation, the gilts that consumed O. ficus-indica (EG) showed higher plasma HDL (P < 0.05; Figure 2). Gilts that consumed O. ficus-indica presented lower (P < 0.05) concentration of plasma LDL in both phases ( Figure 2). Group per day interaction effect (P < 0.001) on plasma leptin showed that, at gestation (day 100 and 110) plasma leptin was equal (P > 0.05) in both groups ( Figure 3). However, at lactation, the CG gilts showed higher (P < 0.05) leptin concentrations on each evaluated day (Figure 3). Gilts' performance The effects of supplementation gilts with O. ficus-indica on voluntary feed intake, energy balance, loss of bodyweight and piglet development in lactation were estimated by group (P < 0.001), week (P < 0.001), and their interaction. The group per week interaction affected the daily feed intake (P = 0.037) and energy balance (P = 0.012). Commercial feed intake on an average per day at lactation was higher (P < 0.05) in the EG gilts: 5.4 vs. 4.2 ± 0.06 kg·day −1 in CG gilts (Table 2). Energy balance on average showed a similar trend as feed intake; increased energy balance in gilts that consumed O. Ficusindica (0.43 ± 1.19 MJ·day −1 ) compared with the control (−7.33 ± 1.19 MJ·day −1 ). The loss of bodyweight in lactation was lower (P < 0.05) in EG gilts: 1.1 vs. 4.8 ± 1.2% in CG gilts (Table 3). Group per week interaction had a similar trend to the feed intake and energy balance, being higher in the gilts that consumed O. ficus-indica, in each evaluated week (Table 3). O. ficus-indica supplementation had no effect (P > 0.05) on the weight of piglets at birth and at weaning (Table 3). Discussion In sows, the gradual development of IR promotes the presence of dyslipidemia . IR detected at the end of gestation is accentuated at the first week of lactation because the sow physiologically requires more glucose for lactose synthesis (Pari and Latha 2007). In the presence of IR, the sow mobilizes body reserves (fat and protein) and thus increases the concentration of energy substrates (e.g. total cholesterol, triglycerides, leptin) because the sows do not yet present optimal feed intake to satisfy their nutritional requirements ). An increase in lipid indicators was observed in CG gilts in both late gestation and lactation. The behaviour of these biochemical indicators (total cholesterol, triglycerides, LDL, and leptin) in CG is associated with a higher concentration of glucose at the late gestation stage and during the first week of lactation and this relationship with the feed intake at lactation. In addition to the high glucose concentration during the peripartum period, high concentrations of endogenous opioid peptides are also present during this period, essential peptides to stimulate the production of endorphins, reduce the concentrations of GnRH, FSH and LH and synthesize prolactin to start lactation (Barb et al. 1986;Farmer 2016). The higher the synthesis of prolactin and its interaction with IR, the glucose concentration increases for the formation of milk components, in this case, lactose. 
In addition to the effect of the opioid peptides described above, they also have action on feed intake due to their interaction with proopiomelanocortin (POMC). It has been established that POMC has several post-translational metabolic pathways, not only giving rise to β-endorphins. At the same time, it synthesizes corticotropic hormone (ACTH), hormone that participates in the inhibition of feed intake (González et al. 2006). Quesnel et al. (2009) report that the administration of fibrous diets (>30% NDF) favours the feed intake at lactation. With respect to using O. ficus-indica as a source of dietary fibre in the present research, it has been reported in rats (Nuñez et al. 2013) that the consumption of this cactus decreases plasma glucose and increases plasma insulin. Soluble fibre in monogastric is not digested by the gastrointestinal enzymes, therefore, they modify the absorption of bile salts, cholesterol, and glucose (Morán et al. 2012). In addition, a diet rich in soluble fibre increases the viscosity of food bolus and propitiates the absorption of energy substrates (Shapiro and Gong 2002). Haber et al. (1977) showed that, when carbohydrates are present intracellularly in plant foods, their release in the intestine is slower and glucose-insulin responses in the blood decreases. HOMA-IR index for IR diagnoses in humans and sows is ≥ 3.0 (Matthews et al. 1985;Tan et al. 2015). The gilts that consumed O. ficus-indica had a higher HOMA-IR index with respect to the CG, this is due to the higher synthesis of insulin in the gilts of the EG. It should be noted that regardless of the highest HOMA-IR index in the EG the hyperinsulinemia is not a product of IR, it was due to the over-production of insulin by the consumption of O. ficus-indica. This behaviour is justified analysing glucose behaviour, where EG had no concomitant glucose-insulin increase. The consumption of O. ficus-indica reduced glucose concentration per direct action of insulin because this cactus stimulates insulin secretion, improves the insulin-stimulated phosphorylation of IRS-2 and Akt in the liver, and thus normalizes the excessive production of hepatic glucose (Pari and Latha 2005). O. ficus-indica can act the same way as oral antidiabetics, by shutting down the K + /ATP channels, depolymerizing membrane and stimulating the Ca 2+ channels for insulin secretion (Halmi et al. 2013). Changes in glucose kinetics due to the consumption of O. ficus-indica propitiates lower catabolism (indirectly assessed in loss of bodyweight), which is conducive to less dyslipidemia because of the higher feed intake at lactation. With regard to the greater dyslipidemia found in the CG gilts, it has been reported that regardless of the sows' genotype (lean or fat), the highest degree of dyslipidemia in the CG respect to EG is mainly associated with the IR by which the sow passes during peripartum (Mosnier, Le Floc'h, et al. 2010;Torres-Rovira et al. 2011). IR cause an increase in glucose concentrations, which limits feed intake. The low feed intake favours body catabolism in the sow, therefore the release of NEFA's, this is reflected in a higher concentration of cholesterol, LDL, and leptin (Figures 2 and 3). Increased triglycerides and HDL from the seventh-day postfarrowing at the end of lactating in EG gilts could be associated with the beginning of ovarian reactivation (Barb et al. 2008). 
Ovarian reactivation is characterized by an increase in reproductive hormones FSH, LH, and estrogenshormones that are synthesized through cholesterol precursors (Barb et al. 1991). During ovarian reactivation, the dietary fibre of O. ficusindica could act favourably by its effect on non-starch polysaccharides, that when subjected to fermentation by the colon microbiota lead to a higher production of volatile fatty acids (Cani et al. 2006). Volatile fatty acids intervene on energetic contribution of the organism and can facilitate the synthesis of precursors of cholesterol (Molist et al. 2009). Volatile fatty acids that are monomers in the luminal aqueous phase are absorbed by any segment of the digestive tract, therefore have a total digestibility (Berruezo et al. 2011), and this propitiates the increase in triglycerides and HDL from the second week of lactation in EG gilts (Figure 2). Leptin is a mediator of the regulation of the long-term energy balance, as it has an effect on the suppression of feed intake and induces weight loss (Martínez et al. 2014). Leptin resistance has been reported during the last third of gestation, however, it has no effect on feed intake (Saleri et al. 2015;Szczesna and Zieba 2015). Such behaviour was also observed in the present investigation, since no difference was found between groups in the leptin concentration at the last third of gestation. As far as the lactation phase is concerned, it has been reported (Tessier et al. 2013) that there is no resistance to leptin, since, in this phase, the loss of the intracellular signal of the function of leptin receptors. Therefore, in lactation, leptin does have an effect on food consumption. This could be observed in the feed intake of the sows, the sows from the CG had lower feed intake and higher leptin concentrations with respect to the EG (Table 2 and Figure 3). Leptin concentration is higher in sows of Asian genotypes than in European genotypes (Guay et al. 2001). Farmer et al. (2007) report a difference in leptin concentration on the eighteenth day of lactation. The Landrace genotype showed a higher leptin concentration with respect to the Yorkshire, Duroc, and synthetic lines genotypes. This behaviour is associated with the greater thickness of dorsal fat in Landrace sows and their association (r = 0.67) with leptin (Estienne et al. 2000). Kolaczynski et al. (1996), established that, the increase in 10% of bodyweight results in a 300% increase in the leptin concentration; a phenomenon that is observed during the last third of gestation and propitiates lower feed intake at lactation. Quesnel et al. (2009) report that a diet rich in fibre during gestation leads to a reduction in leptin concentration and an increase in feed intake at lactation. This behaviour was also observed in the present research: the gilts that consumed O. ficus-indica presented a lower concentration of leptin in lactation and a greater feed intake, in addition, energy balance was lower as was the loss of bodyweight at weaning. Finally, among the hypotheses that were had about the use of O ficus-indica in the feeding of pregnant and lactating sows to reduce the glucose concentration was the possible effect on the development of the piglet and the quantity and quality of the sow's milk. About it, the research group has already reported that the consumption of O ficus-indica does not affect the quality and quantity of the milk produced by the sow or the development of the piglet (Ortiz et al. , 2020. 
However, some limitations must be considered when interpreting the results of this study. The study was carried out in sows of a hybrid genotype; further investigations would be needed in the genetic lines of current hyperprolific sows, which are more susceptible to low feed intake because of the higher energy demand required to meet the needs of a larger litter and greater milk production. It should be noted that, when the energy balance was evaluated in hybrid sows (n = 8 piglets/litter) fed O. ficus-indica, it was higher (−2.1 ± 3.5 MJ·day−1) than in conventionally fed sows (−9.1 ± 2.7 MJ·day−1) (Ordaz et al. 2019). For this reason, the productive performance of hyperprolific sows fed nopal as part of the diet is expected to be positive, since it would generate greater feed consumption and, consequently, a better metabolic state of the animals. In addition, strategies must be sought for processing and storing O. ficus-indica that facilitate its incorporation into the sow's diet without loss of its properties, so that it can be implemented in intensive swine production systems. Conclusion IR is an inherent physiological process in sows; however, in current swine production systems, IR limits the productive potential of the sow through its effects on productivity: low feed intake in lactation, greater body weight loss in lactation, low productivity in the next production cycle, reduced longevity, etc. The intake of O. ficus-indica in late gestation and during lactation in gilts favourably modulates the regulation of the biochemical indicators that participate in the development of insulin resistance and dyslipidemia, which is reflected in higher voluntary feed intake and less body weight loss at weaning. Disclosure statement No potential conflict of interest was reported by the author(s).
An Empirical Study on Image Segmentation Techniques for Detection of Skin Cancer Skin cancer is a crucial predicament in most of western countries including Europe, Australia and America. It is quite often curable whenever perceived and treated early. The significant hazard factors related are skin shading, deficiency of sun-lights, atmosphere, age, and hereditary. The most ideal approach to distinguish melanoma is to perceive another spot in the skin or recognize that is fluctuating in size, shape and shading. Early detection of skin malignancy can stay away from death. Finding of the skin ailment relies upon the extraction of the anomalous skin locale. Right now, methods to separate the skin injury districts are proposed and their outcomes are looked at dependent on the measurable and surface properties. In this study, the myriad kind of features of Dermoscopy image analysis has been thoroughly explores. Moreover, disparity segmentation techniques for detecting Melanoma Skin Cancer are discussed. The ultimate aim of this discussion is to provide suggestions for carrying a future research based about this relevance and limitations. Review Article Kavitha et al.; JPRI, 33(10): 71-81, 2021; Article no.JPRI.64030 72 INTRODUCTION A disease is a specific uncharacteristic condition that contrarily affects edifice or function of part or the entirely of an organism and that isn't because of any immediate external wound. For humans, infection is regularly utilized all the more comprehensively to allude to any condition that causes torment, social issues, distress etc. Diseases can affect physically as well as mentally, as diminishing and living with a disease can change the influenced individual's point of view on life. Nowadays, Cancer is taking into consideration as a paramount reason [1] of mortality as well as transience of human life. It is an intricate sickness instigated principally by genetic variability and a gathering of molecular revamps. Skin malignant growth [2] is one of the sicknesses that influence people. It is brought about by the advancement of malignant cells on any of the layers of skin and happens when cells in a body part start to develop wild and spread to different organs and tissues. The body continually makes new cells to enable us to develop, supplant destroyed tissue and mend wounds. Ordinarily, cells reproduce and expire in a systematic manner. A now and again cell doesn't enlarge partition and pass on in the standard way. This possibly will cause blood and lymph liquid in the body to get strange, or structure a protuberance called a tumor. It can be amiable or harmful. The primary cancer that initially creates either in a tissue or organ, has not spread to further body is called localize cancer. A tumor may perhaps assault auxiliary into surrounding tissue in addition to be capable of build up its dreadfully own veins (angiogenesis). On the other hand, those dangerous cells develop and structure another tumor at another site, it is known as a secondary cancer or metastasis. Since, skin is the biggest body part, it is taking care of numerous responsibilities including ensuring the body, managing temperature and controlling liquid misfortune. Skin, similar to all other body tissues, is comprised of cells where the two fundamental layers are the epidermis and dermis. The three primary sorts of skin malignant growths are named as basal cell carcinoma; squamous cell carcinoma; and melanoma (initiate exclusively cells of the epidermis). 
The former type, lofty cells structure the inferior layer of the epidermis. They increment incessantly, and the extra seasoned cells ascend inside the epidermis and smooth out to frame squamous cells. The intermediate types are level cells that are distended resolutely mutually to structure the apex and thickest layer of the epidermis. These cells are wrought from mature basal cells and they incessantly hut as fresh cells are made. The last type, cell constructs a tedious gloominess entitled melanin, the material that provides skin its gloominess Melanocytes construct supplementary melanin to safeguard it from getting scorched, when skin is obtainable [3] to the Sun. Dermis, this layer of the skin befalls underneath of the epidermis. It contains the underlying foundations of hairs (follicles), sweat organs, blood and lymph pots, and nerves, which are apprehended set up by collagen, a protein that invigorates skin its versatility and potency. Skin disease spreads by moving into the dermis by means of the storm cellar layer enabling malignant growth cells to arrive at blood or lymph vessels and move around the body. At present, this ailment speaks to a genuine medical issue; the quest for a precise clinical finding has been a consistent worry for dermatologists. Nowadays, the deadliest kind of cancers is hard to distinct because both are resembles alike in look at the preliminary stages. Just a specialist dermatologist can distinguish these in early stage. Cells are reserved to one region and not ready to extend to further parts of the body is called as Benign tumor. The cancerous cells, that be capable of spread by going through the circulation system or lymphatic framework is named as malignant tumor. If melanoma can be detected early, it can cure completely. Currently, a few strategies on the territory of picture preparing have been create utilizing calculations or systems for identification and characterization by methods for procedures and computational techniques, which have been applied in taking care of therapeutic issues. These techniques can be a viable instrument particularly where there are not experts, on other hand it is additionally a noninvasive device for the patient. Throughout the most recent decades image processing has been applied in various territories, permit ting to improve the data on a picture for its understanding, portrayal, depiction, and preparing. IMAGE PROCESSING It is performing an amazing work in diagnostic process by means of examine in MATLAB. Besides, it can be used for early diagnosis of any diseases in myriad medical applications. Detecting the cancer in initial stage and remedy is significant. Despite being melanoma is t extensively hazardous somewhat skin cancer; it can cure in early stage [4]. Recently, a few examinations and work relate with pictures of pigmented skin sores for conclusion and arranging skin injury, for example, skin disease have been created by methods for advanced image analysis. Their primary goals have been to give a precise analysis. Most examinations are identified with the analysis of threatening melanoma. Feature extraction and segmentation methods help to find the stage of cancer. Image segmentation act as vital role in image processing for solve myriad complicated issues, specifically those associate to chronic diseases, namely skin cancer. Generally, there are three phases for analysis of automatic dermoscopy image: specifically feature selection and extraction, image segmentation, and feature classification [5]. 
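As an illustration of this three-phase pipeline, the sketch below shows one common way to implement the pre-processing (hair-artefact removal) and segmentation steps with OpenCV in Python. It is a minimal example under simplifying assumptions (a single lesion darker than the surrounding skin, a fixed structuring-element size and threshold), not the method of any specific study cited here.

```python
import cv2
import numpy as np

def segment_lesion(path):
    """Minimal dermoscopy pipeline sketch: hair-artefact removal
    (black-hat filtering + inpainting) followed by Otsu segmentation."""
    img = cv2.imread(path)                                  # BGR dermoscopy image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Pre-processing: thin dark hairs respond strongly to the black-hat
    # transform; threshold that response and inpaint the hair pixels away.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    _, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
    clean = cv2.inpaint(img, hair_mask, 3, cv2.INPAINT_TELEA)

    # Segmentation: assuming the lesion is darker than healthy skin,
    # Otsu's global threshold on the inverted intensities isolates it.
    blur = cv2.GaussianBlur(cv2.cvtColor(clean, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Keep only the largest connected component as the lesion region.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n > 1:
        largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
        mask = (labels == largest).astype(np.uint8) * 255
    return clean, mask
```

The resulting binary mask can then be passed to the feature-extraction stage (for example, ABCD-style asymmetry, border and colour descriptors) before classification.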
Skin cancer is a significant health problem influencing a tremendous part of the populace irrespective of skin colors. This affectedness could be identified with dermoscopy to conclude the obvious spots are dangerous or not. Systems aid the detection process to find out the occurrence of cancer by construing medical constraints, depending upon an exact method [6] to extricate pertinent features. The outstanding techniques are to analysis lesion based on Asymmetry, Border, Color and Differential structures that name das ABCD. The classification of a tumor is carried out when clinically-relevant features are extricated. On the other hand, lopsided and scatter lesion boundaries, low contrast, noise / artifacts in images, and the presence of diverse hues within the region of interest obscure the processing of images. The abnormal (Melanoma) moles are distinguished from normal by applying ABCD rule. A doctor ought to be verified that the moles that ensure whichever of the subsequent traits. IMAGE PRE-PROCESSING It is an obligatory advance to manage image that doesn't have adequate eminence to be examined. This absence of quality occur because of the artifacts namely hair, that can contrarily impact the recital of the succeeding steps. i) Color normalization is one more significant issue. Dermoscopy images may be captured using various gadgets [7] and lighting condition, rendering un-trust worthy color information. Thusly, it might be imperative to incorporate a color improvement steps. ii) Lesion segregation is a perplexing chore that has been completely analyzed in reviews. The incredible assortment of shape, sizes and color as well as various kinds and texture make it hard to build up a vigorous segregation procedure. To accomplish an appropriate feature extraction and lesion characterization, a precise segmentation is required. iii) A vital step to attain a discriminative depiction of the skin lesion is Feature extraction. Myriad research has been achieved by finding the appropriate topographies of lesion where it can be isolated into four classes: a) hand-crafted, b) dictionary-based, c) deep learning, d) clinically inspired features. iv) To diminish the dimensionality of the feature space in certain computer-aided systems by eradicating in appropriate feature. v) Classification of Lesion -A classification algorithm [8] is accomplished to predict a diagnosis. For diagnosis processes there are myriad classifiers have been performed. Recently, myriad hospitals and medical specialty clinics are using the computer based identification systems for detection of either carcinoma or melanoma. Image segmentation is vital task in analyzing dermoscopy image, since the skin lesion border extraction provides significant signs for precise designation. Key benefit of this computer based analysis is that patient doesn't necessity to take any excruciating diagnosing techniques like Biopsy in clinics. In this computer aided analysis, skin cancer dermoscopy image is given and it is exposed to numerous pre-processing and image enhancement. In segmentation, the cancer affected region is distinguished from the healthy skin. So as to diminish the classification complexity, specific exclusive features of malignant and benign melanoma are extracted. The following table (Table 1) depicts the various segmentation techniques to detect melanoma in skin lesion image. Moreover, performance of these methods and author details with publication year are included. This system is evaluated qualitatively and quantitatively. 
The segmentation and classification results of dermoscopic image dataset yield best results. 4 Halil Murat Unver and Ened Ayan [12] 2019 Merging the deep convolutional neural networks specifically YOLO and GrabCut algorithm, the skin lesion segmentation has performed in dermoscopic images. Produce higher resolution and dimension liberated segregation outcomes thru the amalgamation of any methods. 5 Lin Huang et al. [13] 2019 Object scale-oriented is used for training whereas for fine-tuning training fully convolutional networks were functional. This strategy is unpretentious and attained acceptable outcomes. 6 Tiejun Y et al. [40] 2019 Sampling with level set by integrating colour and texture Indicating that it produces finest outcomes than other existing ones in terms of accurateness and adaptableness. 7 Meskini E et al. [14] 2018 PSO procedure is purposeful to decide the greatest coefficients in lieu of converting RGB to gray level. Otsu method accustomed to detect the initial contour. Chan and Vese active contour is employed for ending lesion border detection. It leads to better segmentation results. Using gray scale images, less computational complexity than any one. Ning Wang et al. [16] 2018 Structure based convolutional neural networks for segmentation of skin lesion image Despite getting stable output (robustness), find a troublesome if more than one suspicious lesion spots in image. 10 Oludayo O et al. [17] 2018 A newfangled procedure namely PCDS including binary morphological exploration for segregation on dermoscopic image PCDS procedure leans towards vigorously elimination the occurrence of air bubble, bushy hair, and low contrast than any others. 11 Adria Romero L et al. [18] 2017 Deep-learning based method to rectify the delinquent of categorizing a dermoscopic image encompassing a skin lesion as malignant or not. Built around the VGGNet convolutional Neural Networks architecture includes uses the transmission erudition archetype. 12 Heydy Castillejos-F et al. [19] 2017 Wavelet-Fuzzy C-Means is applied for feature extraction. The detection of structures that befall in the lesion is extracted via Grey Level Co-occurrence Matrix. Erudition from the amalgamation of feature standards that represent either a malignant tumor or a benign lesion. MAEoC displays a better performance than single classifier. 13 Jafari MH et al. [20] 2016 CNN integrates local and global contextual statistics and outputs a label for every pixel, generating segregation mask those spectacles the lesion regions. This method reaches an excessive accuracy and sensitivity then any state-of-the art methods. 14 Catarina Barata [21] 2015 Compared feature of early fusion with late fusion. Capable of presenting the extracted color features, making the system and its verdicts more understandable for practitioners. 15 Faouzi Adjed et al. [22] 2015 Total Variation methods, a generality of Chan and Vese model for segmentation of melanoma. Results are qualitatively noble and more precise for the finding of the ROI that specifics inside this region. 16 Damilola A et al. [23] 2013 Designing a model of a structure that organize past Pigmented Skin Lesion (PSL) PSL image that acquire with the help of mobile, is given as input and specifically categorized the level of growth. 
17 Nadia Smaoui [24] 2013 Feature extraction tracked by the ABCD rule for the diagnosis done the computation of the TDV score Automatic pick-out of the seed pixel too the threshold certifies the finest outcomes and avoids overlay amongst the lesion and health skin. 18 Peyman Sabouri [25] 2013 An elementary border detection procedure using ZYNQ-7000 SoC, with VIVADO -HLS tool. Extended 5 x 5 canny edge detection procedure executed on with this entrenched platform has enhanced recital. 19 Ferreira P.M et al. [26] 2012 A footnote tool for segregate manually. It permits enrichment a ground truth database with the manual segregates in cooperation of pigments of skin lesions or any ROI. 20 M Emre Celebi et al. [27] 2012 Ensembles of thresholding methods are performed f or detecting lesion border. Easy to implement and tremendously fast, however it may not perform well on images with substantial amount of hair or bubbles. 21 Gerald Schaefer [28] 2011 Association the enhancement procedure with two segmentation algorithms namely Iterative segmentation and Co-operative neural networks In cooperation techniques are proficient of given that noble segmentation, which the color enrichment step is certainly vital as validated thru evaluation with outcomes gotten as of the original images. 22 Maciel Zortea et al. [29] 2011 A reiterative fusion classification tactic, using a weighted amalgamation of predictable posteriors of a linear and quadratic classifier. This system is simple and flexible adequate to permit testing with different classifiers. 23 Liu Jianli and Zuo Baoqi [30] 2009 Executing genetic algorithm thru optimization of weightiness and thresholds in Neural Networks. For quantitative analysis and identification of the skin cancer, this continuous edge and contour technique has used. 24 Padmapriya N et al. [31] 2009 Integrate color and texture for the skin lesions segmentation from unaffected skin region In cooperation the organizational pattern and the image color thru the distribution of the resulting features. IMAGE SEGMENTATION As the name proposes image segmentation [32], is to split an image into significant segments [33]. It progresses image analysis pre dominantly [34] for image interrogating and retrieval. In Digital Image processing, Segmentation [35] acts as a vital role for analysis. The lesion pigments are segregated [36] from the healthy skin has carried out through this segmentation techniques [20]. Due to the diverse varieties in skin and lesion, there is a great challenge for segmentation of Dermoscopic images. Moreover, between lesion and its adjoining there is a low contrast or smooth changeover. Myriad segmentation [37] has been proposed to over whelm these difficulties. This study has highlights on few latest segmentation techniques namely, Grab Cut, and Social group optimization. These algorithms are recently implemented successfully for segregation of skin lesion image and provide the reliable results. Grab Cut Algorithm It is an iterative semi-automatic image segmentation approach; that to be segregated is epitomized through a Graph. It is constructed with a least cost reduction function that generates the finest results. Moreover, its node is made up of the image pixel; besides two superfluous nodes namely sink and source are included to it. Association points of the foreground pixels and background pixels are epitomizes the source and sink node respectively. A cost function a graph is contingent upon the region and borderline details in the image. 
The Grab Cut routine pertains Gaussian Mixture Models (GMMs) acquiring the region details through the color information. The Grab Cut algorithm [12] is, Artificial Bee Colony Algorithm (ABC) It is an optimization process [14] based on Swarm-based methodology that imitates the clever scrounging honey bee's activities. Honey bees crowd is named as a Swarm, which might be effectively complete obligations over a societal collaboration. This algorithm is classified into three kinds of bees namely worker bees, onlooker bees, and scout bees. The first kind is to search the food from near the resources from their memory; moreover share the acquaintance of the nourishment assets with the onlooker bees which decide on the excellent / fitness nourishment sources. The last kind bees are making out from few worker bees that discard their nourishment assets and search for crisp ones. The total Swarm in ABC algorithm is split into two halves, where the first part resides in worker bees and the remaining part reside in onlooker bees. The entire solution of the Swarm is corresponding to the quantity of honey bees either workers or on lookers. The primary steps of ABC algorithm are as follows Social Group Optimization There are myriad social characteristics for example, honesty, caring, sympathy, goodness, courage, tolerance and so on, deceitful lethargic in people that should be saddled and channelized the suitable way to empower him to tackle complex tasks in life. Hardly some people may have obligatory level of all these social qualities to be fit for explaining, adequately and proficiently, complex issues in life. However, the complex issues could be cracked with the impact of qualities from others (or group) in the society. Assemblage resolving ability has arisen to be added efficacy than singular ability to take care of a given issue. In view of this concept, another optimization system is proposed [38] and titled as Social Group Optimization (SGO), a population based soft computing algorithm. In SGO, the populace is considering as a group of pupil, every one obtain experiences so that having volume for rectifying a problematic, which is equivalent to the 'fitness', the best solution/ person. The best one stabs to share knowledge among all people, thus progress the level of knowledge for the whole people in the group. This algorithm [39] incorporates two primary advances, in particular (i) enlightening step that harmonizes the place of agents/people with the objective function, and (ii) obtaining stage which permits the agents to determine the finest probable result for the issue beneath anxiety. CONCLUSION Research in dermoscopic images has been expanded impressively for the early find-out of dangerous skin cancer that has reduces the death rate. Since this skin lesion is in irregular shape, it is complicated for processing till now. Feature extraction is one of the most imperative phases for analysis of it. This study assesses myriad segmentation techniques used for trailing the skin cancer lesions boundary. The results of the study depicts in table form. Myriad approaches for segregation and skin lesion detection are depicted. The result of each algorithm is greatly influenced by type of images used for analysis. Despite having advantage, each existing method shaving certain disadvantage. Among the above-said methods, the GrabCut method and ABC method provide the finest results. Mostly, the hybrid methods provided reliable results than any sole algorithm. 
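To make the GrabCut step discussed in the section above concrete, a minimal usage sketch based on OpenCV's built-in implementation is given below. The rectangular initialisation (which assumes the lesion lies roughly in the centre of the frame) and the iteration count are illustrative choices, not parameters taken from the cited works.

```python
import cv2
import numpy as np

def grabcut_lesion(img, margin=20, iters=5):
    """Segment a lesion with OpenCV's GrabCut, initialised from a rectangle
    that assumes the lesion lies roughly in the centre of the image."""
    h, w = img.shape[:2]
    rect = (margin, margin, w - 2 * margin, h - 2 * margin)  # (x, y, width, height)

    mask = np.zeros((h, w), np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)   # background GMM parameters
    fgd_model = np.zeros((1, 65), np.float64)   # foreground GMM parameters

    cv2.grabCut(img, mask, rect, bgd_model, fgd_model, iters,
                cv2.GC_INIT_WITH_RECT)

    # Definite and probable foreground labels together form the lesion mask.
    lesion = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0)
    return lesion.astype(np.uint8)
```

The result can be refined interactively by editing the label mask and re-running cv2.grabCut with cv2.GC_INIT_WITH_MASK.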
This study provides guidance for researchers seeking to advance work in this field. In the future, new models are expected to be developed that are more reliable, user-friendly, and robust. CONSENT It is not applicable. ETHICAL APPROVAL It is not applicable.
The Leydig Steroid Cell Tumor in a Postmenopausal Woman with Clinical and Biochemical Hyperandrogenism: A Case Report Leydig cell tumors (LCTs) refer to tumors of the stroma of the genital strand, which are found mainly in postmenopausal women. The diagnosis of LCTs in postmenopausal women is associated with specific difficulties and is based on the identification of hyperandrogenism with clinical manifestations of virilization, which has an erased picture in postmenopausal women. LCTs require differential diagnosis with other causes of hyperandrogenism. We present the clinical case of a 55-year-old Russian postmenopausal patient with LCTs of the right ovary, significantly increased levels of androgens, and rapidly progressive clinical signs of hyperandrogenism. The patient underwent laparoscopic bilateral salpingo-oophorectomy, and the androgen indices reached average values by the first and third month after surgery. This case demonstrates that LCTs are often benign with a good prognosis and normalization of the clinical and laboratory manifestations of hyperandrogenism after surgical treatment. The type of surgery performed (bilateral salpingo-oophorectomy rather than unilateral) is recommended as the treatment of choice for LCTs in postmenopausal patients. Introduction According to the WHO Classification, Leydig cell ovarian tumors (LCTs) are relatively rare sex cord stromal tumors [1], occurring mainly in postmenopausal ages. The diagnosis of LCTs in postmenopausal women is challenging [8]. Firstly, due to the rare incidence of LCTs, the OB-GYN and other specialists are typically not familiar with this group of ovarian tumors. Secondly, LCTs may be asymptomatic [9], and these patients often do not have enlarged ovaries when estimated by bimanual vaginal examination and pelvic ultrasound [10], especially in the absence of Doppler scanning. Diagnosis of LCTs in this group of patients is based on the detection of hyperandrogenemia and/or clinical manifestations of virilization, which often has an erased picture against the background of the natural aging processes in postmenopause [2,[11][12][13]. LCTs require a differential diagnosis and the exclusion of other causes of hyperandrogenism [14]. The etiology and pathogenesis of LCTs as well as ovarian tumors in general are unknown. LCTs are usually benign with a good prognosis for normalization of clinical and laboratory manifestations of hyperandrogenism after surgical treatment [10,11,15,16]. The report aims to demonstrate a clinical case of a Leydig steroid cell tumor in a postmenopausal woman with late-onset, pronounced, rapidly progressive manifestations of clinical and biochemical hyperandrogenism. Clinical Information This paper presents a clinical case of a 55-year-old Caucasian (Russian) postmenopausal patient with severe, progressive clinical signs of hyperandrogenism (hirsutism, acne, alopecia, oily seborrhea, baryphonia) over the past year before visiting a doctor. The first examination was carried out in July 2019 at the outpatient department of the Scientific Center for Family Health and Human Reproduction (Irkutsk, Russia). In our work, we followed the ethical principles established by the Declaration of Helsinki of the World Medical Association of 1964 (revised, Brazil, October 2013). Informed consent was obtained before the investigation. The patient also agreed to publish the anonymized data. This publication was approved by the Local Ethics Committee. Complaints. 
Over the past year, the patient noted excessive hair growth along the upper lip, chin, and lower abdomen, hair loss on the head, acne on the face and chest, alopecia, baryphonia, a sharp increase in body weight (15 kg over 3 months) without changing the diet, and a persistent rise in blood pressure in the absence of antihypertensive therapy. Gynecological History. The patient did not report any menstrual irregularities before menopause. Her menarche occurred at 13 years of age. She had 4 pregnancies (1: miscarriage at a short term; 2: caesarean section at 39 weeks, fetal weight 3400 g; 3−4: medical abortions). Contraception: intrauterine devices (3 times for 5 years each). The patient reached menopause at the age of 47 years without any severe climacteric manifestations. Gynecological diseases: uterine fibroids since 2013. The patient did not take antiandrogenic drugs or any other medicines. The anthropometric examination was carried out according to generally accepted methods. The patient had morbid obesity (BMI was 41.8 kg/m²). Blood pressure: 140/85 mmHg. Regarding hirsutism, the patient revealed excessive growth of terminal hair in androgen-dependent zones (upper lip, chin, lower abdomen, inner thighs), ranked 5 points on the Ferriman−Gallwey scale according to her self-report, because she applied shaving (Figure 1). Acne and seborrhea (face, chest, and back) as well as alopecia of degree 1, according to the Ludwig scale, were also revealed. Bimanual gynecological examination: no pathological changes were revealed. Instrumental examination. Magnetic resonance imaging (MRI) was performed to exclude pituitary and extrapituitary masses. A multislice computed tomography (MCT) of the abdominal cavity, small pelvis, and retroperitoneal space with contrast (Aquilion One 640, Canon Medical Systems Corporation (formerly Toshiba Medical Systems), Ōtawara, Tochigi, Japan) did not reveal pathological changes in the adrenal glands and ovaries as possible sources of hyperandrogenism. Ultrasound examination of the adrenal glands, abdominal organs, and kidneys (ultrasound machine Aplio XG, Canon Medical Systems Corporation (formerly Toshiba Medical Systems), Ōtawara, Tochigi, Japan) did not clarify the cause of hyperandrogenism. The results of the transabdominal ultrasound and transvaginal gynecological examination (Voluson E8, GE Healthcare, Chicago, IL, USA) were as follows: in the right ovary, 27 mm × 23 mm × 23 mm in size, focal masses that were 10.9 and 7 mm in size with pronounced blood flow were detected (for comparison: the left ovary measured 21 mm × 18 mm × 18 mm without follicles and active blood flow). Notably, the patient had undergone transvaginal ultrasound examinations during the previous year, and no changes were detected.
Before surgery, examinations of the gastrointestinal tract (video esophagogastroduodenoscopy and video colonoscopy) were performed, and no pathological findings were detected. The endometrial Pipelle biopsy followed by a histological examination revealed fragments of the endometrial glands of the indifferent type. Laboratory examination revealed high levels of total testosterone (TT), free (FT), and bioavailable testosterone (BT) whereas the levels of other hormones (including TSH, prolactin, cortisol, and insulin) were within the normative range (Table 1). The level of CA-125 was lower than the reference values (23.62 units/mL). When assessing the general clinical blood test, a pronounced blood concentration was found (RBC 5.42 × 10¹²/L, HCT 50.5%, HGB 162 g/L); other biochemical parameters were within normal limits, including fasting glucose (5.22 mM/L). Based on the presented results of additional examination, the patient was diagnosed with hyperandrogenism (hirsutism, acne, alopecia, hyperandrogenemia) in postmenopause, most likely against the background of an androgen-producing tumor of the right ovary. Intervention The patient underwent laparoscopic bilateral salpingo-oophorectomy and a biopsy of the omentum (at the gynecological department of the Irkutsk Regional Clinical Hospital, Irkutsk, Russia). The patient was discharged on the second day in a satisfactory condition. The result of the histological examination showed a steroid cell tumor of the right ovary from Leydig cells (Reinke's crystals were absent) and focal hyperplasia of Leydig cells in the left ovary (Figure 2a-d).
After-Surgery Examination and Outcomes To assess the clinical and laboratory manifestations of hyperandrogenism, the patient underwent a dynamic outpatient dispensary examination after 1 and 3 months with evaluation of complaints and physical and laboratory hormonal examinations, conducted at the Scientific Center for Family Health and Human Reproduction. Within a month, the patient's acne and seborrhea improved, and by a 3-month follow-up, we registered a decrease in hirsutism to 2 points on the Ferriman−Gallwey scale and a decrease in body weight by 10 kg. In the hormonal test, 1 and 3 months after the surgery, androgen levels reached the reference values (all tests were performed in the same laboratory) (Table 1). We did not repeat the measurements of DHEAS, 17-OHP, prolactin, cortisol, and insulin at each visit because their levels at the baseline were within normal ranges. Discussion LCTs are sporadic tumors of the stromal sex cord of the ovary, most often occurring in postmenopausal women [17]. The etiology of the tumor is still unknown. However, we suppose that a certain contribution of previous morbid obesity to LCT development is not excluded. Recent experimental evidence suggests that tumor cells have estrogen receptors [18]. Simultaneously, obesity is often associated with hyperestrogenism. Thus, there is a theoretical basis for considering obesity in this study as an unfavorable factor. The above indicates the need for modification of lifestyle and nutrition, including the correction of gut microbiota, to prevent ovarian abnormalities influenced by obesity. A key role in diagnosing this disease is played by a careful collection of clinical history and a detailed physical examination of these patients. LCTs can be suspected when a patient suddenly presents rapidly progressive clinical manifestations of hyperandrogenism. However, the clinical symptoms of the disease do not provide reliable information regarding the genesis of hyperandrogenism. Hormonal research has a significant diagnostic value [13]. A woman who demonstrates hyperandrogenism with increased serum testosterone levels beyond the normal range is often considered to have ovarian and adrenocortical tumors [19]. In this study, we observed increased TT, FAI, and BT whereas DHEAS levels were within regular intervals, corresponding to the previously reported cases of LCTs [20][21][22]. Hyperandrogenism in postmenopausal patients requires differential diagnosis, which includes hormone-producing tumors of the adrenal glands and ovaries, a nonclassical form of congenital adrenal cortical dysfunction, Cushing's syndrome, and other causes (for example, medication intake). All these conditions were excluded in our patient.
Sex cord stromal tumors of the ovaries due to their small size (less than 4 cm) [2] and the absence of an increase in the size of the ovaries are often not visible without the use of highly sensitive instrumental diagnostic methods [23]. The ovarian mass was diagnosed in our case by a transvaginal pelvic ultrasound with the Doppler procedure. Transvaginal Doppler ultrasonography is very useful for detecting and diagnosing ovarian tumors [24]. However, conducting highly sensitive methods such as MSCT research and ultrasonography with dopplerography does not always have a positive diagnostic value in determining the etiology of hyperandrogenism in postmenopausal women. Instrumental methods have sensitivity limits, and the accuracy of the methods can also be affected by the qualifications of the doctor who conducts this study [10]. Several studies have shown the high resolution of MRI studies in tumors of the stroma of the ovarian sex cord. The sensitivity of this method was higher as compared to ultrasound in assessing postmenopausal hyperandrogenism [25]. The analysis of the presented clinical case compared with the literature data showed that only a comprehensive assessment and comparison of the disease's clinical signs with hormonal and instrumental follow-up methods can achieve success in diagnosing LCTs. After surgical treatment, the most cases of LCTs are characterized by a good correction of hyperandrogenism. It is proved that Leydig cells are not distributed locally but in more than 80% of the volume of ovarian tissue. It is generally accepted that LCTs should be determined if Leydig cell nodules exceed 1 cm in diameter while a size less than 1 cm is called Leydig cell hyperplasia; these two conditions are somewhat tricky to distinguish [26]. In the clinical case, we described a histological picture of Leydig cell hyperplasia in the second (left) ovary with an unpredictable prognosis of the disease. The type of surgery (bilateral oophorectomy) used in this study is explained by the previously described probability of hyperplasia and LCT in the contralateral ovary [27][28][29][30][31]. Study strength. Over the past decades, a relatively small number of clinical cases of LCTs and case series have been published [32][33][34][35][36]. Therefore, the demonstration of our observations is valuable and allows for replenishing the database available for pooled analysis. In our opinion, our study limitations are as follows: we did not measure IGF-1 when excluding acromegaly and did not perform a dexamethasone test or pelvic MRI. Conclusions The diagnosis of LCTs in postmenopausal women is associated with specific difficulties and requires differential diagnosis with other causes of hyperandrogenism. We have presented the clinical case of a 55-year-old Russian postmenopausal patient with LCTs of the right ovary and Leydig cell hyperplasia in the left ovary. After laparoscopic bilateral salpingo-oophorectomy, androgen indices reached average values by the first and third month. Therefore, this case supports that LCTs have a good prognosis after surgical treatment. The type of surgery performed (bilateral rather than unilateral salpingooophorectomy) is the treatment of choice for LCTs in postmenopausal patients because of the high likelihood of pathological changes in the contralateral ovary. Informed Consent Statement: The patient signed written informed consent to publish this paper. Data Availability Statement: Not applicable. Conflicts of Interest: The authors declare no conflict of interest.
Cellular Distribution Pattern of tjp1 (ZO-1) in Xenopus laevis Oocytes Heterologously Expressing Claudins Epithelial barriers constitute a fundamental requirement in every organism, as they allow the separation of different environments and set boundaries against noxious and other adverse effectors. In many inflammatory and degenerative diseases, epithelial barrier function is impaired because of a disturbance of the paracellular seal. Recently, the Xenopus laevis oocyte has been established as a heterologous expression model for the analysis of transmembrane tight junction protein interactions and is currently considered to be a suitable screening model for barrier effectors. A prerequisite for this application is a physiological anchoring of claudins to the cytoskeleton via the major scaffolding protein tjp1 (tight junction protein 1, ZO-1). We have analyzed the oocyte model with regard to the interaction of heterologously expressed claudins and tjp1. Our experiments have revealed endogenous tjp1 expression in protein and mRNA analyses of unfertilized Xenopus laevis oocytes expressing human claudin 1 (CLDN1) to claudin 5 (CLDN5). The amphibian cell model can therefore be used for the analysis of claudin interactions. Graphical Abstract Introduction The epithelium acts as a biological, chemical, and physical barrier against multiple threats and challenges and provides a structural border between organ and tissue compartments (Powell 1981). The zonula occludens (tight junction, TJ), which is a complex intercellular junction, controls the permeability and transport of substances across the epithelium and is therefore indispensable for the physiology of the organism (Zihni et al. 2016). The tetraspan TJ protein family of claudins is the main determinant of organ-and tissue-specific TJs. Thus, detailed knowledge about claudin-claudin 1 3 interactions is fundamental for the exertion of a suitable pharmacological influence on the barrier, because the paracellular seal is mainly provided by claudin-claudin protein interactions . The establishment of an alternative amphibian model system for barrier research has recently been described by our group (Vitzthum et al. 2019), which has shown that oocytes of the African claw frog Xenopus laevis can be employed for the analysis of claudin-claudin interactions. Recently, we have been able to expand this heterologous expression system to the blood-brain barrier protein CLDN5 and to extend the analytical approach by using hydrostatic pressure impulses for the further characterization of claudin trans-interactions (Brunner et al. 2020). Our current study focuses on fundamental aspects involved in the application of Xenopus oocytes for barrier research in the context of the cytoskeleton of the oocyte. Simultaneously with the establishment of the oocyte as a cell model for ion channel activity and transport mechanism, the cytoskeletal organization of the oocyte has been unveiled (Carotenuto and Tussellino 2018). As a result, a wide range of techniques had been established which allow a manipulation, disruption, and rigidization of the oocyte membrane. Some of the pharmacological strategies, e.g., the block of actin polymerization by cytochalasin D (Galizia et al. 2012(Galizia et al. , 2013 or the disruption of cytoplasmic structures by the emptied-out Xenopus oocyte technique (EOO) to test potential drug effects on the intracellular binding sites of the oocyte membrane (Ozu et al. 
2005 may become relevant in the clinical implementation of claudin-expressing Xenopus oocytes, as well. The major link between cytoskeletal actin filaments and tetraspan TJ proteins is provided by tjp1 (Furuse et al. 1994). Tjp1 is a cytoplasmic protein that contains PDZ-binding sites for barrier proteins including claudins (Furuse et al. 1998;Itoh et al. 1999), occludin (Furuse et al. 1994;Fanning et al. 1998), andtricellulin (Ikenouchi et al. 2005;Riazuddin et al. 2006). Further functions include binding to gene-regulating transcription factors, e.g., ZONAB (Balda and Matter 2000;Balda et al. 2003). Various alternative RNA splicing isoforms have been described for tjp1, namely a longer isoform with 80 extra amino acids (α + ) and a shorter isoform lacking this alpha domain (α − ) (Willott et al. 1992). Although tjp1 depletion has been shown to be lethal in mouse embryos (Katsuno et al. 2008), other authors have observed that claudins lacking the PDZ motif still localize to the TJ and can dynamically break and re-anneal into TJ strands (Ruffer and Gerke 2004;Van Itallie et al. 2017). Moreover, tjp1 plays a fundamental role in the kinetics of TJ assembly and has a stabilizing effect on the solute barrier through coupling to the cytoskeletal ring of the cells (Van Itallie et al. 2009). But also a manipulation through sense and antisense Shroom oligonucleotide injection as shown for xShroom1 has an impact on membrane protein function and maintenance mediated through the effects on amiloride-sensitive Na + currents in Xenopus oocytes (Zuckerman et al. 1999;Assef et al. 2011;Palma et al. 2016). Many of these regulatory proteins do share similarities in domains with PDZ. In our current study, we present a first assessment of the localization of the heterologously expressed claudins and PDZcontaining tjp1, which is of major interest for the employment of the amphibian cell model in membrane barriology. Xenopus laevis is a widely used model organism for developmental biology and translational research (Nenni et al. 2019), and thus, its genomic evolution and embryonic development have previously been described in detail Segerdell et al. 2008;Session et al. 2016). When Xenopus oocytes have been employed for the heterologous expression of proteins, unfertilized oocytes of stages V and VI have been used with a gene expression for tjp1 S and for tjp1 L of 0.9 transcripts per one million mapped reads (TPM) and of 1.9 TPM, respectively (Session et al. 2016). The relative protein expression for oocytes at stage VI is described as being 0.096, which represents the decimal fraction at this stage of total protein agglomerated over all profiled stages (Peshkin et al. 2019). In embryonic development, zygotic transcription starts from the 4000-cell stage onward (Fesenko et al. 2000), but TJs and associated structures can be observed from the (fertilized) 2-cell stage onward and are translated from maternal stores of mRNA (Cardellini et al. 1996;Heasman 2006). An investigation of the influence of endogenous tjp1 expression on claudin-expressing oocytes and an evaluation of the functionality of the protein-scaffold interaction are essential requirements for further application of Xenopus oocytes in the context of barrier research. In this study, we have screened Xenopus laevis oocytes for their endogenous expression and localization of tjp1 protein in context with heterologous claudin expression. Additionally, we have analyzed possible claudin-specific regulatory effects on tjp1 gene expression. 
Animals Oocytes were obtained from mature female African claw frogs. Animal treatments were conducted with approval by the animal welfare officer for the Freie Universität Berlin and under the governance of the Berlin Veterinary Health Inspectorate (Landesamt für Gesundheit und Soziales Berlin, permit O 0022/21). Anesthetics and Surgical Procedure To achieve surgical anesthesia of the frogs, they were transferred into a bath solution of buffered 2 g/L MS222 (ethyl 3-aminobenzoate methanesulfonate, Sigma-Aldrich, Taufkirchen, Germany, pH 7.5) for 5-10 min at 20 °C. Righting and corneal reflexes were used for the assessment of surgical anesthetic depth. Skin and abdominal muscle incisions were made to access the Xenopus ovaries. cRNA Preparation Relevant nucleotide coding consensus sequences were used for the synthesis of the human cRNA of CLDN1 to CLDN5 (ShineGene Bio-Technologies Inc., Shanghai, China; Thermo Fischer Scientific, Henningsdorf, Germany). Claudin sequences were cloned into suitable high copy ampicillin-resistant pGEM for transformation in competent DH10b Escherichia coli. A commercial T7 RNA-polymerase-based approach (T7 RiboMAX RNA Production System and Ribo m 7 G Cap Analog, Promega, Walldorf, Germany) was used according to the manufacturer's instructions to generate cRNAs for injection into the amphibian germ cells. Oocyte Isolation and cRNA Injection Follicular cell layers were removed by enzymatic digestion at room temperature for 90 min in 1.5 mg/ml collagenase (NB4 Standard Grade, Nordmark Pharma, Germany) dissolved in oocyte Ringer solution (ORi). Cells were then separated by incubation in Ca 2+ -free ORi (Vitzthum et al. 2019) for 10 min on a mechanical shaker at 50 rpm. Oocyte stages V and VI were injected (Nanoliter 2010, World Precision Instruments, Sarasota, USA) with 50.6 nl of 10 ng/µl, 20 ng/µl, or 40 ng/µl cRNA encoding for human CLDN1 to CLDN5 or with RNase-free water as controls. Based on the total cRNA amounts, this gave three experimental groups: 0.5, 1, and 2 ng cRNA/oocyte. Injected oocytes were incubated for 3 days at 16 °C in ORi for protein expression. Isolation of Membrane Fractions and Immunoblotting For Western blot analysis, ten injected oocytes were blended and resuspended in 500 µl oocyte homogenization buffer containing (in mM) 5 MgCl 2 , 5 NaH 2 PO 4 , 1 EDTA, 80 sucrose, and 20 Tris, pH 7.4 in accordance with the plasma membrane buffer established by Leduc-Nadeau et al. (Leduc-Nadeau et al. 2007). Oocyte suspensions were centrifuged twice at 200 rpm for 10 min at 4 °C, and the supernatant was centrifuged at 13,000 rpm for 30 min at 4 °C. The pelletized cell membrane fractions were resuspended in homogenization buffer. Membrane samples were then quantified with Protein Bioassay according to the manufacturer´s instruction in a 96-well plate (#500-0119 RC DC Protein Assay, Bio-Rad, Munich, Germany). Bovine Serum Albumin Standard (ThermoFischer Scientific, Henningsdorf, Germany) served as the protein standard. Before the loading of the gels, samples were mixed with 4× Laemmli buffer (Bio-Rad Laboratories, Munich, Germany). Samples were loaded onto a stain-free acrylamide gel (TGX Stain-Free FastCast Acrylamide Kit, 10% #1610183, Bio-Rad Laboratories, Munich, Germany) and electrophoresed. The proteins were transferred to PVDF membranes, and the binding of nonspecific proteins was blocked with 5% nonfat dry milk in Tris-buffered saline for 60 min. 
We detected the proteins of interest by incubation of the membranes with primary antibodies raised against the TJ proteins CLDN1 to CLDN5 and tjp1 (#51-9000, #51-61600, #35-2500, #32-9400, #34-1700, Life Technologies, Carlsbad, USA, and LS-C145545-100, Biozol, Eching, Germany) overnight at 4 °C. Peroxidaseconjugated secondary antibodies (#7074, #7076 Cell Signaling Technology, Danvers, MA, USA) were incubated with the membranes for 45 min at room temperature and detected using Clarity Western ECL Blotting Substrate and Chemi-Doc MP (#1705061, Bio-Rad Laboratories GmbH, Munich, Germany). Immunofluorescence Cytochemistry Using our established protocols, oocytes were paired for the analysis of claudin trans-interactions (Brunner et al. 2020). Briefly, vitelline membranes were removed, and claudinexpressing oocytes were clustered to induce adhering contact areas. Oocyte pairs were incubated in ORi at 16 °C for 24 h. Oocytes were fixed in 4% PFA (16% paraformaldehyde, E15700, Science Service, Munich, Germany) for 4 h at room temperature followed by dehydration in an alcohol gradient to xylol. Samples were embedded in paraffin, crosssectioned (5 µm), and mounted onto microscope slides. Primary antibodies were the same as those for immunoblotting, and secondary antibodies were conjugated with photostable Alexa Fluor 488 and Alexa Fluor 594 dyes (Life Technologies, Carlsbad, USA). Slides were examined by confocal laser-scanning immunofluorescence microscopy (Zeiss LSM 710). RNA Isolation and cDNA Synthesis The Nucleospin RNA (Macherey & Nagel, Dueren, Germany) commercial kit was used for RNA extraction from 10 oocytes per sample. NanoPhotometer P330 (Implen GmbH, Munich, Germany) was employed to determine the levels of possible contamination. An RNA absorption ratio of light at 260/280 nm > 2 was considered to indicate that the samples were free of protein contamination. A 260/230 nm absorption ratio of 1.7-2 was considered to indicate that the samples were free of buffer salt contamination. cDNA was synthesized using iScript (Bio-Rad, Munich, Germany) according to the manufacturer's instructions. A-RT sample (without reverse transcriptase) was used as a negative running control. For reverse transcription, a Biorad iCycler iQTM (Biorad, USA) was used with the protocol given in Table 1. Qualitative and Quantitative Real-Time PCR For PCR analysis, Xenopus laevis odc1 (ornithine decarboxylase 1), gapdh (glyceraldehyde-3-phosphate dehydrogenase), and h4c4 (H4 clustered histone 4) were used as housekeeping genes, and tjp1 as the gene of interest. Primers (Table 2) were purchased from Eurofins Genomics (Eurofins, Ebersberg, Germany). For qualitative PCR, cDNA samples from claudin-injected oocytes were pooled and transcribed using Taq PCR master mix (Qiagen, #201443, Düsseldorf, Germany) according to the instructions of the manufacturer. Following gene amplification (Table 3), PCR products were loaded onto a 2% agarose gel in TBE buffer. Additionally, quantitative PCR was performed using iQTM SYBR Green Supermix Kit (Biorad, USA) with three replicates per reaction and three technical replicates. Double-distilled H 2 O and -RT samples served as negative controls. As primer efficiency ranged between 1.93 and 2.03, gene expression was normalized relative to the housekeeping genes and to the control group by using the Delta-Delta CT method. Statistical Analysis Statistical analysis was performed with JMP Pro 15.0.0 (NC, USA). 
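The relative quantification and the subsequent test on the Delta CT values can be illustrated with a minimal numerical sketch. The Ct values and group sizes below are hypothetical placeholders (the study itself used JMP and three housekeeping genes); the sketch only shows a Delta-Delta CT fold-change calculation and a one-way ANOVA of the kind described here, assuming roughly 100% primer efficiency (base 2, close to the measured 1.93-2.03).

import numpy as np
from scipy import stats

# Hypothetical Ct values for one claudin-injected group and the water-injected
# control group; tjp1 is normalized against the mean Ct of the housekeeping genes.
ct_tjp1_treated = np.array([24.1, 24.4, 23.9])
ct_hk_treated = np.array([18.2, 18.0, 18.3])
ct_tjp1_control = np.array([24.6, 24.3, 24.8])
ct_hk_control = np.array([18.1, 18.2, 18.0])

dct_treated = ct_tjp1_treated - ct_hk_treated      # Delta CT, injected oocytes
dct_control = ct_tjp1_control - ct_hk_control      # Delta CT, water controls

ddct = dct_treated.mean() - dct_control.mean()     # Delta-Delta CT
fold_change = 2.0 ** (-ddct)                       # n-fold expression vs. controls

# Normality check and one-way ANOVA on the Delta CT values (two groups shown).
w_stat, p_norm = stats.shapiro(np.concatenate([dct_treated, dct_control]))
f_stat, p_anova = stats.f_oneway(dct_treated, dct_control)
print(f"fold change {fold_change:.2f}, Shapiro p {p_norm:.3f}, ANOVA p {p_anova:.3f}")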
The normal distribution was checked using the Shapiro-Wilk test, and Delta CT values were analyzed by one-way analysis of variance (ANOVA). Heterologous Expression of TJ Proteins in Xenopus oocytes The successful expression and integration of claudins into the Xenopus laevis plasma membrane was verified by Western blot analysis. After 3 days of expression, membrane fractions of 10 oocytes having had injections of 0.5 ng/oocyte, 1 ng/oocyte, or 2 ng/oocyte claudin cRNA were loaded onto a stain-free acrylamide gel. All membranes revealed claudinspecific signals at the predicted protein mass in accordance with the injected cRNAs (20-27 kDa). RNAse-free waterinjected oocytes were treated identically and showed no signal for the endogenous expression of claudins (Fig. 1). Samples were also incubated with tjp1 antibody to check the endogenous tjp1 expression in the claudin-injected cells. All tested oocytes showed tjp1 isoform-specific signals at 187 kDa and 195 kDa. Oocytes Show Specific Signals of tjp1 in the Submembranous Space After removal of vitelline membranes, claudin-expressing and water-injected control oocytes were clustered into pairs. Both control and claudin-injected oocytes showed specific immunohistochemical signals after incubation with tjp1 antibodies. The signal was mainly located in the submembranous space of the cells and appeared as a submembranous belt immediately underneath the oocyte plasma membrane (Fig. 2). This accumulation of signals was particularly distinct in the CLDN1-, CLDN2-, and CLDN5-expressing cells and in naïve oocytes. In CLDN2-and CLDN3-expressing cells, claudin and tjp1 signals were selectively colocalized at the plasma membrane and resulted in a yellow signal (arrows). Claudin Injection Does Not Engage Endogenous tjp1 mRNA Expression Tjp1 was consistently detectable by qualitative PCR (Fig. 3). We therefore performed quantitative real-time PCR to investigate the effect of claudin injection on tjp1 mRNA levels. Delta CT values were analyzed for all three concentrations by one-way analysis of variance (ANOVA) to determine the effect of claudin injection and water-injected controls, F (5, 48) = 0.2367, p ≥ 0.9). All claudin-injected oocytes showed a negligible impact of the claudin injection on tjp1 expression compared with water-injected control oocytes, resulting in a mild n-fold upregulating trend of 1.28-2.10 for tjp1 expression in claudin-expressing cells (not significant; Table 4 and Fig. 4). Discussion In our present study, we have further characterized the established heterologous expression system of Xenopus oocytes for the analysis of barrier proteins (Vitzthum et al. 2019). As an interplay between the cytoskeletal scaffold and the expressed barrier proteins provides the foundation of physiological barrier formation (Rodgers et al. 2013), the investigation of interactions between these proteins in Xenopus oocytes appears mandatory for further applications of the model system. We employed immunoblotting and immunohistochemical staining in order to gain a comprehensive understanding of the expression, localization, and interaction of heterologously expressed claudins with tjp1 in oocytes at developmental stages V and VI. Xenopus oocytes at stage V-VI express small amounts of transcripts of claudin mRNA, ranging from approximately 0.06 up to 44.7 TPM (Session et al. 2016), and so, endogenous claudin protein expression might be expected in immunoblots. 
But the protein expression of claudins is described as a mere fraction, e.g., 0.001 for cldn3 (decimal fraction at stage VI of total protein agglomerated over all profiled stages), and the anti-human CLDN antibodies allowed a clear distinction to be made between injected and thus overexpressing oocytes and naïve germ cells. Nevertheless, we were able to verify endogenous tjp1 protein expression and to localize the protein to the submembranous space of naïve and claudin-expressing oocytes. In accordance with the literature in which both isoforms of tjp1 have been reported to be present in the Xenopus embryo from the first cleavage onwards (Fesenko et al. 2000), we were able to detect α + and α − tjp1 in oocytes at stages V and VI. Furthermore, our quantitative PCR analyses revealed that claudin expression did not significantly affect tjp1 mRNA expression levels. Previously, tjp1 has been shown to have a modeling effect on cell-cell contacts by regulating nuclear processes (Gottardi et al. 1996). In addition, claudins have been described as transcriptional regulators (Hagen 2017) that not only affect other transcription factors, e.g., ZONAB (Ikari et al. 2014), but also have the ability to interact with Fig. 1 Immunoblot analysis of tight junction (TJ) proteins in X. laevis oocytes Cell membrane lysates applied to 10% stain-free acrylamide gel and transferred onto PVDF membranes. All claudin-injected oocytes membranes revealed claudin-specific signals at the predicted protein mass in accordance with the injected cRNAs (20-27 kDa). RNAse-free water-injected oocytes were treated identically and showed no signal for endogenous expression of claudins. However, specific signals for both tjp1 isoforms α + (195 kDa) and α − (187 kDa) in claudin-expressing oocytes and water-injected controls confirmed endogenous tjp1 protein expression Fig. 2 Immunohistochemical staining of TJ proteins in X. laevis oocytes All claudin-injected oocytes revealed claudin-specific signals at their cell membranes in accordance with the injected CLDN cRNAs (green). RNAse-free water-injected oocytes were treated identically and showed no signal for the endogenous expression of claudins (* representative image of water-injected oocyte screened for endogenous CLDN3 expression). Additionally, immunofluorescent staining in claudin-and water-injected oocytes revealed specific tjp1 signals (red) in oocytes, whereas in no primary antibody controls, no specific signals were detected by confocal microscopy. Tjp1 signals were concentrated in the submembranous space and appeared as a belt-like structure. In CLDN2-and CLDN3expressing oocytes, claudin and tjp1 signals were selectively colocalized at the plasma membrane and resulted in a yellow signal (arrows). Scale bars: 20 mm the scaffold. Schlingmann et al. have demonstrated that the binding of CLDN5 to tjp1 in alveolar epithelial cells results in paracellular leakage and the rearrangement of TJs by inhibiting the interaction of CLDN19 with the scaffold (Schlingmann et al. 2016). Moreover, a reduction in tjp1 to CLDN4 binding has been shown to lead to lower CLDN4 expression (Hamada et al. 2013). In our study, heterologous claudin expression did not affect tjp1 gene expression in the oocytes, and claudin-scaffold interactions were reflected by only a partial colocalization of the two binding partners (CLDN and tjp1) at the same intracellular location, as shown by confocal laser-scanning analyses. 
Unlike the localization in epithelial cells and cell culture experiments in which tjp1 and claudins largely colocalize in the apical part of the cells, the clear distinction between the membranous claudins and the submembranous scaffolding protein tjp1 becomes more apparent, because of the large size of the germ cell of up to 1300 µm. Nevertheless, the limited counts of colocalization indicate that claudins and tjp1 are only intermittently associated corresponding to the dynamic coupling of claudin strands with the cytoskeleton (Van Itallie et al. 2017). Fig. 3 Qualitative PCR of tjp1 and housekeeping genes odc1, gapdh, and h4c4 PCR products were loaded onto a 2% agarose gel in TBE buffer. Pooled samples of claudin-injected oocytes showed gene products in accordance with the predicted amplicon size (Table 2) of the housekeeping genes and the gene of interest In the literature, actin filaments of oocyte stage VI have been observed to surround the germinal vesicle and also extend from the cortex into the subcortical cytoplasm. After this stage, a dynamic change of actin distribution has only been described after the meiotic arrest of prophase I is terminated and fertilization occurs (Roeder and Gard 1994;Christensen et al. 1984). Furthermore, independent of the interaction with tjp1 or actin, claudin strands are capable to break and re-anneal (Van Itallie et al. 2017), although the accumulation of the tjp1 signal in the submembranous space is described as an indicator of the formation of the subjunctional cytoplasmic plaque of the TJ (D' Atri and Citi 2002). The accumulation of tjp1 in a submembranous belt in oocytes resembles the concentration of tjp1 in the junctional complex region in polarized epithelial cell lines (Umeda et al. 2006), and thus, the formation of the submembranous belt in Xenopus oocytes might mirror this process of organization. We conclude that, in this experimental setting, physiological binding is unhampered. The reason that the submembranous signal is more apparent in CLDN1-, CLDN2-, and CLDN5-expressing cells compared with CLDN3-and CLDN4-expressing cells remains unclear and needs to be examined in more detail in future studies. An overexpression of CLDN3 and CLDN4 has been described to enhance tumorigenesis of human ovarian surface epithelial (HOSE) cells. The more diffuse pattern of tjp1 in the oocytes might therefore result from tjp1 interacting with not only claudins, but also numerous other cytosolic and nuclear proteins, e.g., pten and zonab, which play a role in the regulation of germ cell function (Heinzelmann-Schwarz et al. 2004;Agarwal et al. 2005). Additionally, Nomme et al. have identified factors of claudin specificity and affinity of binding to cytoplasmic scaffolding proteins, such as tjp1. They analyzed the binding of claudins to the tjp1 PDZ1 domain and discovered that the binding can be influenced by the presence or absence of a tyrosine residue at P -6 and that the affinity is reduced if the tyrosine is modified by phosphorylation (Nomme et al. 2015). However, these findings can not depict a full molecular explanation for the structural distinct cellular localization of tjp1 in the Xenopus oocytes, because CLDN1 and CLDN4 do not share this tyrosine residue at P -6 . Moreover, a potential difference might arise because of a disparate distribution of yolk platelets along the animal-vegetal axis of the oocytes (Danilchik and Gerhart 1987), rather than because of differences with regard to claudin family members. 
Although Xenopus laevis is widely used for the investigation of transport mechanisms, signaling pathways, and human hereditary genetic diseases (Miller and Zhou 2000;Blum et al. 2009;Blum and Ott 2018), the use of Xenopus oocytes for barrier research is a novel approach. Two studies have recently been conducted on the mechanistic suitability of the oocytes for barriology by our group (Vitzthum et al. 2019;Brunner et al. 2020). The current study contributes to this specific field of barrier research and encourages the application of the model. Despite the information that a single or two-cell (paired oocyte) model can contribute to a multifunctional and multicellular barrier system being limited, it nevertheless allows an in-depth examination of claudin interaction in a restricted and therefore verifiable, reproducible, and cost-efficient model system. In our experimental setup, the effects of claudin expression on the cytoskeletal scaffold are demonstrated for tjp1. In a further step toward a better understanding of tjp1-CLDN colocalization, Förster resonance energy transfer (FRET) technology or coimmunoprecipitation (coIP) could be conducted in follow-up studies to gain a sterical perception of the involved mechanism and give proof of an interaction between the binding partners. In particular, the detection of small quantities of endogenous tjp1 in Xenopus oocytes might be improved as it was shown for cystic fibrosis transmembrane regulator (CFTR) protein localization by Kreda et al. (Kreda and Gentzsch 2011). Additionally, a coinjection of CLDNs and tjp1 cRNA leading to a tjp1 overexpression may lead to further insights into the tjp1-CLDN interaction and might also allow a manipulation of CLDN function through the utilization of tjp1 orthologs and mutants. This might further allow clinical implications, toward an understanding and therapeutical options including the role of the actin cytoskeletal scaffold in barrier-related diseases, e.g., IBD (Kuo et al. 2021). Although tjp1 plays an important role with regard to TJ assembly, structure, and regulation, the development of a functional barrier is dependent on a variety of factors, such as MARVEL domain proteins (Raleigh et al. 2010), junctional adhesion molecules, and cingulin Zihni et al. 2016;Vasileva et al. 2020). Indeed, tjp1 can be regarded as a key point of TJ scaffolding, as reduced tjp1 expression correlates with increased TJ permeability and ineffective epithelial healing processes (Kuo et al. 2021). Thus, our present examination of tjp1-CLDN interactions provides a timely evaluation of the accessibility of the amphibian cell model for barrier research. Data Availability In accordance with the rules of good scientific practice, all data are archived and available on request. Conflict of Interest The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. 
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2022-06-24T06:17:50.644Z
2022-06-23T00:00:00.000
{ "year": 2022, "sha1": "eb9f347178a3eaec2a3c6b8b4fb3bf137403540c", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00232-022-00251-z.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "23903dc75f22ed0604a62aa3d50d9f264de3e729", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
224995028
pes2o/s2orc
v3-fos-license
How to get a grip on testing The UK has grand ambitions for testing, but is struggling to get it right. There are solutions, reports Clare Wilson " Without testing, which is our eyes and ears, we don't understand where the outbreak is going" THIS month, UK Prime Minister Boris Johnson announced an ambition to increase the country's capacity for coronavirus testing to several million tests a day. Billed as Operation Moonshot, the idea was received with widespread incredulity. The UK is currently failing to meet demand for coronavirus testing, with roughly half a million daily requests outstripping supply by up to fourfold. Yet there are also reports of new technologies in development that could make testing faster and cheaper. If the UK had the capacity to test not just those with symptoms of covid-19, but to regularly test symptomless people too, it could be a game changer in the ability to control the disease. From the beginning of the pandemic, many countries have struggled to provide enough coronavirus tests for all those who need them. A lack of tests is disruptive because anyone with symptoms that resemble those of covid-19 has to stay at home and isolate, and must also be treated as infectious within hospitals. Insufficient tests also make it impossible to accurately track how the epidemic is progressing in a region, whether cases are rising or falling. "Without testing, which is our eyes and ears, we don't understand where this is going," says Stephen Griffin at the University of Leeds in the UK. The UK faced this problem initially in its first wave of covid-19, when even hospitals were going short of tests. To expand capacity, five large facilities known as Lighthouse Labs were set up to process polymerase chain reaction (PCR) tests, a well-established technique. In this case, the tests are used to compare samples from a nose or throat swab to the genes of the new coronavirus. The labs are dotted around the UK and, for a few months, capacity seemed largely sufficient. As UK cases have begun to increase again in recent weeks, though, demand has risen. The drivers seem to be people socialising and returning to work and school. Although children are generally less affected by the coronavirus, schools are known hotbeds for spreading coughs, colds and flu, which have similar symptoms to covid-19 and so can trigger test requests. Media reports have been full of stories of testing centres with empty car parks, while people trying to book online are being offered appointments hundreds of kilometres away. The bottlenecks aren't at the testing centres where swabs are taken, but at the Lighthouse Labs where they are sent. If labs fall behind on processing, they tell testing centres not to release more appointments. Although the UK's current capacity for tests is around 250,000 a day, some are reserved for hospitals, so only about 160,000 are available to the public. Based on estimates of phone requests and website usage, about three or four times as many people are seeking tests as are able to get one, according to comments made to members of parliament by Dido Harding, head of England's test-and-trace scheme. Two further Lighthouse Labs are opening in the next few weeks, which should increase testing capacity to 500,000 a day by the end of October. However, Harding admitted to MPs that, by then, it QR codes are used to scan in samples at a new testing centre in Glasgow How to get a grip on testing The UK has grand ambitions for testing, but is struggling to get it right. 
There are solutions, reports Clare Wilson Any mass screening of people who don't have symptoms can hit the problem of "false positives". Imagine a test that is 95 per cent accurate when it produces a positive result. It is important to remember that very few of the screened group really are infected -say about one in 1000 at any time. Of the 999 people without the coronavirus, 949 would correctly test negative, and 50 would wrongly test positive. Assuming that the one person out of the 1000 who really has the virus correctly tests positive too, then for every 51 positive test results, 50 would be wrong. Across the population, that would lead to thousands of people unnecessarily staying at home and self-isolating. There is a solution, says Julian Peto at the London School of Hygiene & Tropical Medicine. With fast tests that give a result in an hour or two, anyone who gets a positive result could have another test, reducing the number of people wrongly told to isolate. "The idea of false positives is a complete red herring," he says. False positives Travellers queue at a covid-19 testing centre at Frankfurt Airport, Germany equipment in research labs at universities and hospitals. Peto now says that increased capacities for other tests -for example, a genetic test called RT-LAMP -could make mass screening easier. Unlike PCR, this doesn't need sophisticated lab equipment, but merely a heater to warm the sample to about 65°C. It gives a result in 20 to 45 minutes. It might be possible to speed up testing further by switching from looking for the virus's genes to hunting for molecules on its surface, known as antigens. These can be detected using artificial versions of the antibodies of our immune system that normally recognise viral antigens. This is the same mechanism as home pregnancy tests and, like these, coronavirus antigen tests can produce fast results. Antigen tests aren't generally as sensitive as genetic ones, but that has both pros and cons. They can fail to spot some people whose infection is waning and so have relatively few virus particles in their nose or mouth, but still have enough viral genetic material to Moonshot, other approaches may be needed. Earlier this year, Julian Peto at the London School of Hygiene & Tropical Medicine proposed a plan in medical journal the BMJ to use mass testing to eliminate the coronavirus from the UK. To achieve the necessary level of testing, he proposed commandeering all the PCR ALEX KRAUS/BLOOMBERG VIA GETTY IMAGES still won't be enough to meet rising demand. Official documents leaked earlier this month suggest that Operation Moonshot is aiming for a capacity of 10 million tests a day by early next year. This is around the same number that would be needed to eliminate the virus from the UK, by testing everyone in the country once a week, although the government hasn't stated that elimination is the goal. Instead, it has focused on testing as a means for people to return to regular activities. New testing methods could help. One option is a small machine called NudgeBox that can process a sample on the spot and give a result in 90 minutes, instead of it having to be sent to a lab. Developed by UK biotech firm DNANudge, the device is already being used in eight hospitals, and the UK has ordered 5000 more. Recent research shows it is almost as sensitive as standard lab testing. "It allows you to start therapy much more quickly," says Graham Cooke at Imperial College London, who led the study. 
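The false-positive arithmetic sketched above can be reproduced with a few lines. The 95 per cent figure is read here as the test's specificity, and a matching 95 per cent sensitivity is an added assumption for illustration rather than a number from the article.

def screening_outcomes(population, prevalence, sensitivity, specificity):
    """Expected counts when mass-screening mostly uninfected people."""
    infected = population * prevalence
    healthy = population - infected
    true_pos = infected * sensitivity
    false_pos = healthy * (1.0 - specificity)
    ppv = true_pos / (true_pos + false_pos)   # share of positive results that are real
    return true_pos, false_pos, ppv

# 1000 people screened, about one in 1000 truly infected, test "95 per cent accurate".
tp, fp, ppv = screening_outcomes(1000, 1 / 1000, 0.95, 0.95)
print(f"true positives ~{tp:.1f}, false positives ~{fp:.1f}, PPV ~{ppv:.1%}")
# Roughly one real case against about 50 false alarms, which is why a rapid
# confirmatory re-test for every positive result largely defuses the problem.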
While this kind of machine can help in hospitals, it can only process one sample at a time and so turn around at most 16 samples a day. That means it can't raise testing capacity enough to screen millions of people daily unless hundreds of thousands of devices are manufactured. What is needed are mass-testing devices that process multiple samples at once. Various other kinds of PCR tests are in use or in development around the world that could help. Some of these are cheaper or easier than the standard lab tests, or use different chemicals to get round any shortages of the commonly used ones. But if testing capacity is to be boosted to the levels mooted in Operation be picked up by PCR. However, such people are less likely to be spraying virus into the air from their lungs, so antigen tests might be good for quickly picking out only people who are infectious. For now, UK mass-testing schemes are sticking with genetic tests. There are two large trials combining saliva testing with the fast RT-LAMP method in two cities. In Salford, screening of people at indoor and outdoor venues is due to begin next month. In Southampton, children at several schools are starting weekly checks. It was initially thought that saliva tests wouldn't catch as many positive cases as swab tests because mucosal fluid from inside the nose or the back of the throat should in theory contain more virus particles. So the first tests approved were swab ones. But it now seems that testing people's saliva is effective. The US Food and Drug Administration granted emergency approval last month to two saliva tests. Such tests would be especially useful in schools because administering the invasive swab tests is particularly hard with small children. A ready supply of tests to enable mass screening would allow for a radical new containment strategy, as testing would include people who are infected but have no symptoms and so can spread the virus unknowingly. If enough people are reached and all infected individuals self-isolate, it should reduce the virus's prevalence. There are big questions around false positives (see box, left) and who would pay for the tests, and it would also be vital to test visitors and returning travellers, as is happening in Germany. But done properly, these two strategies together might be able to more or less eliminate the virus from a nation without a vaccine in sight. ❚
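The throughput figures quoted in the article imply some simple capacity arithmetic, sketched below. The device throughput and the Moonshot target come from the text, while the population figure is an approximate round number used only for illustration.

minutes_per_day = 24 * 60
samples_per_device = minutes_per_day // 90        # one sample per 90 minutes -> 16 a day

moonshot_target = 10_000_000                      # tests per day, leaked target
uk_population = 67_000_000                        # approximate, for illustration only
weekly_screening_rate = uk_population / 7         # testing everyone once a week

devices_for_target = moonshot_target / samples_per_device
print(f"{samples_per_device} samples/day per device; "
      f"~{devices_for_target:,.0f} such devices would be needed for 10 million tests/day; "
      f"weekly whole-population screening needs ~{weekly_screening_rate:,.0f} tests/day")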
2020-09-29T13:08:47.155Z
2020-09-25T00:00:00.000
{ "year": 2020, "sha1": "60ff7ed7112d9f68c514c3b7b8aea87030a52e6a", "oa_license": null, "oa_url": "https://doi.org/10.1016/s0262-4079(20)31686-9", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "82ae5329dfbd239ea96578febcc405f52d3bae7b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "History" ] }
219929101
pes2o/s2orc
v3-fos-license
Theoretical and experimental analysis of photovoltaic module characteristics under different partial shading conditions Received Jul 6, 2019 Revised Dec 2, 2019 Accepted Feb 16, 2020 Recently, the renewable energy resources have gained more attention in the electricity sector as promising technology to tackle the depletion in the traditional energy resources. Solar energy grows rapidly due to its vast applications. The performance of Photovoltaic (PV) system is affected by partial shading that results from building, clouds, and fallen leaves. This paper investigates theoretically and experimentally the impacts of various cases of partial shading; such as vertical string, horizontal string, and single cell at environmental conditions on the current-voltage and power-voltage characteristics of 88 W PV panel. In addition, diagonal shading with multi steps is considered in the analysis. The experiments are conducted with considering various parameters; such as shading position and ratio to validate the simulated results. The results show that at 100% shading condition, the maximum power drops by 99.36 %, 43.7%, and 41.15% for horizontal, cellular and vertical shading at the same solar radiation level comparing with their initial state value. Horizontal string shaded has the highest negative impact on the power and efficiency among other types of shadings. The comparison between the theoretical and experimental results reveals considerable agreement between the theoretical and experimental results. INTRODUCTION The emission of carbon from the traditional electrical power generation technology is one of the most important reasons that lead to employ the renewable energy as alternative sources of electrical power generation. Solar energy is the best choice for electricity generation [1] because it is clean [2], cheap [3], silent [4], abundant and environmentally friendly energy resources [5]. Among different forms of renewable energy, PV represents one of the most promising renewable energy in the world [6,7] due to its great advantages; such as zero emission of greenhouse gases, low maintenance cost, not rotating or moving parts [8,9]. A PV cell is basically a semiconductor diode with a large barrier layer exposed to light allowing portion of the energy in the light photons arriving at the cell convert directly to DC electrical power [10]. The most notable factors that have a clear impact on the performance of PV system are: Partial shading, dust [5], sand [11], temperature [12], and solar radiation [13]. Partial shading is the most common encountered problem in a PV system. In this phenomenon, the received sunlight is reduced significantly and results in lower system efficiency [14]. In the design stage, the shadow of nearby objects avoided as possible. However, parts of PV module face unexpected shadows; such a sun melted snow, bird dung, fallen leaves, the neighboring buildings, towers, and passing clouds [15,16].When the PV module partially shaded, the shaded cell operates at current levels lower than unshaded cell. As consequence, the shaded cells are forced into reverse bias and begin consuming power instead of generation power. This lead to undesired increase in cell temperature this is lead to localized overheating (hot spot). When this temperature reaches the critical limit, the effected cell (shaded cell) can be damaged [17].Passive and active methods are used for decreasing the power losses which are caused by partial shading. 
In the first method, the bypass diode is connected in antiparallel with the photovoltaic cells in order to pass the current and avoid the destructive impact of shading. Current flow in the diode causes losses in power, therefore, it is not possible to avoid losses completely. On the other hand, the dynamic reconfigurations between the panels of photovoltaic represent second method (active method) [18,19]. H. H. Khaing et al [20]: Studied the effect of different partial shading on four various types of PV modules that involve amorphous thin-film, CIGS thin-film, CdTe thin-film and multi-crystalline PV modules. The module tested along the length side and along the breadth side with different shading rate varied from 10% to 60% from the PV area with increase of 10%. The result showed that shading along the breadth side more effected than shading along it's length side. G. S. Reddy et al. [21] Presented PV module model by MATLAB/ Simulink to analyze their performance under nonuniform irradiance condition. Moreover, Various PV array simulation schematic could be created by the proposed model, and parameters of irradiance of each PV module can be set independently. Different simulation process is conducted and compared for different array configurations (2 series, 3 series, 2series × 2parallel and 3 series × 2 parallel) under nonuniform irradiance conditions of I-V and P-V characteristics curves. M. Abdullah et al [22], investigated the impact of shading on the effectiveness of PV panel. The experiments have been accomplished with a 90-W solar panel under constant and changeable irradiations. Horizontally shaded area which changed from 0 to 80% has been applied to detect the impact of varying irradiation at appointed shading points. The results showed that for each 100W/m2 increase in radiation level leads to enhance the output power by 3.89 W, 3.37 W, 2.27 W, and 2.02W at 0 %, 25%, 50%, and 75% shaded area respectively. In addition, the efficiency has been raised by 0.29 %, 0.27%, 0.25%, and 0.22% at 0 %, 25%, 50%, and 75% shaded area respectively. The drop in output power and efficiency were 12.41W and 2.3% respectively when the shading area increased by 10%. However, this study considers the horizontal shading and ignored the other types of shading. F. Bayrak et al (2017) [23], analyzed thermodynamic and electrical performance under the shading shape and shading ratios on 75 W polycrystalline PV. Horizontal, vertical and single cell shading at different percentage were applied. The results showed that at 100% shading rate, the power losses were 99.98 %, 66.93% and 69.92% for horizontal, vertical and cellular shading respectively. The gap of this research is that the results are obtained from a practical study only without simulating study. Gutierrezet al [24] presented the effortless approach to model and analyze the performance of PV module under partial shading using shading ratio. The shadow opacity and characteristics of shaded [25], investigated the partial shading effect and the critical point that reduced the sensitivity of shading heaviness. Under various numbers of shaded modules and shading heaviness, The P-V characteristics showed that the PV string became impervious to gravity shad when the irradiance of shaded panels arrived a critical point. The results showed that when irradiation on the shaded modules on the PV system was between 1000 and 700 W/m2, the power dropped by about 6.2% for each 100W/m2 drops in the irradiation. 
However, when the irradiation was between 700 and 0 W/m2, each 100 W/m2 drop in the irradiation led to a drop in the power of 0.24%. In [21] and this research, the results obtained from MATLAB/SIMULINK were not validated experimentally. In the present paper, different cases of partial shading with different percentages are analyzed experimentally and theoretically to determine their impact on the power and efficiency of a monocrystalline PV panel with 36 series-connected cells and three bypass diodes. Insulating material is employed as the shading element in different cases with different percentages of shaded area. The I-V characteristics recorded and the P-V characteristics calculated before and after applying the shadow are utilized to find the power loss corresponding to this shadow. MODELING OF PHOTOVOLTAIC MODULE The PV module is the basic unit of a PV power generation system. The PV module has non-linear characteristics which depend on solar radiation and cell temperature. In this paper, a PV module with 36 series-connected solar cells is chosen. Figure 1 shows the PV module model that is employed in this study. Besides, the irradiance (G) and temperature (T) together with the electrical characteristic parameters of the PV module, such as the open circuit voltage (Voc) and short circuit current (Isc), are shown in the figure. The module has an open circuit voltage (Voc) of 23.42 V, a short circuit current (Isc) of 5.192 A and an ideality factor of 1.5, while the series resistance (Rs) is 0 ohm. Different cases are carried out using Matlab/Simulink to determine the I-V and P-V characteristics (Table 1). To develop the PV model, the solar cell block is taken from the SimElectronics block set of Matlab. The parameters of the solar cell are defined in equations (1) and (2) [25]: I = I_Ph - I_rs [exp((V + I R_S)/(N_S A V_T)) - 1] (1), with V_T = k T / q (2). The output power is [26] P = V I (3). The fill factor (FF) represents the ratio of the maximum power to the product of the open circuit voltage and short circuit current [26]: FF = P_max/(V_oc I_sc) = (V_m I_m)/(V_oc I_sc) (4). I, I_Ph and I_rs are the output, photo-generated and diode saturation currents respectively, V is the output voltage, R_S is the series resistance, N_S is the number of cells, V_T is the junction thermal voltage, A is the ideality factor, k is the Boltzmann constant (1.3806503 ×10-23 J/K), T is the cell temperature and q is the electron charge (1.6021765 ×10-19 C). Different cases were conducted as illustrated in Table 2. The adopted PV module includes 36 solar cells which are divided into two groups, each consisting of 18 cells. Different solar irradiations and a constant temperature (25°C) were applied to find their effect on the I-V and P-V curves of the PV model. The shading factor (FS) represents the ratio of the irradiance on the shaded surface (G_T,S) to the irradiance on the unshaded surface (G_T) [27], i.e. FS = G_T,S / G_T. In addition, other cases were considered by applying different irradiation levels on three groups of solar cells (each group of 12 cells) as shown in Table 2; the model in Figure 1 is used in these cases with minor changes. Figure 3 shows the PV module; the vertical string shading with 100% shading area is applied to the model and the results are compared with the experimental results. EXPERIMENTAL SETUP In order to obtain the I-V and P-V characteristics of the PV panel under partial shading, the current and voltage were measured. Solar radiation and ambient temperature affect these characteristics; therefore, it is necessary to measure these parameters. Under environmental conditions, a change in atmospheric conditions may lead to a rapid change in the solar radiation; therefore, the data must be recorded as quickly as possible.
Figure 4 shows the experimental setup, where the experiment was set up in Baghdad (33.33° N latitude and 44.39° E longitude) for collecting the required data. The utilized PV pane is Monocrystalline PV panel with 36 cells connected in series, where each 12-cell connected to one bypass diode. The panel is placed on a mobile metallic holder and it is installed to face the south with an inclination of 31.2. Table 3 explains the electrical characteristics of the PV panel under standard test condition. The required data are radiation intensity, ambient and module temperatures, wind speed and relative humidity therefore; two digital multi-meters used to measure the current and voltage. Thermocouple type k was used for obtaining the panels temperature by using a digital data recorder. Relative humidity and ambient temperatures were measured by using UNI-T UT332 digital Thermo-hygrometer devise. The measurement of wind speed was obtained using wind gage type Kaindl Wind master 2. Solar radiation was measured by solar meter pyrometer type TES 1333R and rheostat (100W, 10 ) were used as load to measure the maximum current and maximum voltage. Many experiments were carried out in clear sky condition during September, 2018 to investigate partial shading on the panel. Different cases with different percentage of shading were applied on the panel by using non-transparent material as shading element to closing as 0, 25, 50, 75, 100% shaded area of horizontal string, vertical String and single cell as illustrated in Figure 5 and 6. Moreover, diagonal shading also investigated as shown in Figure 7. Table3. Electrical characteristic of monocrystalline solar panel When the irradiance on the module is non-uniform as illustrated in Table 2, multi steps in the I-V characteristics and multiple peaks in the P-V characteristics curves are observed. This is because the bypass diodes are activated, where this bypasses the shaded group cells and allow the unshaded group cells to have different P-V characteristics from the shaded group cells. Figure 9 (a) shows the simulation results of the cases study in Table 2. The same figure shows that the short circuit current is negatively affected by decreasing the irradiance levels. This leads to decrease the maximum current and consequently, the Experimental results An experiment was conducted to analyze the performance of PV panels under shading conditions. Figures 5, 6 and photo of Figure 7. Show the cases of shading with different shading area. I. Diagonal shading with multi step (one, two, three, and four steps). Each case were applied on monocrystalline panel which shaded by 25, 50, 75, 100% respectively under radiation changes between (985±7 W/m2). In addition to that, the recorded environmental temperatures were close during measurement days. The outside temperature is 42±2°C while, the module temperature is approximately 66.2±2 °C. In general Isc, Im decreasing with increasing shaded area, consequently Pm decreasing with increasing shading area as showed in Figure 10, Figure 11 and Figure 12. Table 4 presents the Imax, Vmax, Pmax, ΔP, Plosses (ΔP/P (%)) and FF values which are calculated from Figure 10. Fill factor (FF) is decreased from 70.61% to 0.42%. It can. In comparison with current at no shading, the current is decreased by 98%. The maximum power at no shading 63.45W while, at 100% shading conditions, the power was drops to 0.504 W. In the other words, the power decreased by 99.36 % for 100% shading condition. 
In this case of shading, each string in the photovoltaic module affected by shading, therefore, the bypass diodes which connected between them completely disabled. Case II: Vertical string shaded as previous case applied and the results explained in Figure 11. It can be noticed that the power output at 0% shading was 63.62W. While, at 100% shading condition, the power was decreased to 29.1W. In comparison with power at no shading, it mean that the power output dropped by 54.26% at 100% shading condition. At these condition, the F.F decreasing from 69.84% to 48.92%. Case III: In this case single cell shaded by 25%, 50%, 75% and100%, the result clarified in Figure12. It can be noticed that at no shading (0% shading) the current is 4.7A, however, at 100% shading conditions, the current was 2.81A. The same figure showed that the power at no shading is 63.87W, and then dropped to 26.33 W. This means that power decreased by 68.71 % as the shading increased to100%, as compared With power at no shading condition. By comparing the results of the three cases, it can be concluded that the decreasing in maximum power output in the third case is less than the decreasing in the first cases (horizontal and vertical shading). Diagonally shading with multi step (one, two, three, and four steps) investigating at 860 W/m2 radiation level, as illustrated in Figure (5.19a & b). The results of this type of shading appeared that in the first step where no shading applied the Im was 4.38, then it decreased to1.9, 4.11, and 2.11A for second, third and fourth steps. On the other hand he power dropped from 61.32W at no shading to 2.43W for the fourth steps. The comparison is done for two cases random reading for no shading condition and 100% of vertical string shading when solar radiation is 743 Wlm 2 and module temperature is 57℃. The I-V and P-V curves of the collected data that were achieved experimentally and theoretically are explained in the Figures 14 a and b. The error between the experimental and theoretical results is about 3.3 and 6.28% for first and second cases respectively. A comparison between present experimental results of power losses of PV module as a function of shading area with the experimental results of previous studies explained in Figure 15 CONCLUSION In this paper, non-transparent material with different shading position and different percentage of area are applied on monocrystalline solar panel. The importance of the presence of diodes in the PV panels is their ability to divide each panel into several sections. The results showed that the Photon current under partial shading decreases. This reduces Isc and Im, which leads to decrease the output power. Besides, the results reveals that when the radiation changes between 965 to 975 W/m2 and 100% shading condition is considered the power decreases by 99.36 %, 43.7 % and 41.15% for horizontal, single cell, and vertical respectively comparing with their initial value at no shading condition. Further, when considering the diagonal shading with no shading at first step the power is 62.06W. However, the power decreases by 48.56 %, 63.3% and 96.22% for second, third and fourth steps respectively. The parameters of PV panel are simulated by using MATLAB to investigate the partial shading effect at different irradiation levels. The results show noticeable agreement between the experimental and the theoretical results.
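A minimal numerical sketch of equations (1)-(4) for a uniformly irradiated module is given below, using the parameter values quoted in the modelling section (Voc = 23.42 V, Isc = 5.192 A, A = 1.5, Rs = 0, 36 cells). The 25 °C cell temperature, the proportional scaling of the photo-current with irradiance and the omission of bypass diodes and non-uniform shading are simplifying assumptions, so this is an illustration rather than the authors' Simulink model.

import numpy as np

q, k = 1.6021765e-19, 1.3806503e-23
T = 298.15                                   # assumed cell temperature, K
Ns, A, Rs = 36, 1.5, 0.0
Voc, Isc = 23.42, 5.192
Vt = k * T / q                               # junction thermal voltage, Eq. (2)

# Diode saturation current chosen so that the current vanishes at Voc.
Irs = Isc / (np.exp(Voc / (Ns * A * Vt)) - 1.0)

def iv_curve(G, n=500):
    """I-V and P-V points for irradiance G in W/m2; Eq. (1) is explicit when Rs = 0."""
    Iph = Isc * G / 1000.0                   # photo-current assumed proportional to G
    V = np.linspace(0.0, Voc, n)
    I = np.clip(Iph - Irs * (np.exp(V / (Ns * A * Vt)) - 1.0), 0.0, None)
    return V, I, V * I                       # Eq. (3): P = V * I

V, I, P = iv_curve(1000.0)
FF = P.max() / (Voc * Isc)                   # Eq. (4)
_, _, P_half = iv_curve(500.0)               # uniform 50% irradiance, i.e. FS = 0.5
loss = 100.0 * (P.max() - P_half.max()) / P.max()
print(f"Pmax {P.max():.1f} W, FF {FF:.2f}, power loss at FS = 0.5: {loss:.1f} %")

Because the sketch assumes an ideal series resistance and uniform irradiance, its Pmax and FF are optimistic compared with the field measurements reported above; its purpose is only to make the roles of Iph, Irs, FF and FS concrete.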
2020-06-04T09:04:14.353Z
2020-09-01T00:00:00.000
{ "year": 2020, "sha1": "d825830dd6acb606aa61e206e1d57fb1fd092300", "oa_license": "CCBYSA", "oa_url": "http://ijpeds.iaescore.com/index.php/IJPEDS/article/download/20300/13200", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b2769214a601807d08bd7df62c42f1a57fffbc24", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Environmental Science" ] }
7791551
pes2o/s2orc
v3-fos-license
The backward Itô method for the Lagrangian simulation of transport processes Random walk models are a powerful tool for the investigation of transport processes in turbulent flows. However, standard random walk methods are applicable only when the flow velocities and diffusivity are sufficiently smooth functions. In practice there are some regions where the rapid but continuous change in diffusivity may be represented by a discontinuity. The random walk model based on backward Itô calculus can be used for these problems. This model was proposed by LaBolle et al. (2000). The latter is best suited to the problems under consideration. It is then applied to two test cases with discontinuous diffusivity, highlighting the advantages of this method. Introduction The transport of a tracer can be described by using the advection-diffusion equation. In general, this equation cannot be solved analytically, so that numerical methods must be resorted to. The most popular method is an Eulerian approach, in which the transport equation is solved on a fixed spatial grid. The finite element method and finite difference method are primary examples of this class of solution methods. An alternative method is the Lagrangian approach, which follows particles through space at every time step. The movement of an individual particle is usually modeled in two steps: the advection, which is deterministic, is simulated by a translation of each particle with a velocity derived from the local fluid velocity field. Diffusion is generally simulated using stochastic methods. Then, by averaging the positions of many particles the advection-diffusion processes can be described (Thomson, 1987; Sawford, 1993; Costa and Ferreira, 2000; Zimmermann et al., 2001; Proehl et al., 2005; Delhez and Deleersnijder, 2006). Particle-tracking models offer advantages over Eulerian methods in several respects. First, the solution obtained by using the particle tracking method is always mass conservative and non-negative, while Eulerian methods are susceptible to excessive numerical dispersion and artificial oscillations (Zheng and Bennett, 2002) for advection-dominated problems or problems with large gradients in the initial concentration field. Second, for problems where the tracer does not occupy the whole model domain, the Lagrangian models may be computationally more efficient than their Eulerian counterparts (Hunter, 1987; Spivakovskaya et al., 2005). Third, if the velocity field can be locally described by an analytic function, then particles may be advected exactly through that field by simple integration (Hunter et al., 1993). However, it should be noted that a numerical flow can affect the accuracy of the particle tracking method: in this case the interpolation of flow variables at arbitrary particle locations can lead to local mass balance errors and solution anomalies (LaBolle et al., 1996). In general, both approaches have their own advantages and disadvantages: for instance, the Lagrangian approach can be an alternative to the Eulerian methods in the case of a steep concentration profile. On the other hand, the Eulerian approach is more suitable for dispersion-dominated problems, for which it provides an accurate solution in reasonable time. The choice of a method depends on the problem under consideration. Sometimes, it is not easy to classify the problem and decide which method should be applied.
The mixed Eulerian-Lagrangian methods attempt to combine the advantages of Lagrangian and Eulerian methods (Konikow and Bredehoeft, 1978;Celia et al., 1990;Yeh, 1990;Zhang et al., 1993;Zheng and Wang, 1999). For space-varying diffusivity the advection part of the random walk model requires an additional correction term, which is proportional to the diffusivity gradient. Because of this correction term the particles do not accumulate in regions of low diffusivity (Hunter et al., 1993;Visser, 1997;Ross and Sharples , 2004). This random walk model can be introduced by using the theory of stochastic differential equations (SDE) (Heemink, 1990;Dimou and Adams, 1993;Stijnen et al., 2006;Spivakovskaya et al., 2007). The advectiondiffusion equation is interpreted as the Fokker-Planck equation (Oksendal, 1985) and the corresponding SDE inÎto sense can be derived. As a result, the particle's track is simulated by a stochastic process, whose transition density function coincides with the tracer concentration. TheÎto formulation is not the only way to introduce the particle tracking model. Another random walk model based on Stratonovich stochastic calculus is also quite popular. Unfortunately, the common random-walk methods for simulating transport can only be applied when the diffusivity is sufficiently smooth, otherwise the correction term in the advection part dominates the flow velocity. In many situations the rapid but continuous change in turbulence statistics that occurs may be represented by a discontinuity (Thomson et al., 1997). Even without the large gradients, numerical simulation of the flow can result in discontinuities in the velocity field and, therefore, the velocity-dependent dispersion tensor may become discontinuous. One of the method to treat this problem is to interpolate the velocities in order to generate a smooth dispersion field (LaBolle et al., 1996). Recently LaBolle et al. (2000) proposed a random walk model based on backwardÎto calculus that requires no corrective velocity. In this paper, we discuss the random walk models based on Ito, Stratonovich and backwardÎto calculus. The backward Ito random walk model is seen to be appropriate for dealing with a discontinuity in the diffusivity field. It is applied to two test cases, for which key properties of the solutions can be derived analytically. TheÎto, Stratonovich and the backwardÎto random walk models Let us consider the following one-dimensional advectiondiffusion problem: Here C(t, x) is the concentration of a passive tracer, u is flow velocity and k(x) is diffusivity term. Equation (1) can be interpreted as a Fokker-Planck equation (see Karatzas and Shreve, 1998;Oksendal, 1985) and the corresponding Stochastic Differential Equation (SDE) inÎto sense can be considered where W is a Wiener process, i.e. a stochastic process with the following statistics (t 1 ≤t 2 ≤t 3 ≤t 4 ) Here E(X) denotes the expectation of the random variable X. The solution of the advection-diffusion problem (1) is then th e probability density function of the stochastic process X(t). The SDE (2) actually is not a "differential" equation, but can be interpreted as an integral equation The first integral (advection) in the right hand side of (4) is a standard Lebesgue integral, while the second part (diffusion) of (4) may be introduced as the limit of the sum (LaBolle et al. (2000)) Here 0=t 0 <t 1 < . . . t n−1 =t n =t and ms− lim denotes the limit in the mean square sense. 
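For reference, the governing problem and the three stochastic formulations discussed here can be written in the following standard forms (the notation and layout are ours, as an indicative reconstruction consistent with the definitions in the text; the Itô and Stratonovich drifts carry a diffusivity-gradient correction, the backward Itô drift does not):

\begin{align}
\frac{\partial C}{\partial t} + \frac{\partial (uC)}{\partial x}
  &= \frac{\partial}{\partial x}\left(k(x)\,\frac{\partial C}{\partial x}\right), \\
\text{(It\^o)}\quad dX(t) &= \left[u + \frac{\partial k}{\partial x}\right]dt
  + \sqrt{2k(X)}\,dW(t), \\
\text{(Stratonovich)}\quad dX(t) &= \left[u + \tfrac{1}{2}\frac{\partial k}{\partial x}\right]dt
  + \sqrt{2k(X)}\circ dW(t), \\
\text{(backward It\^o)}\quad dX(t) &= u\,dt + \sqrt{2k(X)}\,\hat{d}W(t),
\end{align}

with E[dW] = 0 and E[(dW)^2] = dt for the Wiener increments.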
In general, to define unique stochastic integral one needs to specify at which point the function f (X, t) is evaluated. For instance, in the definition of theÎto integral the function f is always evaluated in the beginning of subinterval [t k−1 , t k ] rendering f (X(t k−1 ), t k−1 ) statistically independent of [W (t k )−W (t k−1 )] and thus ensuring that theÎto integral has zero mean. One well-known alternative, the Stratonovich integral, may be defined as a limit of the sum in which the function is evaluated at the middle of the time interval The corresponding random walk model can be written as follows: The random walk models in theÎto or Stratonovich sense contain the diffusivity gradient in the advection part. For problems with large space variations of the diffusivity, this gradient may be very high and, therefore, dominates in the advection term. As a result, the solution obtained by the random walk model in theÎto or Stratonovich sense will not be accurate. To circumvent this diffusivity, one may have recourse to a random walk model that does not require a diffusivity gradient in the advection part. This formulation is based on the backwardÎto integral (see Karatzas and Shreve (1998); LaBolle et al. (2000)). (bI) Using the backwardÎto SDE for modelling advectiondiffusion processes with discontinuous diffusivity was proposed by LaBolle et al. (2000). The corresponding random walk model may be written as follows: The sensitivity of the limit of the integral sums to the choice of location at which the function is evaluated is a consequence of the unbounded variation of the Wiener process (Karatzas and Shreve, 1998). However, each of the random walk methods introduced above is consistent with the advection-diffusion Eq. (1). For the continuous diffusion term, all these methods provide the same solution of Eq. (1). Remark. The stochastic differential equations in theÎto, Stratanovich and backwardÎto formulations are equivalent in case of smooth coefficients. When the coefficients are discontinuous the convergence of the stochastic differential equation in backwardÎto formulation is not guaranteed. In LaBolle et al. (2000) the convergence of the back-wardÎto stochastic differential equations were proven for the one-dimensional case and demonstrated for two-dimensional case. However, there is no proof of the convergence for the multi-dimensional case. From the mathematical point of view, it would be more correct to use the term "generalized backwardÎto" method to distinguish the difference between the stochastic differential equation with continuous and discontinuous parameters. For the sake of simplicity, we use the term "backwardÎto" in this paper. Numerical integration of the SDEs It can be shown from the advection-diffusion Eq. (1) (see Hunter et al., 1993) that the mean and variance of the tracer cloud spread during time range (t, t + t) are given by Here N i , i=1, 2 denote the ith moment of the concentration. Now we show that the first two moments of the displacement . . , L in the random walk models (inÎ to, Stratonovich and backwardÎto senses) are the same as the first moments of the concentration C. Specific numerical schemes are associated with each of the stochastic methods mentioned above. For instance, the SDE in theÎto sense can be numerically integrated by applying the explicit Euler method: Here, , . . . , L − 1, t=t/L and R i are mutually independent normally distributed random numbers with parameters (0, 1), e.g. 
the random variables with the following density function: f(x) = (1/√(2π)) exp(−x²/2). We need only to find the probability law of the solution X(t) of the SDE (in other words, the solution in the weak sense), but not to approximate the solution itself. For these purposes, it is not necessary to choose normally distributed random variables. We can use any distribution with the same mean and variance, for instance, random numbers uniformly varying between −√3 and √3. The solution obtained by using the random walk model (11) has the same properties. As a result the solution obtained by this random walk model is consistent with the advection-diffusion Eq. (1). The Heun method is more suitable for the Stratonovich formulation of the particle model (Kloeden and Platen, 1999). Let us consider the mean and variance of ΔX_i obtained by method (14): E(ΔX_i) = E[u_i Δt + (1/2) k′(X_i) Δt + …] (15). Let us expand the function B(X_i + …) in a Taylor series around X_i; in other words, the following equation is valid (16). Substituting (16) into (15) yields (17). The variation of the displacement ΔX in the Heun scheme coincides with the variation of the concentration. We can conclude that the random walk model (14) has the same first two moments as a standard random walk model in the Itô sense and as in (10). Finally, the backward Euler scheme is appropriate for the backward Itô formulation (see LaBolle et al., 2000). Using Eq. (16) we can again find the moments of the distribution of X(t) obtained by the backward Euler scheme (21). As a result the solution obtained by the backward Itô random walk model is consistent with the advection-diffusion equation (1). The main differences between the Itô, Stratonovich and the backward Itô formulations are shown in Fig. 1. Illustrations In this section the random walk models (in the Itô and backward Itô senses) are applied to two test cases. In general, the analytical solution of the direct problem (1) cannot be found; however, the residence time of a tracer can be obtained (Delhez et al., 2004; Deleersnijder et al., 2006a, b).
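A minimal sketch of the three one-step updates discussed above is given below: the explicit Euler step for the Itô model, a Heun-type predictor-corrector step for the Stratonovich model, and the two-step scheme of LaBolle et al. (2000) for the backward Itô model. The functions u, k and dk are user-supplied drift, diffusivity and diffusivity-gradient callables, and the same random number is reused within a step, as required; this is an illustrative implementation, not the authors' code.

import numpy as np

rng = np.random.default_rng(0)

def step_ito_euler(x, u, k, dk, dt):
    """Explicit Euler for the Ito model; the drift carries the diffusivity gradient."""
    R = rng.standard_normal()
    return x + (u(x) + dk(x)) * dt + np.sqrt(2.0 * k(x) * dt) * R

def step_stratonovich_heun(x, u, k, dk, dt):
    """Heun-type step for the Stratonovich model (drift u + dk/2)."""
    R = rng.standard_normal()
    drift = u(x) + 0.5 * dk(x)
    x_pred = x + drift * dt + np.sqrt(2.0 * k(x) * dt) * R          # predictor
    b_mean = 0.5 * (np.sqrt(2.0 * k(x)) + np.sqrt(2.0 * k(x_pred))) # averaged noise amplitude
    return x + drift * dt + b_mean * np.sqrt(dt) * R                # corrector, same R

def step_backward_ito(x, u, k, dt):
    """Two-step scheme of LaBolle et al. (2000); no diffusivity gradient is needed."""
    R = rng.standard_normal()
    x_star = x + np.sqrt(2.0 * k(x) * dt) * R                 # preliminary diffusive move
    return x + u(x) * dt + np.sqrt(2.0 * k(x_star) * dt) * R   # same R, k evaluated at x_star

For a smooth diffusivity all three updates reproduce, to leading order, the same mean (u + dk) dt and variance 2 k dt per step; the point of the last form is that it remains usable when dk is unavailable, for instance across a discontinuity.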
Illustrations In this section the random walk models (in the Itô and backward Itô senses) are applied to two test cases. In general, the analytical solution of the direct problem (1) cannot be found; however, the residence time of a tracer can be obtained (Delhez et al., 2004; Deleersnijder et al., 2006a,b). The residence time of a water or tracer parcel in a control domain is usually defined as the time taken by this parcel to leave the domain of interest (Bolin and Rodhe, 1973; Zimmerman, 1976, 1988; Braunschweig et al., 2003; Takeoka, 1984; Delhez and Deleersnijder, 2006). As such, the residence time is one of the most popular tools to describe and understand environmental issues. Mathematically, the mean residence time θ(x) of a tracer of initial mass m(t_0) released at time t_0 can be computed by monitoring the temporal evolution of the mass of the tracer in the control region (Bolin and Rodhe, 1973; Takeoka, 1984). Delhez et al. (2004) introduced an alternative procedure designed for numerical models. They showed that the residence time can be found from the solution of the adjoint problem to the advection-diffusion equation. For both examples, we assume that the diffusivity is discontinuous at some location. Such a diffusivity profile does not exist in nature; however, there are regions of large spatial variations of the diffusivity. The discontinuous diffusivity can be considered as a limit case for which it is generally easier to find the analytical solution. In addition, if the Lagrangian method under consideration can successfully handle a discontinuity in the diffusivity field, it is safe to assume that this method will be able to deal with regions of high gradients of the eddy coefficient. Illustration 1: Settling and diffusion problem First, we apply the random walk model (9) to the settling and diffusion model (Fig. 2) proposed and analyzed by Deleersnijder et al. (2006a,b). In this model we assume that x is a vertical coordinate that increases upwards. It is zero at the interface between the mixed layer and the underlying pycnocline. If h is the height of the mixed layer, the water-air interface is located at x = h. w represents the settling velocity (we assume that w is a constant) and k(x) is the vertical eddy diffusivity, which is positive in the interval 0 < x < h and zero in the pycnocline, i.e. underneath the domain of interest. We suppose that the upper boundary of the domain is impermeable. It is only by settling that the particles of the tracer under study can leave the domain of interest and enter the pycnocline, so the turbulent diffusion flux must be prescribed to be zero at the bottom of the mixed layer. The initial concentration is a point release, written in terms of the Dirac delta function δ(x). Deleersnijder et al. (2006b) showed that the residence time may exhibit a discontinuity at the interface between the mixed layer (0 < x < h) and the pycnocline (x < 0), for the eddy diffusivity is zero in the latter and positive in the former. Now we assume that the boundary of interest is x = ε, rather than x = 0; ε is positive or negative according to whether the boundary is located in the mixed layer or in the pycnocline, respectively. The corresponding residence time is hereinafter denoted θ(x_0, ε), which may be recast as a function of x. There is no closed-form solution for C(t, x), but the residence time may be calculated analytically (Deleersnijder et al., 2006a). For the sake of simplicity, it is assumed that the eddy diffusivity is a positive constant λ in the mixed layer and zero in the pycnocline. It is also desirable to introduce dimensionless variables. From now on, only dimensionless variables will be used, so that it is appropriate to drop the primes.
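Before turning to the limits ε → 0±, it may help to see how the residence time can be estimated directly from particle trajectories. The sketch below is our illustration (the dimensionless parameter values are hypothetical): particles released at x_0 are advanced with a backward-Itô-style step, reflected at the impermeable surface, and the time at which each one sinks below x = ε is recorded.

```python
import numpy as np

def mean_residence_time(x0, eps, w, k, dt, n_particles, t_max, seed=0):
    """Monte Carlo estimate of the mean residence time for the settling and
    diffusion problem: particles released at x0 are tracked with a backward
    Ito style random walk until they sink below the boundary x = eps.
    The surface x = 1 is treated as an impermeable (reflecting) boundary.
    Particles still inside the domain at t_max keep t_max as their exit time."""
    rng = np.random.default_rng(seed)
    x = np.full(n_particles, x0, dtype=float)
    alive = np.ones(n_particles, dtype=bool)
    exit_time = np.full(n_particles, t_max, dtype=float)
    for step in range(1, int(t_max / dt) + 1):
        r = rng.standard_normal(alive.sum())
        xi = x[alive]
        x_pred = xi + np.sqrt(2.0 * k(xi) * dt) * r            # predictor
        xi = xi - w * dt + np.sqrt(2.0 * k(x_pred) * dt) * r   # settle + diffuse
        xi = np.where(xi > 1.0, 2.0 - xi, xi)                  # reflect at the surface
        x[alive] = xi
        just_left = alive.copy()
        just_left[alive] = xi <= eps                           # crossed the lower boundary
        exit_time[just_left] = step * dt
        alive &= ~just_left
        if not alive.any():
            break
    return exit_time.mean()

if __name__ == "__main__":
    # Hypothetical dimensionless parameters, for illustration only.
    k = lambda x: np.where(x > 0.0, 0.2, 0.0)   # constant eddy diffusivity above, zero below
    theta = mean_residence_time(x0=0.5, eps=0.0, w=1.0, k=k,
                                dt=1e-2, n_particles=5_000, t_max=20.0)
    print("estimated mean residence time:", round(theta, 3))
```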
If ε > 0, the lower boundary is located at a level where the eddy diffusivity is nonzero, while ε < 0 corresponds to the case when the lower boundary is located in the pycnocline. Let us assume that the lower boundary of the domain is pushed towards the bottom of the mixed layer, either from above (ε > 0, ε → 0+) or from below (ε < 0, ε → 0−). Deleersnijder et al. (2006b) show that the corresponding residence times are different, in particular for the chosen value of the diffusivity. We apply the Itô and the backward Itô random walk models to simulate the transport of the tracer in the proposed model. The Itô random walk model formulation corresponds to the case when the lower boundary of the domain is placed above the pycnocline, while the backward Itô random walk model provides the solution for the case when the lower boundary of the domain is placed in the pycnocline. The exact and the numerical solutions for N = 10^4 particles are shown in Fig. 3. From Fig. 3 we can conclude that the residence times obtained by applying the Itô and backward Itô random walk schemes are different. One may wonder which scheme provides the right solution. In reality, both methods are correct; however, they give answers to two different problems. In Sect. 3 it was shown that for a smooth diffusivity function both random walk schemes are identical. In the Itô case, an additional drift due to the spatial variation of the diffusivity is present. Because of this additional drift, particles cannot stay in regions with low diffusivity. In the backward Itô formulation the additional drift term has disappeared and is included in the random term by applying the two-step backward Euler scheme. The disadvantage of the Itô formulation is that it cannot handle the case of discontinuous diffusivity. Applying an Itô model in this case, the diffusivity drift is zero everywhere except exactly at the boundary, where it is infinite. In a numerical scheme, particles will never reach exactly the pycnocline and as a result the diffusivity drift becomes essentially zero. Therefore a particle that comes close to the boundary will never go back into the domain (see Fig. 4a), and the residence time computed is in fact the residence time θ(x, 0+). By applying the backward Itô model, the diffusivity drift is included in the random term of the model. Now a particle does get back into the domain even if it is very close to the boundary (see Fig. 4b), so the presence of the pycnocline is taken into account, leading to the residence time θ(x, 0−). Illustration 2: The direct and adjoint problems for the residence time In the previous section we considered a model in which the diffusivity exhibits a discontinuity at the boundary of the domain. However, in practice the diffusivity can change rapidly inside the domain of interest. An example of such a problem is needed; in this respect, inspiration may be found in Delhez and Deleersnijder (2006). Let t and x denote time and a space coordinate, respectively. In the domain −L ≤ x ≤ L, the concentration of the tracer C(t, x) obeys an advection-diffusion problem in which the positive constant u is the fluid velocity, while k(x) > 0 denotes the eddy diffusivity. The residence time in the domain of interest of the tracer whose concentration obeys the partial differential problem (29) is given by Delhez et al. (2004). In principle this value may be evaluated for any admissible value of x_0.
The ensuing function may then be recast as a function of x, i.e. θ(x). However, obtaining the analytical solution of the direct problem (29) is usually considered difficult. Fortunately, it is much easier to obtain the residence time by solving the adjoint problem (Delhez et al., 2004; Delhez and Deleersnijder, 2006). For the purposes of the present study, the eddy diffusivity must exhibit a discontinuity inside the domain of interest. The simplest expression that satisfies this constraint is probably a piecewise constant function, equal to k+ for x > 0 and to k− for x < 0, where k+ and k− are positive constants. Therefore, at x = 0, the residence time must satisfy two matching conditions. In the developments below, the residence time at x = 0 will be denoted θ_0; in other words, the latter satisfies θ_0 = θ(0+) = θ(0−). The zero advection case If the advection is zero (u = 0), then it is appropriate to introduce the dimensionless parameter µ = k+/k− and dimensionless variables; for the sake of simplicity the primes can be dropped. Hence, the dimensionless diffusivity is piecewise constant. After some calculations, the residence time is obtained in closed form. The analytical and numerical solutions obtained by the Itô, Stratonovich and backward Itô random walk methods are shown in Figs. 5 and 6, respectively. Clearly, the backward Itô solution is much better than the Stratonovich solution, which, in turn, is better than that obtained by the classical Itô method. The advection-diffusion case If advection is present (u > 0), then it is appropriate to introduce the following dimensionless parameters and variables: t′ = t/(L/u), (x′, x′_0) = (x, x_0)/L, Pe± = uL/k±, C′ = C/(1/L), (θ′, θ′_0) = (θ, θ_0)/(L/u). As in the previous example the primes can be dropped. It is also useful to define a piecewise constant Peclet number: Pe(x) = Pe+ for 0 < x_0 ≤ 1 and Pe(x) = Pe− for −1 ≤ x_0 < 0. After some calculations, the residence time is obtained, with coefficients a± = (e^{∓Pe±} θ_0 ∓ 1)/(e^{∓Pe±} − 1) and a′± = (±1 − θ_0)/(e^{∓Pe±} − 1), and with θ_0, the residence time at the discontinuity, fixed by the matching conditions and involving (Pe+ − Pe−)/(Pe+ Pe−). Figure 7 shows the analytical solution; the numerical solutions obtained from the Itô, Stratonovich and backward Itô formulations are shown in Fig. 8. One can easily see that only the solution obtained by the backward Itô random walk model is very close to the analytical solution, while the Stratonovich and Itô solutions differ significantly from the exact residence time. Conclusions In this paper we considered a random walk model that can be applied to model transport processes in regions with large spatial variations of the diffusivity, or, as the limit case, with discontinuous diffusivity. This model, proposed by LaBolle et al. (2000), is based on the backward Itô stochastic integral. It is consistent with the advection-diffusion equation and does not contain the diffusivity gradient in the advection part. Two test cases were analyzed: a sinking-diffusion model, in which the diffusivity exhibits a discontinuity at one boundary of the domain, and an advection-diffusion problem with a discontinuity in the diffusivity inside the domain of interest. For both test cases the analytical solution of the indirect problem, i.e. finding the residence time, is known. The backward Itô random walk model was applied and the results show that this model provides the correct results for discontinuous diffusivities, while other, better known, random walk models perform rather poorly.
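As a closing illustration of the advection–diffusion test case, the short sketch below evaluates the piecewise analytical residence time numerically by solving the four linear conditions for the coefficients A± and B±, instead of relying on the closed-form expressions quoted above. The boundary and matching conditions encoded here (θ = 0 at x = ±1, continuity of θ and of k dθ/dx at x = 0) are our reading of the problem, so the result should be checked against the original references.

```python
import numpy as np

def residence_time_profile(pe_plus, pe_minus, n=201):
    """Analytical residence time for the dimensionless advection-diffusion
    problem on -1 <= x <= 1 with a piecewise constant Peclet number.
    On each side theta(x) = -x + A + B exp(-Pe x) solves theta''/Pe + theta' = -1,
    with theta(+1) = theta(-1) = 0 and, at x = 0, continuity of theta and of
    k dtheta/dx (our reading of the two matching conditions)."""
    # Unknowns: [A+, B+, A-, B-]
    M = np.array([
        [1.0, np.exp(-pe_plus), 0.0, 0.0],   # theta+(+1) = 0
        [0.0, 0.0, 1.0, np.exp(pe_minus)],   # theta-(-1) = 0
        [1.0, 1.0, -1.0, -1.0],              # theta continuous at 0
        [0.0, -1.0, 0.0, 1.0],               # k theta' continuous at 0
    ])
    rhs = np.array([1.0, -1.0, 0.0, 1.0 / pe_plus - 1.0 / pe_minus])
    a_p, b_p, a_m, b_m = np.linalg.solve(M, rhs)
    x = np.linspace(-1.0, 1.0, n)
    theta = np.where(x >= 0.0,
                     -x + a_p + b_p * np.exp(-pe_plus * x),
                     -x + a_m + b_m * np.exp(-pe_minus * x))
    return x, theta, a_p + b_p   # theta0: residence time at the discontinuity

if __name__ == "__main__":
    x, theta, theta0 = residence_time_profile(pe_plus=4.0, pe_minus=1.0)
    print("theta0 at the discontinuity:", round(theta0, 4))
```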
2014-10-01T00:00:00.000Z
2007-01-01T00:00:00.000
{ "year": 2007, "sha1": "c4f93e05e0b5641d49e73414622ecb8901789698", "oa_license": "CCBYNCSA", "oa_url": "https://www.ocean-sci.net/3/525/2007/os-3-525-2007.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "344d1767b7bd1d8c67e4b519f6c7b4799efe65d2", "s2fieldsofstudy": [ "Environmental Science", "Physics" ], "extfieldsofstudy": [] }
31580965
pes2o/s2orc
v3-fos-license
Evaluation of Muscle Function of the Extensor Digitorum Longus Muscle Ex vivo and Tibialis Anterior Muscle In situ in Mice Body movements are mainly provided by the mechanical function of skeletal muscle. Skeletal muscle is composed of numerous bundles of myofibers that are sheathed by intramuscular connective tissues. Each myofiber contains many myofibrils that run longitudinally along the length of the myofiber. Myofibrils are the contractile apparatus of muscle and they are composed of repeated contractile units known as sarcomeres. A sarcomere unit contains actin and myosin filaments that are spaced by the Z discs and titin protein. The mechanical function of skeletal muscle is defined by the contractile and passive properties of the muscle. The contractile properties are used to characterize the amount of force generated during muscle contraction, the time of force generation and the time of muscle relaxation. Any factor that affects muscle contraction (such as the interaction between actin and myosin filaments, calcium homeostasis, the ATP/ADP ratio, etc.) influences the contractile properties. The passive properties refer to the elastic and viscous properties (stiffness and viscosity) of the muscle in the absence of contraction. These properties are determined by the extracellular and the intracellular structural components (such as titin) and connective tissues (mainly collagen) 1-2. The contractile and passive properties are two inseparable aspects of muscle function. For example, elbow flexion is accomplished by contraction of muscles in the anterior compartment of the upper arm and passive stretch of muscles in the posterior compartment of the upper arm. To truly understand muscle function, both contractile and passive properties should be studied. The contractile and/or passive mechanical properties of muscle are often compromised in muscle diseases. A good example is Duchenne muscular dystrophy (DMD), a severe muscle wasting disease caused by dystrophin deficiency 3. Dystrophin is a cytoskeletal protein that stabilizes the muscle cell membrane (sarcolemma) during muscle contraction 4. In the absence of dystrophin, the sarcolemma is damaged by the shearing force generated during force transmission. This membrane tearing initiates a chain reaction which leads to muscle cell death and loss of contractile machinery. As a consequence, muscle force is reduced and dead myofibers are replaced by fibrotic tissues 5. This latter change increases muscle stiffness 6. Accurate measurement of these changes provides an important guide to evaluate disease progression and to determine the therapeutic efficacy of novel gene/cell/pharmacological interventions. Here, we present two methods to evaluate both the contractile and passive mechanical properties of the extensor digitorum longus (EDL) muscle and the contractile properties of the tibialis anterior (TA) muscle.
Video Link The video component of this article can be found at http://www.jove.com/video/50183/ Evaluation of the Contractile and Passive Properties of the EDL Muscle Ex vivo The contractile and passive properties of the EDL muscle are measured ex vivo using the Aurora Scientific in vitro muscle test system. Refer to Table 1 for materials and equipment. Equipment preparation 1. Assemble the tissue-organ bath by securing the oxytube to the water-jacket tissue bath. Attach the assembled bath to the muscle mounting apparatus. Connect the gas line to the oxytube. Fasten the water circulation lines to the water-jacket tissue bath and place the needle valve into the bath drainage. 2. Turn on the circulating water-bath and adjust the temperature to 30 °C 7 . Allow 5 PSI (pounds per square inch) of 95% O 2 -5% CO 2 to flow through the oxytube. Fill the bath with Ringer's buffer. Equilibrate the buffer for at least 10 min with a steady gas flow by adjusting the oxytube valve. 3. Turn on the instruments (stimulator, dual-mode lever system, and signal interface). Load the dynamic muscle control (DMC) software according to the manufacturer's instructions. Preparation of the EDL muscle for ex vivo force measurement 1. Anesthetize the mouse with an intraperitoneal injection of 2.5 μl/g body weight of the anesthetic cocktail (refer to the materials section). Throughout the surgical procedure, the depth of sedation is checked by performing a toe pinch. A supplement of 10% of the initial anesthetic dose is administered when needed to keep the animal under anesthesia. Shave the hind limb. Maintain the body core temperature at 37 °C prior to the dissection procedure by placing the mouse on a heating pad. The body temperature is monitored by constantly measuring the rectal temperature using a thermal probe. 2. Position the mouse supine on the dissection board (Figure 1). Peel off the leg skin to expose the hind limb muscles. Secure the leg on the sylgard block using two dressmaker pins, one in the foot and the other in the gracilis muscle. Place a heat lamp above the mouse body to maintain the core body temperature at 37 °C. Constantly superfuse all exposed muscles with Ringer's buffer. Drain excess buffer through a vacuum line. 3. Expose the distal TA tendon and the extensor ligament under a stereomicroscope by dissecting the skin toward the foot. Gently remove the fascia covering the TA muscle. Cut the extensor ligament to release the distal TA tendon. 4. Cut the distal TA tendon and use it to peel off the TA muscle. Carefully remove the TA muscle at its proximal attachment.
Place a thin piece of Ringer's buffer soaked cotton next to the EDL muscle to absorb bleeding caused by the rupture of the TA muscle vasculature. Use the vacuum line to remove excess buffer and blood. 5. Tie a double square knots followed by a loop knot using a bread silk suture at the muscle tendon junction (MTJ) of the distal EDL muscle (Figure 2). Make an incision in the distal portion of the biceps femoris muscle to expose the proximal EDL muscle. Repeat the same set of knots (Figure 2) at the MTJ of the proximal EDL tendon. Attach the lever arm hook to either the proximal or the distal knots with a double square knot using the same suture line. Cut off the remaining suture line. 6. Cut the proximal EDL tendon superior to the proximal suture knot. Lift up the EDL muscle with the hook and cut the vasculature beneath the muscle. Cut the distal EDL tendon inferior to the distal suture knot to remove the EDL muscle from the hind limb. Cover the exposed hind limb with a piece of Ringer's buffer soaked cotton. 7. Attach the hook to the lever arm. Align the muscle vertically between two electrodes. Secure the distal suture line to the fixed post. Lift up the tissue bath to submerge the muscle in Ringer's buffer. Adjust the resting tension to 1.0 g using the dual coarse/fine translation stage and allow the muscle to equilibrate for at least 10 min. Measuring the contractile and passive properties of the EDL muscle Use Table 2 to set up the parameters in the DMC software for each of the following measurements. Analyze the data using the dynamic muscle analysis (DMA) software. Measuring the contractile properties of the EDL muscle 1. Stimulate the EDL muscle three times at 150 Hz with 60 sec apart to stabilize the muscle 8 . 2. Stimulate the EDL muscle at different resting tensions to determine the optimal length (Lo). The optimal length is the length at which muscle develops maximum twitch tension. Allow the muscle to relax for 2 min. 3. Adjust the resting tension to Lo. Measure muscle force at single twitch stimulation. Determine the absolute twitch force (Pt), time to peak tension (TPT) and half relaxation time (½ RT) of the Pt. Allow the muscle to relax for 2 min. 4. Adjust the resting tension to Lo. Measure tetanic muscle force generated at different stimulation frequencies (50, 80, 100, 120, 150 and 200 Hz). Determine the absolute maximal tetanic muscle force (Po) where muscle force reaches the maximal. Measure the TPT and ½ RT of the Po 9 . 5. Allow the muscle to relax for 5 min. Adjust the resting tension to Lo. Apply 10 cycles of eccentric contractions with 2 min rest between cycles. Calculate the relative force loss of the Po after each cycle of eccentric contraction. 6. Detach the EDL muscle from the apparatus and cut the tendons at the suture site. Determine the muscle wet weight and calculate the muscle cross sectional area (CSA) 6,10 . 1.3.2 Measuring the passive properties of the EDL muscle 1. Dissect the contralateral EDL muscle and attach it to the apparatus as described in Section 1.2, steps 2 to 7. 2. Subject the EDL muscle to a six-step stretching protocol where the muscle is strained to 160% Lo with an increment of 10% Lo. Analyze the stress-strain profile 6 . 3. Evaluate the viscous property of the EDL muscle by measuring the stress relaxation rate (SRR) at the following time frames after stretching and holding the muscle at 10% Lo: from peak to 0.1s post-peak (pp), from 0.1 to 0.2s pp, from 0.2 to 0.5s pp, from 0.5 to 1s pp and from 1 to 1.5s pp. 4. 
At the end of the study, euthanize the mouse by cervical dislocation and/or decapitation while the mouse is still under anesthesia. Detach the EDL muscle from the apparatus and cut the tendons at the suture site. Determine the muscle wet weight and calculate the muscle cross sectional area (CSA) 6,10 . Evaluation of the Contractile Properties of the TA Muscle In situ The contractile properties of the TA muscle are measured using the Aurora Scientific in situ muscle test system. Refer to Table 1 for materials and equipment. Equipment preparation 1. Heat up the thermo-controlled animal stage to 37 °C using the circulating water-bath. 2. Turn on the instruments (stimulator, dual-mode lever system, and signal interface). Load the DMC software according to the manufacturer's instructions. Preparation of the TA muscle for in situ force measurement 1. Anesthetize the mouse, shave the hind limb and expose the TA muscle as described in steps 1 to 3 in Section 1.2. 2. Tie a double square knot around the patella ligament using a braided silk suture. Tie a double square knot followed by a loop knot at the MTJ of the distal TA muscle (Figure 2), then tie another double square knot leaving a ~10 mm loop from the distal TA tendon knot using the same suture line. Place the second double square knot on the side of the loop. 3. Remove the pins from the hind limb and position the animal prone. Expose the biceps femoris muscle. Make an incision in the midline to reveal the sciatic nerve. Tie a double square knot around the proximal end of the sciatic nerve. Trim one side of the suture lines and cut the nerve superior to the knot. Gently pull the sciatic nerve toward the knee using the suture line and clear the surrounding connective tissue to free ~5 mm of its length. Do not stretch the nerve during this procedure, and constantly superfuse the nerve with Ringer's buffer. 4. Prepare the contralateral TA muscle as described in steps 1 to 3. Cover one of the exposed hind limbs with a piece of Ringer's buffer soaked cotton. Constantly superfuse both hind limbs with pre-warmed (37 °C) Ringer's buffer. Remove excess buffer through a vacuum line. 5. Position the animal prone on the animal platform. Attach the knee clamp holder to the animal platform and secure both knees to the metal pin with double square knots using the patella ligament suture lines. Pin both feet on the sylgard block using dressmaker pins. Secure the animal platform onto the thermo-controlled stage. Position the heat lamp to maintain the animal core body temperature at 37 °C. 6. Secure the electrode holder to the animal platform and lay the sciatic nerve on the electrode using the suture line. Keep the electrode away from the hind limb muscles. Cut the distal TA tendon of the uncovered hind limb at the MTJ suture site. Attach the distal TA tendon suture loop to the lever arm hook. Cover the exposed hind limb muscle with a warm Ringer's buffer soaked cotton. Measuring the contractile properties of the TA muscle 1. Use Table 2 to set the parameters in the DMC software. Follow the same protocol described in Section 1.3.1 to determine the contractile properties of the TA muscle. Analyze the data using the DMA software. 2. After the contractile property measurement, detach the distal TA tendon suture loop from the lever arm hook. Remove the TA muscle. Determine the muscle wet weight and calculate the CSA 10 . 3. Measure the contractile properties of the contralateral TA muscle according to steps 1 to 3 described above. Euthanize the mouse according to institutional guidelines at the end of the study.
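Both protocols end with the muscle wet weight and the cross sectional area (CSA), which is then used to normalise force. The paper points to its references 6 and 10 for this calculation; the sketch below assumes the commonly used convention CSA = wet mass / (muscle density × Lo × fiber-length ratio), with a density of 1.06 g/cm³ and a fiber-length ratio of about 0.44 for the EDL, so the constants and numbers are illustrative rather than prescriptive.

```python
def muscle_csa_mm2(wet_mass_mg, lo_mm, fiber_length_ratio, density_mg_mm3=1.06):
    """Physiological cross-sectional area (mm^2) from wet mass, optimal length Lo
    and the fiber-length/muscle-length ratio:
        CSA = mass / (density * Lo * ratio)
    1.06 g/cm^3 equals 1.06 mg/mm^3, so the units work out directly."""
    return wet_mass_mg / (density_mg_mm3 * lo_mm * fiber_length_ratio)

def specific_force_n_cm2(absolute_force_mn, csa_mm2):
    """Specific force (N/cm^2) = absolute force / CSA."""
    return (absolute_force_mn / 1000.0) / (csa_mm2 / 100.0)

if __name__ == "__main__":
    # Hypothetical values for a mouse EDL muscle, for illustration only.
    csa = muscle_csa_mm2(wet_mass_mg=10.0, lo_mm=13.0, fiber_length_ratio=0.44)
    spo = specific_force_n_cm2(absolute_force_mn=380.0, csa_mm2=csa)
    print(f"CSA = {csa:.2f} mm^2")
    print(f"sPo = {spo:.1f} N/cm^2")
```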
Representative Results The following results are representative of our previous reports 6,9 . Data are presented as mean ± standard error of the mean. Table 3 shows the morphometric properties of the EDL muscle in normal BL10 and dystrophin-deficient (mdx) mice at 4 to 6 months of age. Figure 4 shows representative contractile and passive properties of the EDL muscle from BL10 and mdx mice. The contractile properties of the EDL muscle are described by the following terms: the specific (absolute force divided by the CSA) twitch force (Figure 4A), the specific maximal tetanic force (Figure 4B), and the TPT and ½ RT of the absolute maximal tetanic force (Figures 4C and D). The TPT and ½ RT can also be calculated from the absolute twitch force. The stress-strain profile (Figure 4E) and the SRR (Figure 4F) are used to describe the passive properties of the EDL muscle. Absence of dystrophin has a significant impact on the contractile and passive properties of the EDL muscle 6,9 . Specific twitch and tetanic forces are significantly reduced in the mdx EDL muscle. The TPT is significantly faster while the ½ RT is significantly slower in the mdx EDL muscle. The stress-strain profile suggests that stiffness is significantly increased in the mdx EDL muscle. The mdx EDL muscle also yields a significantly higher resistance force (passive stress) before reaching the peak stress, while the post-peak stresses decline much faster. Further, the SRR was significantly higher in the mdx EDL muscle compared to that of the BL10 EDL muscle. Statistical analysis Statistical significance between two groups is analyzed by the Student t-test. For statistical significance among multiple groups, one-way or two-way ANOVA followed by Bonferroni post hoc analysis is recommended, using the SAS software (SAS Institute Inc., Cary, NC). A difference is considered significant when p < 0.05. Table 3. Morphometric properties of the EDL muscle. *, the value in mdx mice is significantly different from that of age-matched BL10 mice. Figure 4. Representative results for the contractile and passive properties of the EDL muscle. The contractile properties of the EDL muscle are characterized by the specific twitch force (A), the specific tetanic force (B), the time to peak tension (C) and the half relaxation time (D). The passive properties of the EDL muscle are assessed by the stress-strain profile (E) and the SRR (F). *, mdx mice are significantly different from age-matched BL10 mice. Discussion In this protocol, we have illustrated physiological assays for measuring the contractile and passive properties of the EDL muscle and the contractile properties of the TA muscle. A major concern in muscle physiology studies is the oxygenation of the target muscle. For large muscles (such as the TA muscle), the in situ approach is preferred because oxygen diffusion from Ringer's buffer may not reach the center of the muscle in an in vitro assay. The in situ approach does not disturb the normal blood supply, and hypoxia-associated artificial effects are avoided. The EDL muscle is one of the most commonly used muscles in physiology studies. Adequate oxygenation of the entire muscle can be achieved in an in vitro system because of the small size of the muscle. Further, the in vitro system provides an enclosed environment to manipulate the concentrations of ions (Ca 2+ , Na + and K + ) and chemicals (ATP and glucose) that are necessary for optimal muscle force generation. This offers a great opportunity to study the effect of these variables on force production.
Accurate measurement of the contractile and passive properties of the limb muscle is critical to study skeletal muscle function. Characteristic changes of these properties are often considered as the hallmarks of various muscle diseases. Changes in these parameters are also important indicators to determine whether an experimental therapy is effective or not. Disclosures Open access fees have been paid for by Aurora Scientific.
2017-06-30T17:13:25.601Z
2013-02-09T00:00:00.000
{ "year": 2013, "sha1": "a55fbde19c8cbe5d20901ea6886ecb1f851a7c26", "oa_license": "CCBYNCND", "oa_url": "https://www.jove.com/pdf/50183/evaluation-muscle-function-extensor-digitorum-longus-muscle-ex-vivo", "oa_status": "BRONZE", "pdf_src": "Anansi", "pdf_hash": "48fff8d53aa02f76367c401ea46d8e390117b4b0", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
225041725
pes2o/s2orc
v3-fos-license
Hijacking SARS-CoV-2/ACE2 Receptor Interaction by Natural and Semi-synthetic Steroidal Agents Acting on Functional Pockets on the Receptor Binding Domain The coronavirus disease 2019 (COVID-19) is a respiratory tract infection caused by the severe acute respiratory syndrome coronavirus (SARS)-CoV-2. In light of the urgent need to identify novel approaches to be used in the emergency phase, we have embarked on an exploratory campaign aimed at repurposing natural substances and clinically available drugs as potential anti-SARS-CoV2-2 agents by targeting viral proteins. Here we report on a strategy based on the virtual screening of druggable pockets located in the central β-sheet core of the SARS-CoV-2 Spike's protein receptor binding domain (RBD). By combining an in silico approach and molecular in vitro testing we have been able to identify several triterpenoid/steroidal agents that inhibit interaction of the Spike RBD with the carboxypeptidase domain of the Angiotensin Converting Enzyme (ACE2). In detail, we provide evidence that potential binding sites exist in the RBD of the SARS CoV-2 Spike protein and that occupancy of these pockets reduces the ability of the RBD to bind to the ACE2 consensus in vitro. Naturally occurring and clinically available triterpenoids such as glycyrrhetinic and oleanolic acids, as well as primary and secondary bile acids and their amidated derivatives such as glyco-ursodeoxycholic acid and semi-synthetic derivatives such as obeticholic acid reduces the RBD/ACE2 binding. In aggregate, these results might help to define novel approaches to COVID-19 based on SARS-CoV-2 entry inhibitors. INTRODUCTION The coronavirus disease 2019 (COVID-19) is a respiratory tract infection caused severe acute respiratory syndrome (SARS)-CoV-2, a newly emerged coronavirus first identified in the city of Wuhan in China in December 2019 (Zhu et al., 2020). Globally, as of June 9, 2020 there have been more than ∼7 million confirmed cases of COVID-19, including 404,396 deaths (World Health Organization, 2020) in 216 countries (Fauci et al., 2020). Genetic sequencing SARS-CoV-2 demonstrates that the virus is a betacoronavirus sharing ∼ 80% genetic identity with SARS-CoV andMERS-CoV, identified in 2003 and2012, respectively, and ∼ 96% identity with bat SARS-related CoV (SARS-CoV) RaTG13 (Wang et al., 2020b;Wrapp et al., 2020;Yan et al., 2020). Similarly to the 2003 and 2012 pandemics caused by these viruses (De Wit et al., 2016), the human infection caused by SARS-CoV-2 induces respiratory symptoms whose severity ranges from asymptomatic/poorly symptomatic to life threatening pneumonia and a cytokine related syndrome that might be fatal (Guan et al., 2020;Zou et al., 2020). It is well-established that, similarly to SARS-CoV, SARS-CoV-2 enters the host cells by hijacking the human angiotensin converting enzyme receptor (ACE2) (Gui et al., 2017;Yuan et al., 2017;Walls et al., 2019Walls et al., , 2020Wang et al., 2020b;Yan et al., 2020). The interaction of the virus with ACE2 is mediated by the transmembrane spike (S) glycoprotein, which shares 80% of the amino acid sequence identity with SARS-CoV and 97.2% of sequence homology with the bat SARS-CoV-RaTG13. In the case of SARS-CoV and SARS-CoV-2, the spike glycoprotein (S protein) on the virion surface mediates receptor recognition and membrane fusion (Lu et al., 2020). In the intact virus, the S protein assembles in a trimeric structure protruding from the viral surface. 
Each monomer of the trimeric S protein has a molecular weight of ≈180 kDa and contains two functional subunits, S1 and S2 that mediate, respectively, the attachment to ACE2 and the membrane fusion. The S1 binds to the carboxypeptidase domain of ACE2 with a dissociation constant (Kd) of ∼15 nM (Hoffmann et al., 2020). Structural analysis has demonstrated that the N-and Cterminal portions of S1 fold as two independent domains, Nterminal domain (NTD) and C-terminal domain (CTD), with the latter corresponding to the receptor-binding domain (RBD) (Wang et al., 2020b). According to the high-resolution crystal structure information available so far, the RBD moves like a hinge between two conformations ("up" or "down") to expose or hide the residues binding the ACE2. Within the RBD, there is a receptor binding motif (RBM), containing two binding loops separated by a short β-sheet, which makes the primary contact with the carboxypeptidase domain of ACE2. Importantly, while amino acid alignment studies have shown that the RBD of SARS-CoV-2 shares 73.5% homology with SARS-CoV, the identity of RBM, the most variable region of RBD, is significantly lower (∼ 50%) making it unclear whether the RBMs of the two viruses can induce cross-reactive antibodies. The region outside the RBM is thought to play an important role in maintaining the structural stability of the RBD. The entry of SARS-CoV-2 in the host cells requires the cleavage of the S protein, a process that takes place in two steps. After binding to ACE2, the S protein is cleaved between the S1 and S2 subunits by a camostat-sensitive transmembrane serine protease, TMPRSS2 (Li et al., 2003;Lan et al., 2020;Shang et al., 2020;Wang et al., 2020b). Unlike SARS-CoV, SARS-CoV-2 has a distinct furin cleavage site (Arg-Arg-Ala-Arg) between the S1 and S2 domains, at residues 682-685, which may explain some of the biological differences. This furin cleavage site expands the versatility of SARS-CoV-2 for cleavage by cellular proteases and potentially increases the tropism and transmissibility owing to the wide cellular expression of furin proteases especially in the respiratory tract (Belouzard et al., 2009;Ou et al., 2020). Cleavage at the S1/S2 site is essential to unlock the S2 subunit, which, in turn, drives the membrane fusion. Importantly, a second S2 site of cleavage has been identified at the S2 ′ site which is thought essential to activate the protein for membrane fusion. The spreading of the COVID-19 pandemic and the lack of effective therapies targeting the viral replication have prompted an impressive amount of investigations aimed at targeting several aspects of SARS-CoV-2 biology and viral interaction with ACE2. In this scenario, drug repurposing is a well-established strategy to quickly move already approved or shelved drugs to novel therapeutic targets, bypassing the time-consuming stages of drug development (Ghosh et al., 2020;Khan et al., 2020;Micholas and Jeremy, 2020). This accelerated drug development and validation strategy has led to numerous clinical trials for the treatment of COVID-19 (Li and De Clercq, 2020;Liu et al., 2020). Despite several encouraging results, however, treatment of SARS-CoV-2 infection remains suboptimal and there is an urgent need to identify novel approaches to be used in clinical settings. One of such approaches is to prevent the S protein/ACE2 interaction as a strategy to prevent SARS-CoV-2 entry into target cells. 
Several virtual screening campaigns have already identified small molecules able to bind residues at the interface between the RBD of SARS-CoV-2 S protein and the ACE2 receptor (Ghosh et al., 2020;Micholas and Jeremy, 2020;Senathilake et al., 2020;Utomo et al., 2020;Wang et al., 2020a;Yan et al., 2020;Zhou et al., 2020). In this paper, we have expanded on this area. Our results demonstrate that several potential binding sites exist in the SARS CoV-2 S protein and that the occupancy of these pockets reduces the ability of the S protein RBD to bind to the ACE2 consensus in vitro. Together, these results might help to define novel treatments by using SARS-CoV-2 entry inhibitors. Virtual Screening The electron microscopy (EM) model of SARS-CoV-2 Spike glycoprotein was downloaded from the Protein Data Bank (PDB ID: 6VSB). Missing loops were added from the Swiss-Model web-site (Wrapp et al., 2020). The obtained model was submitted to the Protein Preparation Wizard tool implemented into Maestro ver. 2019 (Schrödinger, 2019) to assign bond orders, adding all hydrogen atoms and adjusting disulfide bonds. The pocket search was performed by using the Fpocket website (Schmidtke et al., 2010). The AutoDock4.2.6 suite (Morris et al., 2009) and the Raccoon2 graphical interface (Forli et al., 2016) were employed to carry out the virtual screening approach using the Lamarckian genetic algorithm (LGA). This hybrid algorithm combines two conformational research methods, the genetic algorithm and the local research. For the first low-accuracy screening, for each of the 2906 drugs, 3 poses were generated using 250,000 steps of genetic algorithm and 300 steps of local search, while in the second high-accuracy screening protocol, we generated 20 poses for each ligand, increasing the number of genetic algorithm steps to 25,000,000. The MGLTools were used to convert both ligands and each pocket into appropriate pdbqt files. Virtual screening was performed on a hybrid CPU/GPU HPC cluster equipped with 2 NVIDIA R Tesla R V100 GPUs and 560 Intel R Xeon R Gold and 64 AMD R EPYC R processors. Each of the six selected RBD pockets were submitted to the AutoGrid4 tool, which calculates, for each bonding pocket, maps (or grids) of interaction, considering the different ligands and receptor-atom types through the definition of a cubic box. Subsequently, for each grid AutoDock4 calculates interaction energies (ADscore) that express the affinity of a given ligand for the receptor. The library of FDA approved drugs has been obtained both from DrugBank (2106 compounds) (Drugbank, 2020) and from the Selleckchem website (FDA-approved Drug Library, 2020) (tot. 2638). Each database was converted to 3D and prepared with the LigPrep tool (Schrödinger, 2019) considering a protonation state at a physiological pH of 7.4. Subsequently, the two libraries were merged and deduplicated with Open Babel (O'Boyle et al., 2011), giving a total amount of 2,906 drugs. The bile acids (BA) focused library was prepared with the same protocol described above. All the images are rendered using UCSF Chimera (Pettersen et al., 2004). Molecular Dynamics (MD) MD simulations were performed using the CUDA version of the AMBER18 suite (Lee et al., 2018) on NVIDIA Titan Xp and K20 GPUs, using the Amber ff14SB force field (Maier et al., 2015) to treat the protein. RBD was then immersed in a preequilibrated octahedral box of TIP3P water and the system was neutralized. 
The system was then minimized using energy gradient convergence criterion set to 0.01 kcal/mol Å 2 in four steps involving: (i) an initial 5,000 minimization steps (2,500 with the steepest descent and 2,500 with the conjugate gradient) of only hydrogen atoms, (ii) 20,000 minimization steps (10,000 with the steepest descent and 10,000 with the conjugate gradient) of water and hydrogen atoms, keeping the solute restrained, (iii) 50,000 minimization steps (25,000 with the steepest descent and 25,000 with the conjugate gradient) of protein side chains, water and hydrogen atoms, (iv) 100,000 (50,000 with the steepest descent and 50,000 with the conjugate gradient) of complete minimization. Successively, the water, ions and protein side chains were thermally equilibrated in three steps: (i) 5 ns of NVT equilibration with the Langevin thermostat by gradually heating from 0K to 300K, while gradually rescaling solute restraints from a force constant of 10 to 1 kcal/mol Å 2 , (ii) 5 ns of NPT equilibration at 1 atm with the Berendsen thermostat, gradually rescaling restraints from 1.0 to 0.1 kcal/mol Å 2 , (ii) 5 ns of NPT equilibration with no restraints. Finally, a production run of 500 ns was performed using a timestep of 2 fs. The SHAKE algorithm was used for those bonds containing hydrogen atoms in conjunction with periodic boundary conditions at constant pressure and temperature, particle mesh Ewald for the treatment of long range electrostatic interactions, and a cutoff of 10 Å for nonbonded interactions. Dynamical Network Analysis The Dynamical Network Analysis was performed on 500 ns long MD trajectories of the RBD domain using the plugin Carma ver. 0.8 (Glykos, 2006) implemented in VMD 1.9.2 (Humphrey et al., 1996), The optimal community distribution is calculated by using the Girvan-Newman algorithm (Girvan and Newman, 2002). Edges between each node (here defined as Cα atoms) were drawn between those nodes whose residues were within a default cut-off distance (4.5 Å) for at least 75% of our MD trajectories. Communities map analysis and representation were obtained using the NetworkView tool, implemented in VMD 1.9.2. ACE2/SARS-CoV-2 Spike Inhibitor Screening Assay Kit We tested the selected compounds (UDCA, T-UDCA, G-UDCA, CDCA, G-CDCA, OCA, BAR501, BAR502, BAR704, betulinic acid, oleanolic acid, glycyrrhetinic acid, potassium canrenoate) using the ACE2: SARS-CoV-2 Spike Inhibitor Screening Assay Kit (BPS Bioscience Cat. number #79936) according to the manufacturer's instructions. All compounds were tested at different concentrations in a range from 0.01 to 100 µM. In addition, a concentration-response curve for the Spike protein (0.1-100 nM) was constructed to confirm a concentration-dependent increase in luminescence. A spike concentration of 5 nM was used for the screening of the compounds. Briefly, thaw ACE2 protein on ice and dilute to 1 µg/ml in PBS. Use 50 µL of ACE solution to coat a 96well nickel-coated plate and incubate 1 h at room temperature with slow shaking. Wash the plate 3 times and incubate for 10 min with a Blocking Buffer. Next, add 10 µL of inhibitor solution containing the selected compound and incubate for 1 h at room temperature with slow shaking. For the "Positive Control" and "Blank, " add 10 µL of inhibitor buffer (5% DMSO solution). After the incubation, thaw SARS-CoV-2 Spike (RBD)-Fc on ice and dilute to 0.25 ng/µL (∼5 nM) in Assay Buffer 1. Add the diluted Spike protein to each well, except to the blank. 
Incubate the reaction for 1 h at room temperature, with slow shaking. After 3 washes and incubation with a Blocking Buffer (10 min), treat the plate with an Anti-mouse-Fc-HRP and incubate for 1 h at room temperature with slow shaking. Finally, add an HRP substrate to the plate to produce chemiluminescence, which then can be measured using FluoStar Omega microplate reader. In another experimental setting, we have tested the selected compounds using the ACE2: SARS-CoV-2 Spike Inhibitor Screening Assay Kit with a slight modification to the protocol. In particular, tested compounds were pre-incubated for 2 h with the Spike-RBD, and immediately afterwards the mix was incubated with ACE2 coated on the 96-well plate. Quantitative Analysis of the Anti-SARS-CoV-2 IgG Antibodies To confirm the validity of the assay used in this study, five remnants of plasma samples used to test levels of anti-SARS CoV2 IgG in post COVID-19 patients were used. The original samples were collected at the blood bank of Azienda Ospedaliera of Perugia from post COVID-19 donors who participate to a program of plasma biobanking. An informed and written consent was signed by donors recruited in this program. The program's protocol included the quantitative analysis of the anti-SARS-CoV-2 IgG antibodies directed against the subunits (S1) and (S2) of the virus spike protein. IgGs were therefore measured by chemiluminescence immunoassay (CLIA) technology (LIAISON R SARS-CoV-2 IgG kit, DiaSorin R , Saluggia, Italy). Leftovers of five samples from this assay of ≈ 40-50 µL whose destiny was to be discharged were used to validate the SARS-CoV-2/ACE2 assay used in our study. While donors have provided a written informed consent for plasma donation as mentioned above, and no blood samples were taken specifically for this study, we (SB and DF) have contacted the five donors whose serum leftovers were used in this study by a phone call and asked the permission to use the sample remnants. The permission was granted by all five donors. We wish to thank all of them for the kind collaboration. Virtual Screening of the FDA-Approved Drug Library With the aim to identify chemical scaffolds capable of inhibiting ACE2/Spike interaction by targeting the RBD of the S1 domain of the SARS-CoV-2 ( Figure 1B), we carried out a virtual screening campaign on an FDA-approved drug library, using the RBD 3D structure obtained from the Protein Data Bank (PDB ID 6SVB; Chain A, residues N331-A520) (Wrapp et al., 2020). Missing regions in the structure were built through the SwissModel webserver (Bertoni et al., 2017). A pocket search was performed with the Fpocket web-server (Le Guilloux et al., 2009), resulting in the identification of ≈ 300 putative pockets on the whole trimeric structure of the S protein. This search was further refined to identify selected pockets in the RBD according to three main factors: (i) the potential druggability, i.e., the possibility of interfering, directly or through an allosteric mechanism, with the interaction with ACE2; (ii) the flexibility degree of the pockets, i.e., excluding pockets defined, even partially, by highly flexible loops, whose coordinates were not defined in the experimental structure; (iii) sequence conservation with respect to SARS-CoV RBD ( Figure 1A). On these bases, 6 pockets were selected on the RBD and numbered according to the Fpocket ranking ( Figures 1A,C). 
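A minimal sketch of the kind of pocket triage described above is given below: a hypothetical table of pocket descriptors is reduced to those pockets whose residues fall inside the RBD (N331–A520) and whose druggability score passes an illustrative cut-off. The column names, the values and the threshold are ours and do not reproduce the actual Fpocket output or the selection criteria used in the paper.

```python
import pandas as pd

# Hypothetical table of pocket descriptors; column names are illustrative only.
pockets = pd.DataFrame({
    "pocket_id":   [1, 5, 12, 48, 113],
    "drug_score":  [0.61, 0.55, 0.18, 0.42, 0.07],
    "min_residue": [340, 370, 20, 438, 700],
    "max_residue": [451, 408, 85, 452, 740],
})

RBD_RANGE = (331, 520)      # residues N331-A520 of the S protein RBD
MIN_DRUGGABILITY = 0.3      # illustrative cut-off, not the paper's criterion

in_rbd = (pockets["min_residue"] >= RBD_RANGE[0]) & (pockets["max_residue"] <= RBD_RANGE[1])
selected = pockets[in_rbd & (pockets["drug_score"] >= MIN_DRUGGABILITY)]
print(selected.sort_values("drug_score", ascending=False).to_string(index=False))
```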
First, these pockets were used for the virtual screening of 2,906 FDA-approved drugs from the DrugBank and the Selleckchem websites, using the AutoDock4.2.6 program (Morris et al., 2009) and the Raccoon2 graphical user interface (Forli et al., 2016). This step was followed by a high-accuracy screening, based on the binding affinity predicted by AutoDock4 (ADscore), with a focus on the results showing an ADscore lower than −6 kcal/mol. These studies allowed the identification of several compounds with steroidal and triterpenoid scaffold, including glycyrrhetinic acid, betulinic acid and the corresponding alcohol (betulin), canrenone and the corresponding open form on the γ-lactone ring as potassium salt (potassium canrenoate), spironolactone and oleanolic acid, showing robust binding selectivity toward the RBD's pocket 1 ( Table 1). Pocket 1, located on the β-sheet in the central core of the RBD, is the less conserved among the screened, presenting five conservative (R346K, S438T, L440I, S442A) and two nonconservative (G445T and L451K) mutations from SARS-CoV-2 to SARS-CoV. Glycyrrhetinic acid, the best compound according to the AD score, binds the pocket through both hydrophobic and polar interactions. The triterpenoid scaffold relied between the hydrophobic side of the β-sheet core of RBD, defined by W436, F374 and the side chain of R509, and L441 on the other side, engaging hydrophobic contacts. In addition, the binding is reinforced by ionic contacts between the carboxyl group with R509, and by hydrogen bonds between the carbonyl group with N440 and the hydroxyl group with S375. Oleanolic acid and betulinic acid showed similar binding modes with the main difference in the carboxylic groups oriented toward the solvent. Finally, potassium canrenoate showed a different orientation of the steroidal system within the binding site, with the carboxylic function weakly bonded to S375 (3.1 Å), and the π-system of rings A and B stacked between W436 and L441 (Figure 2). Because the above mentioned triterpenoids have been identified as natural ligands for two bile acid activated receptors, the Farnesoid-X-Receptor (FXR) and G protein Bile Acid Receptor (GPBAR)-1 (Sepe et al., 2015;De Marino et al., 2019;Fiorucci and Distrutti, 2019), we have further investigated whether mammalian ligands of these receptors were also endowed with the ability to bind the above mentioned RBD's pockets. More specifically, oleanolic, betulinic and ursolic acids have been proved to act as selective and potent GPBAR1 agonists (Sato et al., 2007;Genet et al., 2010;Lo et al., 2016), while glycyrrhetinic acid, the major metabolic component of licorice, and its corresponding saponin, glycyrrhizic acid, have been shown to act as dual FXR and GPBAR1 agonists in transactivation assay , also promoting GLP-1 secretion in type 1-like diabetic rats . Bile acids are steroidal molecules generated in the liver from cholesterol breakdown (Fiorucci and Distrutti, 2019). Primary bile acids include cholic acid (CA) and chenodeoxycholic acid (CDCA), which have been recognized as functioning as the main FXR ligands in humans (Fiorucci and Distrutti, 2019). Secondary bile acids, deoxycholic acid and lithocholic acid (DCA and LCA) generated by intestinal microbiota, are preferential ligands for GPBAR1 (Maruyama et al., 2002;Fiorucci and Distrutti, 2019). 
Ursodeoxycholic acid (UDCA), which is a primary bile acid in mice, but a "tertiary" bile acid found in trace in humans, is, along with CDCA, the only bile acid approved for clinical use, and is a weak agonist for GPBAR1 and considered a neutral or weak antagonist toward FXR (Carino et al., 2019). Taking into account the structural similarity and the ability to bind the same receptor systems, we have carried out an in-depth docking analysis of natural bile acids and their semisynthetic derivatives currently available in therapy or under pre-clinical and clinical development (De Marino et al., 2019) and tested them for their ability to bind the above-mentioned pockets in the RBDs of SARS-CoV-2 S protein ( Table 2). As shown in Table 2, natural bile acids and their semisynthetic derivatives exhibit higher affinity scores for pocket 5. This pocket (Figures 3A-C) included residues bearing to the central β-sheet core but on a different side than pocket 1. The pocket resulted to be very conserved, showing only one mutation, I434L, from SARS-CoV-2 to SARS-CoV. In the binding mode of UDCA, the carboxylic group on the side chain is positioned between K378 and R408 and the steroidal scaffold is placed in a hydrophobic surface defined by the side chains of K378, T376, F377, Y380 and P384. Additionally, the 3β-hydroxyl group on ring A forms Hbonds with the backbone carbonyl of C379. The corresponding glycine and taurine-conjugated derivatives (G-UDCA and T-UDCA, respectively) showed the same ionic interactions of their negatively charged groups with K378 and R408. Albeit the greater length of the side chain, the H-bond with the backbone carbonyl of C379 induces a shift of the steroidal system toward T376, and an additional π-interaction between the electron density of the glycine amide region and the guanidine moiety of R408. This results in a better score for G-UDCA, and a reduction in the case of T-UDCA, likely due to a nonoptimal arrangement of the taurine moiety within the binding pocket. CDCA showed a very similar binding mode, with the only difference that it formed an additional H-bond with the backbone carbonyl of F377 due to the modification in the configuration of the C-7 hydroxyl group (α-oriented in CDCA and β-oriented in UDCA). As for G-UDCA, also G-CDCA established the same H-bonds network of the parent CDCA, while the steroidal core slightly shifted as described for G-UDCA. Interestingly, AD scores of G-UDCA and G-CDCA clearly indicated that the H-bond between the hydroxyl group at C-7 and F377 does not contribute significantly to the binding mode. With respect to CDCA, the introduction of the ethyl group at the C-6 position as in OCA and in BAR704 improves the internal energy of the ligand (−0.27 for CDCA vs. −0.59 and −0.60 kcal/mol for OCA and BAR704, respectively) and further favors the binding (Figure 3B), even if, albeit in close proximity of P384 and Y369, the 6-ethyl group did not show any particular contact within the RBD region. BAR501, a neutral UDCA derivative, with an alcoholic sidechain end group and the ethyl group at C-6 β-oriented showed a very similar binding mode compared to the parent compound, with the side chain hydroxyl group H-bonded to R408. Finally, BAR502, with a one carbon less on the side chain positioned the steroidal core as for G-CDCA, thus allowing the C-23 OH group H-bonding with the side chain hydroxyl group of T376. 
Dynamical Network Analysis To support our hypothesis about the allosteric inhibitory potential of the identified pockets, we performed a dynamical network and community map analysis on 500 ns of molecular dynamics (MD) simulations of the RBD domain. Overall, the network analysis found 12 communities (Com1-Com12) (Figures 4A-C and Table 3). Each community corresponds to a set of residues in the RBD domain that move in concert with each other. By definition, nodes (defined here as the Cα atoms) belonging to the same community are highly interconnected; however, a few nodes (called "critical") may also connect to the edges of different communities, as captured by a metric called betweenness (Figure 4C). In our network analysis, the 12 communities identified are distributed as follows: the RBM region is split into three communities (Com4, Com6, and Com7), with Com4 including the short β-sheet, while Com6 and Com7 include residues of the binding loops G496-Y505 and F456-F490 (Table 3), respectively. Pocket 1 and pocket 5 residues lie mainly in Com11 (Table 3), but a few residues are included in other communities: in particular, pocket 1 residue Y451 falls in Com4 and residues S438 and D442 in Com12, while pocket 5 residues T376, K378, C379 and R408 fall in Com8 and Y380 in Com10. In order to highlight the potential allosteric communication among the different communities, we analyzed the edge betweenness (Figure 4C), which is a measure of the shortest paths between pairs of nodes belonging to two different communities. We found that communities including residues of pocket 1 and pocket 5 indirectly communicate with Com6 and Com7 through Com4. In particular, Com8, Com10, Com11, and Com12, which include most of the residues in both pockets 1 and 5, were connected to Com4, which in turn was strongly connected to Com6 and weakly to Com7, thus indicating at least a strong potential allosteric communication between the pockets and the loops at the receptor interface. In vitro Screening Given the results of the virtual screening, we then investigated whether the agents mentioned in Tables 1, 2 impact on the binding of the S protein to the ACE2 receptor. For this purpose, a Spike/ACE2 Inhibitor Screening Assay Kit was used. The assay is designed for screening and profiling inhibitors of the RBD/ACE2 interaction. To validate the assay, we first constructed a concentration-response curve by adding increasing concentrations of the Spike RBD (0.1-100 nM) and confirmed a concentration-dependent increase of luminescence (n = 5 experiments, Figure 5A). Since the curve was linear in the range from 0.1 to 10 nM, we used the concentration of 5 nM for all the following assays. As illustrated in Figure 5, we found that incubating the Spike RBD with betulinic acid, glycyrrhetinic acid, oleanolic acid, and potassium canrenoate (the active metabolite of spironolactone) results in concentration-dependent reductions of the binding of the Spike RBD to the ACE2 receptor. While all agents effectively reversed the binding at a concentration of 10 µM, betulinic acid and oleanolic acid showed a significant inhibition at a concentration of 0.1 and 1 µM, respectively (n = 3 replicates).
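The inhibition percentages reported here and below are obtained by normalising the assay luminescence to the Spike-only control (arbitrarily set to 100%). A minimal sketch of that calculation, with made-up readings, is shown below; it illustrates the normalisation only and is not the authors' analysis code.

```python
import numpy as np

def percent_inhibition(luminescence, spike_only_control, blank):
    """Percent inhibition of the Spike-RBD/ACE2 interaction: after blank
    subtraction, the Spike-only control is set to 100 % binding."""
    signal = np.asarray(luminescence, dtype=float) - blank
    binding = 100.0 * signal / (spike_only_control - blank)
    return 100.0 - binding

if __name__ == "__main__":
    # Hypothetical raw luminescence readings (arbitrary units) for one compound
    # at 0.1, 1 and 10 uM; the control and blank values are made up as well.
    readings = [92_000, 83_000, 61_000]
    inh = percent_inhibition(readings, spike_only_control=100_000, blank=2_000)
    for conc, value in zip([0.1, 1.0, 10.0], inh):
        print(f"{conc:>5} uM : {value:5.1f} % inhibition")
```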
Because these data demonstrate that betulinic acid and oleanolic acid were effective in inhibiting the binding of the S protein RBD to ACE2, and the two triterpenoids were known for their ability to modulate GPBAR1, we then tested whether natural GPBAR1 bile acids ligands were also effective in reducing the SARS-CoV-2-ACE2 interaction. As illustrated in Figure 6, the secondary bile acid UDCA and its taurine conjugate, T-UDCA, caused a slight and dose dependent inhibition of the bind of the S protein RBD to the ACE2 receptor (Figures 6A,B). G-UDCA, i.e., the main metabolite of UDCA in humans, inhibits the RBD binding to the ACE2 receptor by ∼20% in a concentration dependent manner. Similar concentration dependent effects were observed with CDCA and to a greater extent with its metabolite, G-CDCA (Figure 6D). A combination of UDCA and G-CDCA exerted a slight additive effect, confirming that UDCA itself has a very limited inhibitory activity. Continuing the in vitro screening, we investigated whether the semisynthetic bile acid derivatives obeticholic acid (OCA), BAR704, BAR501, and BAR502, exerted comparable or better effects than G-CDCA. As illustrated in Figure 7, adding OCA to the incubation mixture reduced the binding of SARS-CoV-2 S spike to ACE2 by ≈20%. In contrast, BAR704, a 3-deoxy 6ethyl derivative of CDCA, and a highly selective and potent FXR agonist, was significantly more effective and reduced the binding by ∼40% at the dose of 10 µM. On the other hand, BAR501 and BAR502, alcoholic derivatives of UDCA and CDCA, respectively, were only slightly effective in reducing the binding of S protein RBD to ACE2. To further confirm our results, additional in vitro experiments were carried by pre-incubating the Spike RBD alone with 10 µM of selected compound. As shown in Figure 8, several of the compounds exhibited a greater ability to reduce the interaction between Spike and ACE2 when pre-incubated with Spike-RBD compared with the standard incubation performed in the same experiment (Figures 8A-M, * p < 0.05). In particular, we found that oleanolic and glycyrrhetinic acid reduced the binding of Spike-RBD to ACE2 by 40% when preincubated with the RBD, whereas betulinic acid and potassium canrenoate showed no additional gain (Figures 8A-D, * p < 0.05). Several natural bile acids, such as UDCA, T-UDCA, CDCA and G-CDCA, exerted a greater inhibitory effect when preincubated with Spike reaching ∼45-50% of binding inhibition (Figures 8E-I, * p < 0.05). Among the semisynthetic bile acid derivatives, their pre-incubation with Spike-RBD improved the efficacy of OCA (40%) and BAR502 (45%) (Figures 8J,K, * p < 0.05) and BAR704 that reduced the interaction ACE2/Spike-RBD by 55% (Figure 8L, * p < 0.05). These results suggested that the reduction of Spike-ACE2 interaction is actually due to the binding of tested compounds with the residues of Spike-RBD, thus confirming the molecular docking results. Effects of Plasma Samples From Post-COVID-19 Convalescent Patients on Spike RBD -ACE2 Interaction To confirm the concept that binding the pockets in the central βsheet core of Spike RBD effectively prevents its interaction with the consensus of ACE2 receptor, we then carried out a set of control experiments using remnants of the plasma samples from five donors that have recovered from COVID-19. 
These donors had slightly different titers of anti-SARS-CoV-2 antibodies (See Material and Methods, Table 4), but all the dilutions tested effectively inhibited the Spike RBD binding to ACE2 in our assay system by more than 95%. These data highlight that the test used in this paper correctly identifies the binding of the SARS-CoV-2 RBD to ACE2, but also that the levels of inhibition achieved by our compounds were, as expected, significantly lower than those that could be reached by anti-SARS-CoV-2 antibodies. DISCUSSION In this study we report the results of a virtual screening campaign designed to identify natural and clinically available compounds that might have utility in the prevention/treatment of the SARS-CoV-2 infection. In light of the need for effective therapies to be rapidly tested for preventing or treating COVID-19, we initiated an in silico campaign to identify putative molecular targets that could be exploited to prevent the interaction of the SARS-CoV-2 Spike protein with the cellular machinery hijacked by the virus to enter target cells. To this end, we identified the Spike RBD as a potential pharmacological target. Accordingly, we developed the concept that putative pockets on the surface of the central β-sheet core of the S protein RBD could eventually be exploited to prevent the binding of the virus to ACE2. FIGURE 3 | Graphical representation of the binding mode of the best compounds resulting from the screening in pocket 5. The RBD region is represented as a tan cartoon, while the pocket 5 residues are shown as a transparent surface colored by residue hydrophobicity. Color codes are: dodger blue for the most hydrophilic regions, through white, to orange-red for the most hydrophobic. Our in silico screening has allowed the identification of six potentially druggable pockets, and the virtual screening of the FDA-approved drug library identified steroidal compounds as potential hits against two pockets, namely pocket 1 and pocket 5. Interestingly, high accuracy docking demonstrated that flat steroidal scaffolds (i.e., A/B ring junction in trans configuration; Table 1) prefer pocket 1, while compounds with the A/B junction in cis configuration (such as bile acids; Table 2) show greater affinity for pocket 5. Our in vitro testing has largely confirmed the functional relevance of the two main pockets identified by in silico analyses. One important finding of this study has been that several steroidal molecules were effective inhibitors of the binding of the RBD to ACE2 in vitro. In particular, the most interesting compounds in Table 1, glycyrrhetinic and oleanolic acid, showed good agreement between their docking AD scores and their ability to inhibit the Spike/ACE2 interaction in vitro. The results also suggested that the main determinant of the inhibition efficacy is hydrophobicity, as demonstrated by oleanolic acid, which lacks any charged interaction within the pocket and yet was the most effective inhibitor in the series. Hydrophobicity is also the main determinant of the activity of the bile acids and their semisynthetic derivatives, as demonstrated by CDCA, the corresponding glyco-conjugated derivative (G-CDCA) and its semisynthetic derivatives OCA, BAR704, and BAR502. Indeed, comparing the binding mode and the inhibition efficacy of CDCA and OCA with the related 6-ethyl derivative BAR704 highlighted the critical effect of the 6α-ethyl group on the inhibition activity and the negligible contribution of the 3β-hydroxyl group.
The above positive effect could be explained by considering the internal energy contribution of these ligands to the AD score, as well as the possibility of engaging more hydrophobic contacts. Indeed, the AD score internal energy contribution, significantly higher for the 6-ethyl derivatives, represents a measure of the conformational energy of the bound vs. unbound state of the ligand, thus indicating that the ethyl group facilitates the assumption of the bioactive conformation. FIGURE 7 | The ACE2:SARS-CoV-2 Spike Inhibitor Screening assay was performed as described in the Materials and Methods section. The semi-synthetic bile acid receptor agonists OCA, BAR704, BAR502, and BAR501 were tested at different concentrations (0.1, 1, and 10 µM) to evaluate their ability to inhibit the binding of Spike protein (5 nM) to immobilized ACE2, by using the ACE2:SARS-CoV-2 Spike Inhibitor Screening assay Kit. Luminescence was measured using a Fluo-Star Omega fluorescent microplate reader. Luminescence values of Spike 5 nM were arbitrarily set to 100%. Results are expressed as mean ± standard error. *p < 0.05 vs. Spike 5 nM. Data are the mean ± SE, n = 3. Moreover, the analysis of the binding mode of this compound highlighted that the 6-ethyl in the α-position could establish hydrophobic contacts with P384 and Y369, positioned at a slightly longer distance than the optimum admitted for van der Waals interactions. However, it should be noted that the docking approach considers the protein receptor as rigid and did not allow for mutual adaptation, which is an important process in ligand-receptor binding. In agreement with the docking results, the lower efficacy observed for BAR502 could be explained by a slight change in the binding mode, with a different position of the compound in the pocket in order to allow the hydroxyl group on a shortened side chain to interact with the side chain hydroxyl group of T376. Moreover, the comparison of the binding modes of G-CDCA and G-UDCA supported the hypothesis that the main determinant of the activity is related to the network of hydrophobic interactions more than to the lack of a single hydrogen bond. Indeed, unlike the weakly active UDCA, the steroid core of G-UDCA is shifted towards T376, and the resulting binding mode looks very similar to that of G-CDCA. Finally, the better inhibitory efficacy of BAR501 with respect to UDCA further confirmed the non-essential role of the charged group on the side chain in terms of inhibition activity. Interestingly, the analysis of the binding mode of BAR501 also suggested that the stereochemistry of the ethyl group at C-6 is not pharmacophoric, as the 6β-ethyl group is still able to interact with P384 and Y369. In the present study, we have developed a strategy to target the interaction of the SARS-CoV-2 S protein RBD with the ACE2 receptor.
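The assay readout used throughout these figures is a simple normalisation: luminescence measured in the presence of a compound is expressed relative to the Spike 5 nM control, which is set to 100%. A minimal sketch of that arithmetic, with made-up luminescence values, is shown below.

import numpy as np

# Hypothetical raw luminescence readings (arbitrary units), n = 3 replicates each.
spike_control = np.array([152000.0, 148500.0, 150200.0])   # Spike RBD 5 nM alone -> 100%
with_compound = np.array([91000.0, 88700.0, 93500.0])      # Spike RBD 5 nM + compound (10 uM)

control_mean = spike_control.mean()
percent_binding = 100.0 * with_compound / control_mean      # residual RBD/ACE2 binding
percent_inhibition = 100.0 - percent_binding

sem = lambda x: x.std(ddof=1) / np.sqrt(x.size)
print(f"residual binding : {percent_binding.mean():.1f} +/- {sem(percent_binding):.1f} %")
print(f"inhibition       : {percent_inhibition.mean():.1f} +/- {sem(percent_inhibition):.1f} %")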
As described in the introduction, SARS-CoV-2 enters the target cells by binding the carboxypeptidase domain of the ACE2 receptor, exposing a cleavage site, a hinge region between S1 and S2, to TMPRSSS2, which in turn allows the S2 subunit of the Spike protein to bind with the cell membrane, leading to the virus/host cells membrane fusion and SARS-CoV-2 penetration in to host cells. The two pockets we have identified in the β-sheet core of the Spike RBD appear to be targetable by steroidal molecules and, importantly, we found that both naturally occurring bile acids and their metabolites in humans reduce the binding of Spike's RBD to ACE2. Of interest, natural bile acids, such as UDCA, T-UDCA, CDCA, and G-CDCA, exerted a greater inhibitory effect when preincubated with Spike reaching ∼45-50% of binding inhibition. Importantly, we found that most of the agents tested in this study were agonists of two main bile acid activated receptors, i.e., the Farnesoid-x-Receptor (FXR) and a cell membrane receptor known as GPBAR1. Thus, betulinic acid and oleanolic acid, along with UDCA and its metabolites, BAR501 and BAR502 are effective ligands for GPBAR1. In contrast, glycyrrhetinic acid, CDCA, G-CDCA and T-CDCA, OCA and BAR704 are known for their ability to bind FXR (Festa et al., 2014). The fact that FXR/GPBAR1agonists bind the SARS-CoV-2 RBD is of general interest and deserve further investigations. Of interest, some of these agents have been reported for the potential use as anti-HIV agents (Rezanka et al., 2009), and oleanolic acid has been reported as a broad spectrum entry inhibitor of influenza viruses (Yang et al., 2018). On the other side, betulinic acid has been demonstrated to be useful in reducing inflammation and pulmonary edema induced by influenza virus (Hong et al., 2015), and potassium canrenoate, the main metabolite of spironolactone in vivo, is an anti-aldosteronic/diuretic used in the treatment of hypertensive patients. Finally, several GPBAR1 and FXR ligands, as bile acid derivatives, have been proved to exert beneficial effects in immune disorders (Fiorucci et al., 2018) and among these, BAR501, the first example of a C-6βsubstituted UDCA derivative with potent and selective GPBAR1 activity, has been recently demonstrated as a promising lead in attenuating inflammation and immune dysfunction by shifting the polarization of colonic macrophages from the inflammatory phenotype M1 to the anti-inflammatory phenotype M2, increasing the expression of IL-10 gene transcription in the intestine and enhanced secretion of IL-10 by macrophages (Biagioli et al., 2017). One important observation we have made in this study is that, while two different pockets of Spike RBD are potentially druggable, these are contiguous, and indeed, when we attempted drug combinations, none of these combinations effectively increased the anti-adhesive efficacy in comparison to the single agent. This study has several limitations. First of all, we observed that the anti-adhesive efficacy of hyperimmune plasmas obtained from donors who have recovered from COVID-19 and containing high titles of neutralizing antibodies, in inhibiting the Spike RBD/ACE2 interaction, is close to 99%. This percentage is significantly higher than what we measured with our compounds. 
One possible explanation of this different efficacy can be found in terms of difference in affinity of our compounds with respect to the antibodies but could also be related to the mechanism of allosteric connections suggested by dynamical network and community map analysis. Indeed pockets 1 and 5 resulted tightly connected with the loop G496-Y505, and weakly with the larger loop F456-F490. This suggests that small molecules binding the hydrophobic pockets are less effective than a neutralizing antibody. This also suggests that our pharmacological approach will likely be poorly effective in the presence of a high viral load, and the approach we have developed might have some efficacy only in the case of low viral load. Nevertheless, the mild inhibition efficacy showed by bile acids and their derivatives could pave the way for a further optimization of the binding mode in order to identify additional potential interactions, particularly in pocket 5, which has been demonstrated the least exposed to mutations. Another limitation is that we have not tested the effect of these treatments on viral replication and further studies are needed to clarify this point. In conclusion, in this paper, we report the identification of several potential binding sites in the RBD of the SARS-CoV-2 S protein. Several triterpenoids, such as glycyrrhetinic and oleanolic acids, and natural bile acids and their semisynthetic derivatives have been proven effective in reducing the Spike RBD's adhesion to its ACE2 consensus in vitro. Altogether, these results might help to define novel approaches to COVID-19 by using SARS-CoV-2 entry inhibitors. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS SB and DF provided serum samples. BF, FM, and BC performed virtual screening and analyzed the data. CF and VS performed chemical synthesis. AC, SM, and MB generated the in vitro data and performed the data analysis. AZ, BC, ED, and SF conceived the study. All authors drafted the manuscript and wrote the final submission. ACKNOWLEDGMENTS Authors wish to thank all the donors for the kind collaboration. This manuscript has been released as a PrePrint (Carino et al., 2020).
2020-10-23T13:06:14.640Z
2020-10-23T00:00:00.000
{ "year": 2020, "sha1": "cd6600fc4e956948598edfc74b425a7a81984cf2", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fchem.2020.572885/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cd6600fc4e956948598edfc74b425a7a81984cf2", "s2fieldsofstudy": [ "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
270153903
pes2o/s2orc
v3-fos-license
Colonoscopy Finding: Pseudomembranous Colitis in Chronic Kidney Disease Patient of Introduction Colitis is acute or chronic inflammation that affects the colon.Based on the cause, colitis can be divided into infectious and non-infectious colitis. Infectious colitis is divided into amebic colitis, shigellosis, tuberculous colitis, and pseudomembranous colitis.Non-infectious colitis consists of ulcerative colitis, Crohn's disease, radiation colitis, ischemic colitis, microscopic colitis, and non-specific colitis.The colitis most often found in tropical areas such as Indonesia is infectious colitis.The prevalence of amebic colitis in tropical areas is 50-80%.However, the prevalence of shigellosis, tuberculous colitis, pseudomembranous colitis, and non-specific colitis in Indonesia is not known with certainty.This happens because studies on the epidemiology of colitis in Indonesia are still rarely carried out.Diagnosis of colitis is confirmed through history taking, physical examination, and supporting examinations.However, the clinical symptoms of infectious colitis can be similar to those of Crohn's disease or ulcerative colitis.[3] The human gastrointestinal system is home to most microbes, such as gut microbiota.In conditions of dysbiosis (a condition of imbalance in the microflora population in the gastrointestinal tract), Department and advised to undergo dialysis, but the patient was not ready.Then, the patient was outpatient with the anti-hypertension medication methyl dopa 2x1 tab and nifedipine 1x10 mg. Approximately 2 months before admission to the hospital, the patient complained that both lower legs were becoming increasingly swollen.Her body feels weak.There is no shortness of breath.The patient was readmitted and received a blood transfusion.The patient was educated on dialysis and was still not ready.At approximately 1 month to 8 months of pregnancy, the patient was treated again.The patient complains of shortness of breath and body weakness, and both lower legs becoming increasingly swollen. The patient was advised to terminate the pregnancy with a caesarean section and undergo dialysis.The patient is willing to undergo dialysis.One day after surgery the patient began to complain of diarrhea, frequency 3-4 x/day, slimy and smelly but not bloody. There are no lumps or sores on the anus.The stomach feels tight and painful.Nausea is there. Vomiting is absent. Discussion Chronic kidney disease is associated with dietary restrictions, slow colonic transit, changes in the biochemical environment of the digestive tract, and the use of certain medications such as antibiotics, phosphate binders, and iron-containing compounds. A colonoscopy revealed edematous, hyperemic mucosa and yellowish plaque with fragile walls and bleeding easily, consistent with the picture of pseudomembranous colitis.Histopathological examination showed that the mucosa was lined with single-layer columnar epithelium and goblet cells.The lamina propria is in the form of powdery edematous fibrocollagen connective tissue with inflammatory cells, lymphocytes, plasma cells, eosinophils and neutrophils which extends to the muscularis mucosa, among which intestinal crypts are visible.Impression: Non-specific chronic colitis. 
All these factors contribute to the development of gut dysbiosis in chronic kidney disease patients. Chronic kidney disease sufferers are characterized by decreased consumption of dietary fiber. Indigestible carbohydrates are essential nutrients for the saccharolytic microbiota, and the reduction of these substrates results in decreased production of short-chain fatty acids. Lack of dietary fiber causes an increase in amino nitrogen, which can be converted into uremic toxins by the intestinal microbiota. Chronic kidney disease sufferers are characterized by an imbalance between the saccharolytic (fermentative) and proteolytic (putrefactive) microbiota. This imbalance has detrimental effects on the development of chronic kidney disease [8-10]. In patients with chronic kidney disease, prolonged colonic transit can cause an increase in the number of proteolytic species that contribute to an imbalance between the saccharolytic and proteolytic microbiota. This results in increased production and absorption of the end products of bacterial protein fermentation. Urea is a waste product that accumulates in people with chronic kidney disease. Increased urea in the intestinal lumen causes overgrowth of urease-expressing bacteria. Hydrolysis of urea by intestinal microbes results in the formation of large amounts of ammonia. Ammonia increases the pH of the intestinal lumen and changes the microbiota composition, which leads to dysbiosis. Patients with chronic kidney disease generally receive antibiotics to treat vascular access and other infections. The use of antibiotics results in a decrease in the number of important gut microbiota needed to maintain homeostasis, loss of biodiversity, changes in metabolism, and expansion of pathogens. On the other hand, long-term consumption of phosphate binders and iron-containing compounds can cause changes in the luminal environment of the digestive tract and affect the microbial flora, thereby causing dysbiosis [11-16]. This patient was treated for pseudomembranous colitis suspected of being caused by Clostridium difficile infection. After receiving antibiotic therapy with metronidazole 3x500 mg infusion for 4 days and continued administration of meropenem 1x500 mg injection (dose adjustment) for 6 days, clinical improvement and improved colonoscopy images were obtained. The expected clinical response to therapy is improvement in diarrhea within 1-4 days after antibiotic administration, with resolution within 2 weeks. Recurrence of diarrhea is a difficult clinical problem and occurs in 10-50% of cases, with an overall recurrence risk of approximately 20% [17-20]. Conclusion Pseudomembranous colitis is an inflammatory condition of the colon that is most often caused by Clostridium difficile infection. Visualization of pseudomembranes on colonoscopy strongly supports the diagnosis. Chronic kidney disease predisposes to loss of balance in the intestinal microbial flora (dysbiosis), and, conversely, intestinal dysbiosis influences the development of chronic kidney disease.
2024-06-01T15:32:15.552Z
2024-03-08T00:00:00.000
{ "year": 2024, "sha1": "08556bbd314c81fee4b508505bbea89c0ff4e3b6", "oa_license": "CCBYNCSA", "oa_url": "https://bioscmed.com/index.php/bsm/article/download/996/1153", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b90fdf31b47f8edddd70de5c181b28c853b0b678", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
249561064
pes2o/s2orc
v3-fos-license
The Multi-Omics Analysis Revealed Microbiological Regulation of Rabbit Colon with Diarrhea Fed an Antibiotic-Free Diet Simple Summary After being fed an antibiotic-free diet, some rabbits showed typical diarrhea symptoms. In order to explore the reasons, this study used multiple omics analysis. Bacteroidetes and Proteobacteria were significantly upregulated in the colon of diarrhea rabbits, and the ratio of Firmicutes to Bacteroidetes was decreased. The significantly upregulated differential genes were mainly enriched in the IL-17 signaling pathway and were involved in promoting inflammatory response. The different metabolites were mainly enriched in tryptophan metabolism and bile secretion, which affected the anti-inflammatory function. In addition, Bacteroides is positively correlated with 4-Morpholinobenzoic acid and Diacetoxyscirpenol, which is believed to be an important cause of inflammation. The enrichment of Proteobacteria is also related to the high expression of the IL-17 signaling pathway. Abstract Diarrhea symptoms appeared after antibiotics were banned from animal feed based on the law of the Chinese government in 2020. The colon and its contents were collected and analyzed from diarrheal and healthy rabbits using three omics analyses. The result of the microbial genomic analysis showed that the abundance of Bacteroidetes and Proteobacteria increased significantly (p-value < 0.01). Transcriptomes analysis showed that differentially expressed genes (DEGs) are abundant in the IL-17 signaling pathway and are highly expressed in the pro-inflammatory pathway. The metabolome analysis investigated differential metabolites (DMs) that were mainly enriched in tryptophan metabolism and bile secretion, which were closely related to the absorption and immune function of the colon. The results of correlation analysis showed that Bacteroidetes was positively correlated with 4-Morpholinobenzoic acid, and 4-Morpholinobenzoic acid could aggravate inflammation through its influence on the bile secretion pathway. The enriched DMs L-Tryptophan in the tryptophan metabolism pathway will lead to the functional disorder of inhibiting inflammation by affecting the protein digestion and absorption pathway. Thus, the colonic epithelial cells were damaged, affecting the function of the colon and leading to diarrhea in rabbits. Therefore, the study provided an idea for feed development and a theoretical basis for maintaining intestinal tract fitness in rabbits. Introduction Antibiotics have played an important role in animal feed over the past half-century. It was used to prevent and treat animal infection for a long time, expedite intestinal absorption and digestion, and expedite its production capacity [1]. Long-term addition of antibiotics in feed, but the problem also followed [2]. Long-term feeding the feed containing antibiotics usually leads to drug resistance in animals [3,4]. Moreover, more and more studies proved that the residue and accumulation of antibiotics in all kinds of meats would affect the health of consumers [5]. An increasing number of people are paying attention to the rational use of antibiotics. China's Ministry of Agriculture and Rural Affairs signed a national law banning antibiotics in 2020. Then, typical symptoms of diarrhea appeared in some individuals, and the mortality and feeding cost of rabbitry increased [6]. The intestinal tract is an important organ for defense and immune response, as well as the key to nutrient absorption [7,8]. 
The colon has a special structure and the highest pH, which also leads to a more complex internal environment of the colon tissue [9]. Inflammation of the colon can cause diarrhea [10] and could damage the mucosal structure of the colon epithelium [11]. Damage to the colon structure could affect the composition of the microbial community and then affect the normal absorption function of the colon, resulting in metabolic disorders [12,13]. Multi-omics research methods can reveal the complex process of rabbit nutrient absorption in a more three-dimensional way [14,15]. Thus, this study aimed to explore the mechanism of diarrhea in rabbits caused by colon lesions. The colon tissues and contents were collected and analyzed by microbial group, transcriptome, and non-targeted metabolism group. It provided an idea for the development of healthy feed and a theoretical basis for further study of rabbit diarrhea caused by colonic inflammation. Ethics Statement The experimental procedures in this study have been approved by the Animal Care and Utilization Committee from the College of Animal Science and Technology, Sichuan Agricultural University, China. Animals Feeding Condition The project was performed in the rabbitry, Leibo County (103.57 • E, 28.26 • N), Sichuan Province, China. The rabbits were raised in clean cages with regularly inoculating vaccines. The feed was purchased from a local feed manufacturer. July 2021, the rabbit farm began to use the antibiotic-free diet based on the requirements of national policy. The rabbits appeared to have typical diarrhea symptoms, such as loss of appetite, feces not forming, and feces stench. Samples Collection 40-day-old weaned rabbits were selected. Six female rabbits with typical diarrhea symptoms were selected as the diarrhea group (DIA), and six female rabbits without diarrhea symptoms were selected as the control group (CON). The selected rabbits were fasted for 24 h and were slaughtered by the electric bloodletting method. The colon tissue and its content were taken immediately and were preserved in liquid nitrogen at −80 • C. The RNA and DNA in samples were extracted by conventional methods and sent to Novogene Bioinformatics Technology Co., Ltd. (Beijing, China) for sequencing and preliminary analyses [15,16]. Morphological Section Analysis of Rabbit Colon Some colon tissues were washed with normal saline, fixed with neutral formalin, dehydrated with ethanol, embedded in paraffin, sectioned, and stained with hematoxylin and eosin (HE). The histopathological features, using a CX22 microscope (OLMPUS, Tokyo, Japan), were observed and photographed using Leica microscopic imaging system (DM1000, Leica, Wetzlar, Germany). Microbial Genomic16S rRNA Gene Sequencing and Sequencing Analysis After DNA amplification, purification, and construction of the SMART bell library, the bam file was exported by the SMRT analysis software of PacBio [17]. OTUs (operational taxonomic units) clustering and species classification analysis were performed after distinguishing samples according to barcode [18,19]. The representative sequences of each OTU were annotated to obtain the corresponding species information and relative abundance of species. The diversity and richness of microbial communities in the samples were analyzed by alpha diversity analysis. 
Qiime software (Version 1.9.1, Novogene, Beijing, China) was used to calculate the alpha diversity indices (Shannon index, Chao1 index, and Simpson index), R software was used to analyze the differences between groups in the alpha diversity indices, and the Wilcoxon rank-sum test was used for the comparison. The microbial communities of the two groups of samples were compared and analyzed by beta diversity analysis. The species annotation results of the two groups and the abundance information of the OTUs were combined, and the weighted UniFrac distance was then calculated through the abundance relationship between OTUs of the same classification in different groups. Finally, multivariate statistical methods were used to perform non-metric multidimensional scaling (NMDS) and the multi-response permutation procedure (MRPP). The microorganisms with significant differences between groups at each classification level were identified by t-test (p-value < 0.05). LEfSe was used to find high-dimensional biomarkers, and the microbial populations with the most significant differences were determined with "LDA score > 4" as the standard. RNA-Seq Data and Differential Expression Analysis After the extracted RNA was screened, amplified, and purified, the library was obtained using the NEBNext® Ultra™ Directional RNA Library Prep Kit for Illumina® (San Diego, CA, USA) [20]. The sequencing fragment image data measured by the high-throughput sequencing instrument were transformed into sequence data (reads) by CASAVA base calling to obtain clean reads. Q20, Q30, and GC content of the clean reads were also calculated. In the correlation analysis of data between samples, an R2 of the Pearson correlation coefficient greater than 0.92 indicates ideal sampling and experimental conditions; in the specific operation of the project, the R2 between biological replicate samples should be at least greater than 0.8, otherwise the samples need to be properly explained or retested. According to the FPKM values of all genes in each sample, the correlation coefficients of intra-group and inter-group samples were calculated and a heat map was drawn, which visually displays the differences between groups and the reproducibility of samples within groups. The higher the correlation coefficient between samples, the closer the expression pattern. Then, DESeq2 software (1.20.0) was used to analyze the differential expression of the two comparative combinations. Genes with p-value ≤ 0.05, FDR ≤ 0.01, and |log2 (fold change)| > 1 were considered significantly differentially expressed genes, and the results were shown in the volcano map. Subsequently, GO classification function annotation and KEGG enrichment analysis were performed on these DEGs to obtain further differential enrichment results. Metabolome Analysis The metabolites in the intestinal segment were studied based on LC-MS technology. After liquid nitrogen grinding and centrifugation of the intestinal tissue samples, the supernatant was injected into the ultra-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) system for analysis [21]. Firstly, the raw data of mass spectrometry were imported into Compound Discoverer 3.1 software for spectral processing and database retrieval, and the qualitative and quantitative results of the metabolites were obtained. Then, the quality of the data was controlled to ensure the accuracy and reliability of the data.
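As a concrete illustration of the alpha diversity indices and the Wilcoxon rank-sum comparison described above, a minimal sketch is given below. The OTU count tables are randomly generated stand-ins for the real data, and the formulas are the standard Shannon, Simpson, and bias-corrected Chao1 estimators.

import numpy as np
from scipy.stats import ranksums

def shannon(counts):
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum()

def simpson(counts):
    p = counts / counts.sum()
    return 1.0 - (p ** 2).sum()

def chao1(counts):
    s_obs = (counts > 0).sum()
    f1 = (counts == 1).sum()   # singleton OTUs
    f2 = (counts == 2).sum()   # doubleton OTUs
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

# Hypothetical OTU count tables: rows = samples (6 rabbits per group), columns = OTUs.
rng = np.random.default_rng(1)
dia = rng.poisson(3, size=(6, 300))
con = rng.poisson(5, size=(6, 300))

for name, index in (("Shannon", shannon), ("Simpson", simpson), ("Chao1", chao1)):
    a = [index(s) for s in dia]
    b = [index(s) for s in con]
    stat, p = ranksums(a, b)          # Wilcoxon rank-sum test between DIA and CON
    print(f"{name}: DIA = {np.mean(a):.2f}, CON = {np.mean(b):.2f}, p = {p:.3f}")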
Using high-resolution mass spectrometry (HRMS) technology, we can make the non-target metabolic group as much as possible to detect the molecular characteristic peaks in the sample. The raw data after offline is preprocessed by CD3.1 data processing software. In order to make the identification accurate, we extract the peaks according to the set of ppm, signal-to-noise ratio (S/N), additive ions, and other information and quantify the peak area. Then mzCloud, mzVault, and MassList databases were compared to identify metabolites. Finally, metabolites with a coefficient of variation of less than 30% in QC samples were retained as the final result. The metabolites were compared with KEGG, HMDB, and other databases to obtain the annotation results. Then, a multivariate statistical analysis of metabolites was performed, including principal component analysis (PCA), partial least squares discriminant analysis (PLS-DA), and other methods to establish the relationship between the expression of metabolites and samples [22]. According to the results of Q2 and R2, the model was judged to reveal the differences in metabolic patterns between different groups. KEGG enrichment pathway analysis was performed on the differential metabolites to obtain clearer and more detailed differential analysis results. Association Analysis The association analysis was performed on the composition of colony differences and the results of differential metabolites obtained from 16S rDNA and metagenomic analysis of genus-level differences to compare the association degree between species diversity and metabolites in environmental samples. The value between (−1,1) is the correlation coefficient, which is negatively correlated when the correlation coefficient is less than 0, and positively correlated when the correlation coefficient is greater than 0. The Pearson correlation coefficient rho and p-value of top10 and top20 were calculated, and the scatter plot analysis was drawn with the results of |rho| ≥0.6 and p-value ≤ 0.05, which intuitively showed the results of the correlation scatter plot analysis of the expression of different bacteria and different metabolites. The top100 of genes with significant differences obtained by transcriptome analysis and the top50 of metabolites with significant differences obtained by metabolic progenitor analysis were analyzed based on the Pearson correlation coefficient to compare the correlation between the two. The results are presented in the clustering heat map. When the correlation coefficient (−1,1) is greater than 0, it is positive and negative when the correlation coefficient is less than 0. Then all the differential genes and differential metabolites were compared with the KEGG pathway database at the same time to obtain their common pathway results and determine the main biochemical pathways and signal transduction pathways that differential metabolites and differential genes participate in together. Colon Tissue Sections Colon tissue samples stained with HE are shown in Figure 1. In the DIA group, the colonic mucosa epithelium was exfoliated, part of the intestinal wall was necrotic, and the number of lymphocytes was decreased, presenting typical that is diarrhea. In contrast, the rabbit of the CON group had intact colon structure and no pathological features. Microbial Community Imbalance After alpha diversity analysis, the results are shown in Table 1. 
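Before those results are discussed, the Pearson correlation screen described in the Association Analysis subsection can be sketched as follows. The genus and metabolite tables below are random placeholders; only the filtering rule (|rho| ≥ 0.6 and p ≤ 0.05) reflects the procedure described above.

import numpy as np
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical abundance/intensity tables for the 12 samples (6 DIA + 6 CON).
rng = np.random.default_rng(2)
genera = pd.DataFrame(rng.lognormal(size=(12, 10)),
                      columns=[f"genus_{i}" for i in range(10)])
metabolites = pd.DataFrame(rng.lognormal(size=(12, 20)),
                           columns=[f"metab_{j}" for j in range(20)])

# Keep genus-metabolite pairs with |rho| >= 0.6 and p <= 0.05.
hits = []
for g in genera.columns:
    for m in metabolites.columns:
        rho, p = pearsonr(genera[g], metabolites[m])
        if abs(rho) >= 0.6 and p <= 0.05:
            hits.append((g, m, round(rho, 2), round(p, 4)))

print(pd.DataFrame(hits, columns=["genus", "metabolite", "rho", "p"]))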
All the alpha diversity indices in the DIA group were reduced, and the Shannon index and Chao1 index were significantly reduced (p-value < 0.05), indicating that the richness and evenness of the microbial communities in the DIA group were significantly reduced (p-value < 0.05). The results of the MRPP analysis also showed that the microbial separation between the two groups was obvious. Table 1. Alpha diversity analysis of the intestinal microbiota of the rabbit colon. The notation * indicates significant differences (p < 0.05) among groups. The results of PCoA and NMDS were consistent with the above results. The expected delta value and A value jointly indicated that the difference between groups was greater than that within groups, and a significance < 0.05 indicated that the difference was significant (Table 2). The relative abundance differences of microorganisms between the two groups were compared (Figure 2), and a t-test was performed to identify the species with significant differences (p-value < 0.05) between groups (Table 3). The results showed that Bacteroidetes were the dominant bacteria in the DIA group at the levels of phylum, class, order, family, and genus, with an average proportion of 42%. In addition, Proteobacteria was also significantly enriched in the DIA group, and the difference was significant (p-value < 0.01). Some endemic microorganisms, Peptostreptococcaceae, Synergistetes, and Cyanobacteria, were detected in the DIA group. Firmicutes, Clostridia, and Ruminococcaceae were the dominant flora in the CON group, and the differences were significant (p-value < 0.01). It was also found that Bacteroidetes and Firmicutes were the main flora at the phylum level, but Bacteroidetes > Firmicutes in the DIA group and the opposite in the CON group. Moreover, the dominant flora at the other levels fall under the classification of these two microorganisms. Table 3. T-test analysis of rabbit colon intestinal microflora in the DIA group and CON group at the phylum, class, order, family, and genus levels. LEfSe was used to analyze the differences in microbial communities between the two groups at different classification levels. The results are shown in Figure 3. In the DIA group, Bacteroidetes were significantly enriched at all levels, and Proteobacteria were also significantly enriched at the phylum level. The analysis results of the CON group showed that Ruminococcaceae, Clostridia, Clostridiales, and Firmicutes were significantly enriched. Differential Expression of Genes in Colon The raw data were filtered and checked for sequencing error rate and GC content distribution. In the DIA group, 44,623,012 raw reads and 43,329,060 clean reads were obtained on average. The average Q20 of the clean reads was 97.67%, and the average Q30 was 93.65%. Then, 45,259,949 raw reads and 44,084,909 clean reads were obtained in the CON group, with an average Q20 of 97.67% and Q30 of 93.67%. The results of the correlation analysis between samples (Figure 4) showed that a correlation coefficient R2 > 0.8 was considered to indicate a high correlation between samples. The general correlation of the samples in the CON group was high, with an average R2 of 0.892, while that in the DIA group was 0.706. The sample correlation between the DIA group and the CON group was low, with an average R2 of 0.773. A total of 21,703 DEGs were obtained, as shown in the volcano map (Figure 5). There were 321 DEGs with significant expression differences: 131 DEGs were upregulated, and 190 DEGs were downregulated.
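The DEG counts above follow directly from the thresholds stated in the methods (p-value ≤ 0.05 and |log2(fold change)| > 1). A minimal sketch of that filtering step on a DESeq2-style results table is shown below; the table itself and its column names are hypothetical.

import numpy as np
import pandas as pd

# Hypothetical DESeq2-style output: one row per gene.
rng = np.random.default_rng(3)
res = pd.DataFrame({
    "gene": [f"g{i}" for i in range(21703)],
    "log2FoldChange": rng.normal(0.0, 1.2, 21703),
    "pvalue": rng.uniform(0.0, 1.0, 21703),
})

sig = res[(res["pvalue"] <= 0.05) & (res["log2FoldChange"].abs() > 1)]
up = int((sig["log2FoldChange"] > 1).sum())
down = int((sig["log2FoldChange"] < -1).sum())
print(f"significant DEGs: {len(sig)} (upregulated {up}, downregulated {down})")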
The significantly upregulated DEGs included S100A9, S100A8, MMP1, CXCL8, and other DEGs, while significantly downregulated DEGs included ITGA11, FN1, COL6A6, COL1A1, and other DEGs. The result of GO enrichment analysis indicated that DEGs were mainly enriched in extracellular space, extracellular matrix, and collagen-containing extracellular matrix. The results after GO enrichment analysis and KEGG enrichment analysis of these DEGs were shown (Figures 6 and 7). KEGG enrichment analysis indicated that significantly upregulated DEGs were mainly enriched in the IL-17 signaling pathway. The downregulated DEGs were mainly concentrated in protein digestion and absorption, ECM-receptor interaction, and focal adhesion. Differential Metabolite Analysis The results of metabolite quantitative analysis showed that a total of 1663 metabolites were collected, including 1191 positive metabolites and 472 negative metabolites. The results of the QC sample correlation score (Figure 8) showed that the correlation R2 of both negative and positive metabolites was between 0.989 and 0.992, indicating that the correlation between samples was high and the data accuracy was high. PCA analysis was performed on all samples, and the results of PCA analysis are shown in Figure 8. The cathodic and anodic metabolites of the two groups were significantly separated, and the dispersion of the DIA group was higher. This indicated that diarrhea leads to more complex metabolites in the DIA group. The results of the PLS-DA analysis were consistent with the above results. After screening for differential metabolites, the results showed in a volcano map (Figure 9). There were 651 differential metabolites, including 472 for positive and 179 for negative, and 373 metabolites were significantly upregulated, 278 metabolites were significantly downregulated, 194 metabolites were significantly downregulated, and 84 metabolites were significantly downregulated. The results were presented as KEGG pathways, and the main biochemical metabolic pathways and signal transduction pathways involved in differential metabolites could be determined. Results of the KEGG pathway were shown ( Figure 10). The results showed that the differential metabolites at the positive were mainly enriched in tryptophan metabolism (p-value = 0.013), phenylalanine, tyrosine and tryptophan biosynthesis (p-value = 0.06), cortisol synthesis and secretion (p-value = 0.06), Cushing's syndrome (p-value = 0.06). The differential metabolites of the negative were mainly enriched in bile secretion (p-value = 0.20), biosynthesis of acid (p-value = 0.24) and biosynthesis of unsaturated fatty acids (p-value = 0.26). KEGG pathway comparative analysis was performed on DEGs and DMs. The results of the KEGG pathway analysis (Figure 12) showed that the enrichment in the Arachidonic acid metabolism pathway was extremely significant (p-value < 0.01). Enrichment in protein digestion and absorption pathways was highly significant (p-value < 0.01). Discussion After being transferred to an antibiotic-free diet, we found that some rabbits developed typical diarrhea symptoms such as loss of appetite, dilute feces, and feces stench. The HE-stained intestinal tissue samples showed that damage in the colon was caused by an inflammatory response. Inflammation would lead to abnormal immune response and damage to intestinal epithelial mucosa, thus affecting their functions [11,23]. 
The diversity and evenness of microorganisms in the DIA group decreased significantly, while Bacteroidetes increased significantly and Firmicutes decreased significantly. The ratio of Firmicutes and Bacteroidetes could often reflect the health situation in the colon [24,25]. In healthy individuals, the index of Firmicutes/Bacteroidetes (F/B > 1) is usually high, and an imbalance in the F/B index can lead to an inflammatory response [26,27]. The same phenomenon was found in this study. Individuals with a more balanced index of Firmicutes/Bacteroidetes often have a better ability to absorb nutrients [28,29]. Bacteroidetes plays a role in assisting the absorption of sugars and lipids, which is a kind of Probiotics. However, Bacteroidetes accumulate in large quantities and become pathogenic bacteria [30,31]. In addition, Proteobacteria were significantly enriched in DIA. Studies have shown that there is no significant enrichment in healthy individuals [32]. Proteobacteria enrichment causes an inflammatory response that leads to the abnormal function of colon epithelial cells [33][34][35]. Thus, speculated that Bacteroidetes and Proteobacteria may be an important cause of colon inflammation. Because there is a reciprocal balance between microbes and hosts in healthy individuals, changing the feed formula to adjust the balance of microbial communities mainly composed of Firmicutes and Bacteroidetes can be a way to prevent diarrhea and maintain animal health [26,36]. We found that a large number of significantly upregulated DEGs were enriched in the IL-17 signaling pathway. The interleukin-17 (IL-17) cytokine family, mostly produced by Th17 cells, plays a major role in colon inflammation in mice and humans [37,38]. IL-17a expression has been implicated in many immune diseases and inflammatory responses [39]. IL-17A could activate on a variety of cell types, and the high expression of IL-17A could directly promote the production of pro-inflammatory molecules, so the expression of IL-17A is strictly regulated by the IL-17 signaling pathway [40,41]. S100A9, MMP1, CXCL8, and S100A8 DEGs in the IL-17 signaling pathway were also found and significantly upregulated. Meanwhile, significantly downregulated DEGs were concentrated in protein digestion and absorption, ECM-receptor interaction, and focal adhesion. Moreover, the three pathways were very important and responsible for maintaining intestinal epithelial cell structure and immune response, including some anti-inflammatory functions. Studies showed that inflammation would break the homeostasis of ECM and weaken its ability to be anti-inflammatory, which will further lead to lesions [42,43]. The main function of focal adhesion is to induce neutrophils to relieve inflammation, and neutrophils play a major role in regulating inflammation and tissue repair. Low expression of focal adhesion pathways could lead to neutrophil dysfunction and greatly reduce the inhibitory effect of inflammation [44,45]. After differential enrichment analysis of DMs showed that tryptophan metabolism and bile secretion was the most significant. The metabolites in tryptophan metabolism included indole-3-acetic acid, L-tryptophan, Indole, L-kynurenine, etc. Trp is a biologically essential amino acid that plays an important role in the growth, development, and reproduction of mammals. Moreover, it was an important precursor for the synthesis of metabolites used for neurotransmitters involved in immune and inflammatory responses [46,47]. 
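Returning for a moment to the Firmicutes/Bacteroidetes (F/B) index discussed above: it is simple arithmetic on the phylum-level relative abundances. The numbers below are hypothetical and only illustrate the direction of the shift reported for the DIA group.

# Hypothetical phylum-level relative abundances (fractions of total reads).
dia = {"Bacteroidetes": 0.42, "Firmicutes": 0.30, "Proteobacteria": 0.12}
con = {"Bacteroidetes": 0.28, "Firmicutes": 0.55, "Proteobacteria": 0.04}

for name, group in (("DIA", dia), ("CON", con)):
    fb = group["Firmicutes"] / group["Bacteroidetes"]
    status = "balanced (F/B > 1)" if fb > 1 else "dysbiosis-like (F/B < 1)"
    print(f"{name}: F/B = {fb:.2f} -> {status}")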
Studies showed that indole-3-acetic acid, L-tryptophan, and indole could effectively improve the damage of inflammation to the colon. Moreover, l-tryptophan could also play a role in the NF-κB signaling pathway, which could block transcription and activation of proinflammatory cytokines, and other studies showed that it can reduce the risk of colorectal cancer [48]. Metabolites enriched in the bile pathway were associated with regulating carbohydrate and lipid absorption and energy metabolism [49]. Bile acids (BA) could promote the absorption of vitamins in the intestine and improve the immune function of the body to maintain individual health [50,51]. Lithocholic acid and deoxycholic acid could regulate bile levels and hepatoenteric circulation by activating the Farnesoid X receptor (FXR). One of the characteristics of colonic inflammation is also the decreased expression level of FXR, which increases the pro-inflammatory appearance and the generation of oxidative stress by changing the BA receptor level. It even affects liver cells [52,53]. Meanwhile, lithocholic acid has the function of protecting the epithelial barrier, which plays an important role in inhibiting inflammation [53,54]. It was found that BA-related metabolites were significantly downregulated, and the end of the correlation analysis showed that Bacteroidetes were negatively correlated with these metabolites. Therefore, it was concluded that the enrichment of Bacteroidetes reduced the expression level of the bile pathway, which led to the decrease in digestion and absorption level of colon, weakened its inflammatory inhibition function, and further aggravated the inflammatory reaction [54,55]. Individual metabolic health is closely related to microbial levels. In this study, 4morpholinobenzoic acid, Diacetoxyscirpenol, and Bacteroides are positively correlated (p-value < 0.01) and were enriched in related pathways. In conclusion, we speculated that the enrichment of Bacteroidetes was the main cause of colon inflammation. On the one hand, protein digestion and absorption, ECM-receptor interaction, and focal adhesion were affected, resulting in downregulated expression of these pathways, which weakened nutrient absorption, immune function, and anti-inflammatory effect. These reasons lead to structural damage to the colon epithelium. On the other hand, the IL-17 signaling pathway and other pro-inflammatory pathways were activated, further exacerbating the inflammatory response. Conclusions The result showed that the abundance of Bacteroidetes and Proteobacteria significantly increased (p-value < 0.05), while Firmicutes significantly decreased (p-value < 0.05). The pathway of bile secretion and tryptophan metabolism were enriched with the most DMs, which impaired the absorption and energy conversion functions of the colon, and affected its immune and anti-inflammatory functions. The high expression of the IL-17 signaling pathway aggravates the inflammatory response. We speculated that the main cause might be the chain reaction caused by the imbalance of Bacteroidetes and Firmicutes. This leads to an inflammatory response and damage to the epithelial structure of the colon. Moreover, nutrient absorption, anti-inflammatory, and other functions were impaired. Under the common action of these factors, the rabbits showed typical diarrhea symptoms. Studies showed that proper adjustment of feed formula to restore microbial ecological balance in the colon may effectively prevent diarrhea. 
However, its specific mechanism needs further research and verification. The study provided an idea for feed development and a theoretical basis for maintaining intestinal tract fitness in rabbits. Author Contributions: J.W., conception of the research; S.X., H.F., K.Z., G.C. and Y.L. collected data and conducted the research; Y.C. wrote the manuscript. All authors have read and agreed to the published version of the manuscript. Funding: This study was funded by high-quality and characteristic rabbit breeding materials and method innovation and new variety breeding (breeding research project), key R & D project of Sichuan Province (2021YFYZ0033), and national rabbit industry technology system meat rabbit variety improvement (CARS-43-A-2). Institutional Review Board Statement: The authors confirm that this study was performed in accordance with the Guidelines of Good Experimental Practices adopted by the Institute of Animal Science of the Sichuan Agricultural University, Chengdu, China. All experimental protocols involving animals were approved by the Animal Care and Use Committee for Biological Studies, Sichuan Province, China (DKY-B2019302083). Informed Consent Statement: Not applicable. Data Availability Statement: All the figures and tables used to support the results of this study are included. Conflicts of Interest: No conflict of interest exists in this manuscript.
2022-06-11T15:07:34.585Z
2022-06-01T00:00:00.000
{ "year": 2022, "sha1": "7e23bdd9db9495adf272ff85134c3dd3fb4d2cb1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-2615/12/12/1497/pdf?version=1654702654", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d0eb04c9db10b92b2c5232218b7e57e1d5ba6040", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
6879155
pes2o/s2orc
v3-fos-license
Silicon Improves Maize Photosynthesis in Saline-Alkaline Soils The research aimed to determine the effects of Si application on the photosynthetic characteristics of maize on saline-alkaline soil, including the photosynthetic rate (Pn), stomatal conductance (gs), transpiration rate (E), and intercellular CO2 concentration (Ci) of maize in the field with five levels (0, 45, 90, 150, and 225 kg·ha−1) of Si supply. Experimental results showed that the values of Pn, gs, and Ci of maize were significantly enhanced while the values of E of maize were dramatically decreased by certain doses of silicon fertilizers, which meant that Si application with proper doses significantly increased the photosynthetic efficiency of maize in different growth stages under the stress environment of saline-alkaline soil. The optimal dose of Si application in this experiment was 150 kg·ha−1 Si. It indicated that an increase in maize photosynthesis under saline-alkaline stress took place by Si application with proper doses, which is helpful to improve the growth and yield of maize. Introduction Salinity and sodicity toxicities are worldwide agricultural and ecoenvironmental problems; it is estimated that there are approximately 27 million hectares of salinised soils in China's coastal and inland areas [1]. The saline and sodic soils, among which about 23% of the cultivated lands are saline and 37% are sodic, cover about 10% of total arable lands worldwide [2]. This kind of soil is widespread in arid and semiarid regions of the world and causes severe environmental and agricultural problems [3]. Soil salinity and sodicity, which seriously affect the stages of germination, seedling growth and vigour, vegetative growth, flowering, and fruit set of crops [4], adversely affect crop production in different regions, especially in arid and semiarid regions of the world [5,6]. Supplementing silicon, which is much cheaper than other methods to minimize salinity and sodicity, such as reclamation, water, and drainage, is an alternative way of overcoming the negative effects of salinity and sodicity on plant growth and yield [7]. The beneficial effect of silicon is more evident under stress conditions; this is because silicon is able to protect plants from multiple abiotic and biotic stresses [8]. Silicon can alleviate the adverse effects of salt stress on plants by increasing cell membrane integrity and stability through its ability to stimulate the plants' antioxidant system [9]. Silicon application can moderate the salinity and sodicity stress in plants and plays a multitude of roles in plant existence and crop performance, and silicon is deposited in leaves leading towards decreased transpiration, which hence dilutes the salts accumulated in a saline environment [10]. In general, graminaceous plants accumulate much more silicon in their tissues than other species [11]. Maize (Zea mays L.) is reported as salt susceptible [12]. This study, conducted in field tests, aimed to investigate the effects of silicon fertilizer on the photosynthetic characteristics of maize under the condition of saline-alkaline soil in Northeast China. Measurement of Gas Exchange Parameters. The photosynthetic gas exchange parameters Pn, gs, E, and Ci of the top second fully expanded leaf at the four growth stages in the field were measured with a portable open flow gas exchange system LI-6400 (LI-COR Inc., USA) between 9:00 am and 11:00 am.
The photosynthetically active radiation was 2000 μmol·m−2·s−1, the CO2 concentration was 350 μmol·mol−1, and the leaf temperature was 25 °C [13][14][15]. Statistical Analysis. All the data were analyzed with one-way analysis of variance (ANOVA) procedures using SPSS Version 17.0 for Windows. The differences between means were compared by Duncan's test at the 0.05 significance level. Results Observed from Tables 2, 3, and 4, changes in the parameters net photosynthetic rate (Pn), transpiration rate (E), and stomatal conductance (gs) show that the values measured for these parameters from the big trumpet stage to the milk stage increased first from the beginning stage, the big trumpet stage, and reached their peak values at the silking stage, after which these values decreased as the maize grew. According to the results of Table 5, there is a similar changing pattern among the Si application treatments T1, T2, T3, T4, and T5, where the values of intercellular CO2 concentration (Ci) under the five Si treatments continued decreasing from the big trumpet stage, reached the lowest values of Ci at the grain filling stage, and then slowly increased at the milk stage. Net Photosynthetic Rate (Pn). Results showed that the photosynthetic rate (Pn) in leaves of maize was affected significantly by the Si application treatments and was significantly decreased in the later growth stages (Table 2). From the big trumpet stage to the milk stage, the values of Pn under the treatments of T3, T4, and T5 (90 kg·ha−1 Si, 150 kg·ha−1 Si, and 225 kg·ha−1 Si) in the same growth stage were significantly (p ≤ 0.05) higher than those under the treatments of T1 and T2 (0 kg·ha−1 and 45 kg·ha−1 Si). When the content of Si application reached 90 kg·ha−1 Si, the value of Pn increased significantly (p ≤ 0.05) with the increased dose of Si application. A low level of Si application (T2, 45 kg·ha−1 Si) did not change Pn significantly, and high levels of Si application (T4, 150 kg·ha−1 Si, and T5, 225 kg·ha−1 Si) increased Pn significantly (p ≤ 0.05), but there were no significant differences between the treatments of the 150 kg·ha−1 Si and 225 kg·ha−1 Si doses. Transpiration Rate (E). During each stage from the big trumpet stage to the milk stage, the transpiration rate (E) (Table 3) had higher values under the treatments of T1 (without Si application) and T2 (45 kg·ha−1 Si) than under the treatments of T3, T4, and T5. There were no significant differences between T1 and T2. The values of E significantly (p ≤ 0.05) decreased with the increase of the Si dose from 90 kg·ha−1 Si in each growth stage. Comparing the values of E under the Si applications of T3, T4, and T5 with those under T1, it is shown that, during the big trumpet stage, the former decreased 16.28%, 20.32%, and 20.79%, respectively, compared to those of the latter; during the silking stage, the former decreased 9.44%, 15.63%, and 16.75%, respectively, compared to those of the latter; during the grain filling stage, the former decreased 11.38%, 21.12%, and 22.89%, respectively, compared to those of the latter; during the milk stage, the former decreased 18.33%, 29.70%, and 31.32%, compared to those of the latter. There were no significant differences between the treatments of T1 and T2. So during the four studied stages, the value of E of maize began to decrease significantly (p < 0.05) when the dose of Si application got to the amount of 90 kg·ha−1, after which the values of E decreased dramatically with the increased dose of Si application; there was no significant difference between the treatments of the doses 150 kg·ha−1 and 225 kg·ha−1.
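The Statistical Analysis step described above (one-way ANOVA across the five Si levels, with means separated at the 0.05 level) can be sketched as follows. The readings are invented, and because Duncan's multiple range test is not available in SciPy/statsmodels, Tukey's HSD is used here purely as an illustrative stand-in for the mean-separation step.

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical Pn readings (umol CO2 m-2 s-1), 4 replicate plots per Si level.
rng = np.random.default_rng(4)
assumed_means = {0: 22.0, 45: 22.8, 90: 25.5, 150: 27.9, 225: 28.1}
data = {dose: rng.normal(mu, 0.8, size=4) for dose, mu in assumed_means.items()}

f_stat, p_val = f_oneway(*data.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Mean separation at alpha = 0.05 (Tukey's HSD shown in place of Duncan's test).
values = np.concatenate(list(data.values()))
groups = np.repeat([f"{dose} kg/ha" for dose in assumed_means], 4)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))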
Stomatal Conductance (gs). According to the data (Table 4) of the big trumpet stage and the silking stage, it is shown that the values of stomatal conductance (gs) of maize were increased significantly (p ≤ 0.05) at the higher Si levels of T4 and T5 (150 kg·ha−1 and 225 kg·ha−1) as compared to the lower levels of Si application of T1, T2, and T3 (0 kg·ha−1, 45 kg·ha−1, and 90 kg·ha−1). There were no significant differences between the treatments of T4 and T5, nor among the treatments of T1, T2, and T3. The data (Table 4) showed that, at the grain filling stage and the milk stage, the values of gs of maize were significantly (p < 0.05) increased by Si application at the levels of 90 kg·ha−1 (T3), 150 kg·ha−1 (T4), and 225 kg·ha−1 (T5); the values of gs of maize were significantly enhanced with the increased dose of Si application from 90 kg·ha−1; there were significant differences between T3 and T4, and there was no significant difference between T4 and T5. So at the big trumpet stage and the silking stage, the value of gs of maize began to increase significantly (p < 0.05) when the dose of Si application got to the amount of 150 kg·ha−1; there was no significant difference in the values of gs of maize between the treatments of the doses 150 kg·ha−1 and 225 kg·ha−1; at the grain filling stage and the milk stage, the value of gs of maize began to increase significantly (p < 0.05) when the dose of Si application got to the amount of 90 kg·ha−1, after which the values of gs of maize were enhanced significantly with the increased dose of Si application; there was no significant difference between the treatments of the doses 150 kg·ha−1 and 225 kg·ha−1. Intercellular CO2 Concentration (Ci). From the big trumpet stage to the milk stage, the data (Table 5) showed that the values of intercellular CO2 concentration (Ci) of maize with the Si treatments of T3, T4, and T5 were significantly (p ≤ 0.05) higher than those with T1 and T2; there was no significant difference between the treatments of T1 and T2. The values of Ci began to increase significantly (p ≤ 0.05) when the dose of Si application got to the amount of 90 kg·ha−1, after which the values of Ci were dramatically enhanced with the increased dose of Si application; there were significant (p ≤ 0.05) differences among the treatments of T3, T4, and T5. During the big trumpet stage, comparing the values of Ci under the Si applications of T3, T4, and T5 with those under T1, it is shown that the former increased 13.1%, 17.1%, and 26.3%, respectively, compared to those of the latter; during the silking stage, the former increased 16.6%, 20.6%, and 30.6%, respectively, compared to those of the latter; during the grain filling stage, the former increased 14.1%, 26.5%, and 35.6%, respectively, compared to those of the latter; during the milk stage, the former increased 13.5%, 25.5%, and 33.6% compared to those of the latter. There was no significant difference between the treatments of T1 and T2. So in these four studied stages, the value of Ci of maize began to increase significantly (p < 0.05) when the dose of Si application got to the amount of 90 kg·ha−1, after which the values of Ci increased dramatically with the increase of the dose of Si application. Discussion Under abiotic stresses such as toxicity, salinity, and lodging, silicon is reported to improve the growth of many kinds of higher plants [16]. Silicon can improve the growth of plants under salinity stress [17].
Exogenously applied Si significantly increased the photosynthetic efficiency (Pn) and stomatal conductance (Gs) and increased the internal CO 2 concentration (Ci) in maize under saline conditions [18,19]. Our results showed that, under the condition of saline-alkaline soil, the values of Pn, Gs, and Ci of maize leaves were significantly enhanced by Si application, while Tr decreased with the increase of Si supplied; similar improvements were reported in crops of strawberry, maize, Chinese cabbage, and rice [20][21][22][23]. The main mechanisms by which silicon improves crop growth lie in its functions of stimulating photosynthesis, reducing the plant transpiration rate, and enhancing tissue strength [24]. Our research showed that, in the four studied growth stages, the values of Pn under Si application were significantly enhanced compared with those with no Si application. Similar results were reported in that the addition of Si can enhance the photochemical efficiency of plants under salt stress [25]. The photosynthetic capacities of crops treated with Si can be improved because the size of chloroplasts is enlarged and the number of grana in leaves is increased [26]. Research on crops of barley (Hordeum vulgare L.), rice (Oryza sativa L.), sugarcane (Saccharum officinarum L.), and wheat (Triticum aestivum L.) showed that silicon deposited in leaves helps to improve the potential and efficiency of photosynthesis by opening the angle of leaves, decreasing self-shading, and keeping the leaf erect [27], which plays an important role in increasing the Pn of crops. According to our research, during the studied growth stages, the value of Tr of maize began to decrease significantly (p < 0.05) at the dose of 90 kg⋅ha −1 Si application, and above this dose the values of Tr decreased dramatically with the increased dose of Si application. Through Si application, the plant's internal water stress can be reduced and salt stress can therefore be withstood, as the rate of transpiration is influenced by the amount of Si gel associated with the cellulose in the cell walls of epidermal cells [28]. Similar results showed that water loss in maize can be reduced by Si application because Si can change the morphological structure of leaf epidermal cells [29]. Si deposited in leaves leads towards decreased transpiration and hence dilutes the salts accumulated in a saline environment [30]. Unnecessary water loss can be limited through the epidermis, which is a double layer, as silica combines with cellulose in the epidermal cells of the leaf blade [31]. Another reason to explain the decrease of Tr is that stomatal opening can be influenced by Si [32]. Salt-stressed plants supplied with Si showed WUE values 17% greater than those of salinized plants not supplied with Si, by reducing transpiration [30]. In our research, the value of Gs of maize began to increase significantly when the dose of Si application reached 150 kg⋅ha −1 at big trumpet stage and silking stage; at grain filling stage and milk stage, the value of Gs began to increase significantly when the dose of Si application reached 90 kg⋅ha −1 , after which the values of Gs were enhanced significantly with the increased dose of Si application. Similar reports on rice (Oryza sativa L.) are that Si application can enhance the stomatal conductance of rice plants subjected to salt stress, which shows that silicate can reduce Na uptake via a decline in the transpiration rate [30].
Silicon added to the saline growth medium improved photosynthetic activity [18,19]. Si amendment enhanced the stomatal conductance of rice plants subjected to salt stress, showing that silicate can reduce, via a decline in the transpiration rate, the Na uptake that ultimately results in a reduction in growth and net photosynthesis [10]. It was also reported that Gs can be increased by Si fertilizer, because the increased Gs can regulate gas exchange, increase CO 2 uptake, and subsequently improve the capacity and efficiency of photosynthesis [18,19,[33][34][35][36][37][38][39]. In the four studied growth stages, the value of Pn of maize began to increase significantly at the dose of 90 kg⋅ha −1 Si application, after which the values of Pn increased dramatically with the increase of the Si dose, which showed that the photosynthetic efficiency of leaves can be enhanced by Si application [18]. Si application can prevent the activities of photosynthetic enzymes in mesophyll cells from decreasing [34]. Si application reduces the transpiration rate to restrict Na uptake; as a result, CO 2 intake is enhanced [10]. Under salt stress conditions, the double membranes of chloroplasts disappeared, but membrane integrity was markedly improved in the salt treatment supplemented with Si [35]. The values of Pn can be increased dramatically by Si application under salt stress, which is a stressful environment with the accumulation of ROS, such as superoxide radicals (O 2 •− ), hydroxyl radicals (OH − ), and hydrogen peroxide (H 2 O 2 ), and the activity of the defense system affected by salinity stress may be enhanced by Si application [10,19,[36][37][38][39][40][41][42][43].

Conclusion
Silicon plays an important role in enhancing the photosynthesis ability and efficiency of plants under salinity stress [10]. The field research showed that the values of photosynthetic rate (Pn), stomatal conductance (Gs), transpiration rate (Tr), and intercellular CO 2 concentration (Ci) of maize plants in saline-alkaline soil were affected by Si application and that salinity stress can be alleviated by Si application. In our research, the optimal dose of Si application on saline-alkaline soil was 150 kg⋅ha −1 , under which the photosynthetic ability of maize was greatly increased at the studied growth stages. So, in this research, the increase in maize photosynthesis under saline-alkaline stress took place through Si application with proper doses, which is helpful to improve the growth and yield of cereal crops [29].
2018-04-03T03:34:09.691Z
2015-01-05T00:00:00.000
{ "year": 2015, "sha1": "12b389a3a0b514327cea992e4f4ba9709b6cb20e", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/tswj/2015/245072.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e3e9072f90b4bd628bf08b5784c38e4a2e4c0a0d", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
16185339
pes2o/s2orc
v3-fos-license
Histone H2AX phosphorylation as a measure of DNA double-strand breaks and a marker of environmental stress and disease activity in lupus Objective Defective or inefficient DNA double-strand break (DSB) repair results in failure to preserve genomic integrity leading to apoptotic cell death, a hallmark of systemic lupus erythematosus (SLE). Compelling evidence linked environmental factors that increase oxidative stress with SLE risk and the formation of DSBs. In this study, we sought to further explore genotoxic stress sensitivity in SLE by investigating DSB accumulation as a marker linking the effect of environmental stressors and the chromatin microenvironment. Methods DSBs were quantified in peripheral blood mononuclear cell subsets from patients with SLE, healthy controls, and patients with rheumatoid arthritis (RA) by measuring phosphorylated H2AX (phospho-H2AX) levels with flow cytometry. Phospho-H2AX levels were assessed in G0/G1, S and G2 cell-cycle phases using propidium iodide staining, and after oxidative stress using 0.5 µM hydrogen peroxide exposure for 0, 2, 5, 10, 30 and 60 min. Results DSB levels were significantly increased in CD4+ T cells, CD8+ T cells and monocytes in SLE compared with healthy controls (p=2.16×10−4, 1.68×10−3 and 4.74×10−3, respectively) and RA (p=1.05×10−3, 1.78×10−3 and 2.43×10−2, respectively). This increase in DSBs in SLE was independent of the cell-cycle phase, and correlated with disease activity. In CD4+ T cells, CD8+ T cells and monocytes, oxidative stress exposure induced significantly higher DSB accumulation in SLE compared with healthy controls (60 min; p=1.64×10−6, 8.11×10−7 and 2.04×10−3, respectively). Conclusions Our data indicate that SLE T cells and monocytes have increased baseline DSB levels and an increased sensitivity to acquiring DSBs in response to oxidative stress. Although the mechanism underlying DSB sensitivity in SLE requires further investigation, accumulation of DSB may serve a biomarker for disease activity in SLE and help explain increased apoptotic cell accumulation in this disease. INTRODUCTION Systemic lupus erythematosus (SLE or lupus) is a complex disease characterised by autoantibody production and the involvement of multiple organ systems. Although the pathogenesis of SLE remains unclear, there is a well-established genetic predisposition to the disease with ∼60 genetic risk loci confirmed to date. [1][2][3] Several environmental factors are also known to increase the risk for SLE, including silica dust exposure, mercury, ionising radiation, ultraviolet light and oxidative stress. 4 The overwhelmingly unifying factor among almost all environmental exposures known to induce SLE is the potential to be genotoxic by producing reactive oxygen intermediates and double-strand DNA breaks. Double-strand breaks (DSBs) represent a severe form of DNA damage that can cause structural instability, genetic mutations and cell apoptosis if the damage is not effectively repaired. 5 6 Defective DSB repair has recently been associated with SLE with evidence for inefficient or delayed repair of DSBs in Epstein-Barr virus (EBV)-transformed B-cell lines derived from patients with paediatric SLE. 7 In addition, both intrinsic and genotoxic stress-induced DSB accumulation were shown to be increased in peripheral blood KEY MESSAGES ▸ Double-strand DNA breaks are increased in lupus T cells and monocytes independent of the cell-cycle phase. 
▸ Double-strand DNA breaks accumulation, assessed using flow cytometry quantifying phosphorylated histone H2AX, correlates with disease activity in lupus. ▸ Lupus T cells and monocytes are more sensitive to the genotoxic effect of oxidative stress compared with healthy controls. ▸ Impaired double-strand DNA break repair in lupus might contribute to increased apoptosis, a hallmark of the disease. mononuclear cells (PBMCs) from patients with lupus nephritis. 8 In this study, we sought to further investigate the relationship between SLE and DSB accumulation through in-depth analyses in primary CD4+ T cells, CD8+ T cells and monocytes. DSB levels were assessed by quantifying phosphorylation of serine 139 on H2AX ( phospho-H2AX), a well-established DSB biomarker, which is involved in amplifying the DNA damage response signalling cascade 9 10 and is one of the earliest, most robust cellular responses to DNA damage. 11 12 Disease-associated changes in phospho-H2AX levels were evaluated in CD4+ T cells, CD8+ T cells and monocytes from patients with SLE compared with healthy controls and patients with rheumatoid arthritis (RA) as disease controls. We also examined the relationship between phospho-H2AX levels and SLE disease activity. In each of the three cell subsets, we investigated the effectiveness of DSB repair in SLE by assessing phospo-H2AX levels in response to oxidative stress. Patients with SLE and matched controls Our study group comprised a total of 18 female patients with SLE, 15 female healthy controls and eight female patients with seropositive RA. Each patient with SLE fulfilled at least four of the American College of Rheumatology classification criteria for SLE. 13 In addition, none of the recruited patients had prior exposure to either cyclophosphamide or calcineurin inhibitors. All participants were recruited from the University of Michigan rheumatology clinics, and signed an informed consent approved by our institutional review board. The disease activity of each patient was assessed using the SLE disease activity index (SLEDAI), 14 by trained individuals in our Michigan Lupus Cohort. A summary of the demographic information, medications and SLEDAI scores for each patient included in this study is shown in table 1. Peripheral blood mononuclear cell isolation Fresh peripheral blood samples (80 mL) were collected from each study participant. Ficoll-Paque density gradient centrifugation (GE Healthcare Bio-Sciences AB, Uppsala, Sweden) was used to isolate PBMCs. After isolation, PBMCs were either stained for direct analysis by flow cytometry or used for the hydrogen peroxide treatment experiments prior to analysis. Flow cytometry and antibody staining PBMCs were stained with Pacific Blue anti-human CD3, APC anti-human CD4, PE anti-human CD8a and APC/ Cy7 anti-human CD14 (Bio Legend, San Diego, California, USA). The stained cells were then fixed and permeabilised using the Nuclear Factor Fixation and Permeabilization Buffer Set (BioLegend, San Diego, California, USA) as per the manufacturer's instructions. Intracellular staining was then performed using fluorescein isothiocyanate anti-phosphorylated (ser139) H2AX (clone 2F3; BioLegend). For the cell-cycle experiments, the PBMCs were additionally treated with 20 μg/mL RNase A (Sigma-Aldrich, St. Louis, Missouri, USA), and then stained with 40 μg/mL propidium iodide (BioLegend) as per manufacturer's instructions for 1 hour prior to analysis. 
Flow cytometry analysis was then performed using a MoFlow Astrios Flow Cytometer and Summit software V.6.2.3 (Beckman Coulter, Miami, Florida, USA). Phospho-H2AX levels are reported as median fluorescence intensity (MFI) values.

Statistical analysis
Phospho-H2AX levels were compared between SLE, healthy control and RA groups using unpaired Student's t tests that assumed equal variance between groups. The relationship between phospho-H2AX levels and SLEDAI was assessed by Pearson correlation analyses. For H 2 O 2 exposure time-course assays, phospho-H2AX levels were compared in each cell subset by multiple t tests that assumed equal variance, which were corrected for multiple testing using the Holm-Sidak method. Results were considered significant with a p<0.05, and all statistical analyses were performed using GraphPad Prism software V.6.07 (San Diego, California, USA).

Baseline DSB levels are increased in patients with SLE
We examined DSB accumulation in 14 patients with SLE, 10 healthy controls and eight patients with RA by measuring levels of phospho-H2AX, a rapid and sensitive marker for DSBs (figure 1A). 15 We then determined if the increased phospho-H2AX levels in SLE were independent of differences in cell-cycle phases. We measured phospho-H2AX levels at G0/G1, S and G2 cell-cycle phases in CD3+ T cells and monocytes that were isolated from a subset of eight patients with SLE and eight healthy controls. In SLE compared with healthy controls, there were significantly increased levels of phospho-H2AX in CD3+ T cells at each of the three cell-cycle phases. Together, these results demonstrate that patients with SLE exhibit significantly increased phospho-H2AX levels in CD4+ T cells, CD8+ T cells and monocytes. Furthermore, we show that this SLE-associated difference is independent of cell-cycle phase.

Oxidative stress from hydrogen peroxide increases DSB levels more rapidly in patients with SLE compared with healthy controls
We tested whether patients with SLE have increased sensitivity to DSB accumulation in response to oxidative stress. In six patients with SLE and in six healthy age- and sex-matched controls, PBMC samples were isolated and treated with 0.5 µM hydrogen peroxide (H 2 O 2 ) for 0, 2, 5, 10, 30 or 60 min. We used 0.5 µM H 2 O 2 as this was the lowest concentration we tested in a preliminary experiment that increased phospho-H2AX levels above baseline. The exposure time points of 0, 2, 5, 10, 30 or 60 min were used to examine the trend between the initial increase of phospho-H2AX, which occurs rapidly following DNA damage, and the highest phospho-H2AX levels, which are reached 30-60 min after the damage. 16 It is also worth noting that phospho-H2AX levels decreased at the 60 min exposure time in each control cell type, whereas the levels continued to increase in each SLE cell type. Interestingly, monocytes from healthy controls showed limited change from baseline phospho-H2AX levels at each H 2 O 2 exposure time point, suggesting a relative resistance in monocytes to accumulating DSBs compared with autologous CD4+ or CD8+ T cells.

DISCUSSION
SLE is characterised by increased levels of cellular apoptosis, inefficient clearance of apoptotic cells, and the production of autoantibodies, which may arise from increased exposure to nuclear self-antigens derived from apoptotic debris. [17][18][19][20] This SLE-associated increase in apoptosis is thought to be contributed to by defective DNA repair mechanisms.
Indeed, rare variants associated with SLE were shown to impair protein function of RNase H2, an essential enzyme that removes misincorporated ribonucleotides during DNA replication. The ineffective DNA damage repair that ensued resulted in increased expression of interferon (IFN)-regulated genes and an enhanced type-I IFN response in fibroblasts from patients. 21 Furthermore, defective repair of DSBs has been associated with EBV-transformed B-cell lines in paediatric SLE and PBMCs in lupus nephritis. 7 8 More recently, a polymorphism in the DNA repair gene RAD51B has been associated with increased risk of SLE. 2 In this study, we explored DSB accumulation for the first time in SLE CD4+ T cells, CD8+ T cells and monocytes, compared with healthy controls and patients with RA. In addition, we measured intracellular DSB accumulation with detailed analyses of phospho-H2AX by flow cytometry, which can quantify DSBs with high specificity and sensitivity to genotoxicity (91% and 89%, respectively), while offering statistical superiority to other phospho-H2AX assays by analysing greater numbers of cells. 22 23 The anti-phospho-H2AX antibody used in this study specifically detects phosphorylation of H2AX at serine 139, and has been previously validated and repeatedly used to accurately assess H2AX phosphorylation. 21 24 25 For each of the analysed cell types, we revealed significantly increased DSB accumulation in patients with SLE compared with healthy controls. Further, DSB accumulation was significantly higher in SLE compared with RA, an inflammatory disease control. It remains to be seen, however, how this increase in DSBs in SLE compares with other conditions known to be associated with increased oxidative stress, such as sepsis and radiation therapy. As phospho-H2AX levels are known to be decreased in G1 compared with both S and G2 phases, 26 27 we also assessed DSB accumulation at G0/G1, S and G2 phases to show significant DSB accumulation at each cell-cycle phase in SLE T cells and at G0/G1 and G2 phases in SLE monocytes. In combination, these findings suggest that increased DSB accumulation in SLE immune cells is independent of lineage and cell cycle. Our analysis of the relationship between disease activity and DSB levels in patients with SLE showed a significant positive correlation in CD4+ T cells and CD8+ T cells, independent of the cell-cycle phases. When the SLEDAI scores were calculated without criteria for low complement binding and increased anti-dsDNA antibodies, the correlation was also significant in CD4+ T cells, CD8+ T cells and monocytes, independent of cell cycle. These data might support a proof of concept for the potential use of phospho-H2AX as a biomarker of disease activity in SLE. Although the utility of this biomarker will require replication efforts with a larger cohort for validation, the benefits of using phospho-H2AX as a biomarker for early detection, prognosis and treatment efficacy have been already described in cancer. 28 29 Previous studies have shown that SLE lymphocytes and neutrophils are sensitive to oxidative stress. 30 31 In this study, we assessed the DNA damage response to oxidative stress in specific immune cell types using a timecourse treatment of H 2 O 2 . 
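As a rough, non-authoritative sketch of the per-time-point comparisons described in the Statistical analysis subsection (multiple equal-variance t tests corrected with the Holm-Sidak method), the Python snippet below uses scipy and statsmodels as stand-ins for the GraphPad Prism workflow actually used; the MFI arrays and their values are hypothetical placeholders, not study data.

```python
# Sketch: per-time-point group comparison with Holm-Sidak correction.
# All values are hypothetical; the study used GraphPad Prism, not this script.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
time_points = [0, 2, 5, 10, 30, 60]  # minutes of 0.5 uM H2O2 exposure

# Hypothetical phospho-H2AX MFI values (rows: 6 subjects, columns: time points).
sle = np.array([5, 9, 14, 20, 27, 33], float) + rng.normal(0, 2, (6, 6))
hc = np.array([4, 7, 10, 13, 14, 13], float) + rng.normal(0, 2, (6, 6))

# Unpaired, equal-variance t test at each exposure time.
p_raw = [stats.ttest_ind(sle[:, j], hc[:, j], equal_var=True).pvalue
         for j in range(len(time_points))]

# Holm-Sidak correction across the six time points.
reject, p_adj, _, _ = multipletests(p_raw, alpha=0.05, method="holm-sidak")
for t, p, r in zip(time_points, p_adj, reject):
    print(f"t = {t:2d} min: adjusted p = {p:.3g}, significant = {r}")
```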
In CD4+ and CD8+ T cells from healthy individuals, our data showed that phospho-H2AX levels did not increase further with H 2 O 2 exposures longer than 10 min. In addition, phospho-H2AX levels of healthy monocytes did not increase appreciably at any of the evaluated exposure times, indicating that monocytes may have an alternate or enhanced ability to handle oxidative stress. However, SLE phospho-H2AX levels continued to increase with H 2 O 2 exposure times for each of the three immune cell subsets. These data suggest that SLE is associated with defective DSB repair or an intrinsically higher susceptibility for DNA damage in CD4+ T cells, CD8+ T cells and monocytes.

Figure 4: Phospho-H2AX levels in CD4+ T cells, CD8+ T cells and monocytes (A, B and C, respectively) are compared between patients with SLE and healthy controls in response to hydrogen peroxide exposure (0.5 µM) at 0, 2, 5, 10, 30 and 60 min (n=6). Phospho-H2AX levels were measured by flow cytometry, and are provided as MFIs (mean±SEM). HC, healthy controls; MFI, median fluorescence intensity; phospho-H2AX, phosphorylated H2AX; SLE, systemic lupus erythematosus.

Taken together, these data provide evidence for defective repair of endogenous and oxidative stress-induced DSBs in SLE CD4+ T cells, CD8+ T cells and monocytes. In addition, DSB levels correlate with disease activity, and suggest further investigation of phospho-H2AX as a disease biomarker linking environmental exposures to the chromatin microenvironment in SLE. Importantly, our findings add to our current understanding of SLE, and offer further evidence for the role of aberrant DSB repair in disease pathogenesis.
2018-04-03T03:03:23.625Z
2016-04-01T00:00:00.000
{ "year": 2016, "sha1": "85f6e8a4194cba1088e17ac78d4d7b7a6bc5cffe", "oa_license": "CCBYNC", "oa_url": "https://lupus.bmj.com/content/lupusscimed/3/1/e000148.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "85f6e8a4194cba1088e17ac78d4d7b7a6bc5cffe", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
9494709
pes2o/s2orc
v3-fos-license
Cervix carcinoma is associated with an up-regulation and nuclear localization of the dual-specificity protein phosphatase VHR Background The 21-kDa Vaccinia virus VH1-related (VHR) dual-specific protein phosphatase (encoded by the DUSP3 gene) plays a critical role in cell cycle progression and is itself regulated during the cell cycle. We have previously demonstrated using RNA interference that cells lacking VHR arrest in the G1 and G2 phases of the cell cycle and show signs of beginning of cell senescence. Methods In this report, we evaluated successfully the expression levels of VHR protein in 62 hysterectomy or conization specimens showing the various (pre) neoplastic cervical epithelial lesions and 35 additional cases of hysterectomy performed for non-cervical pathologies, from patients under 50 years of age. We used a tissue microarray and IHC technique to evaluate the expression of the VHR phosphatase. Immunofluorescence staining under confocal microscopy, Western blotting and RT-PCR methods were used to investigate the localization and expression levels of VHR. Results We report that VHR is upregulated in (pre) neoplastic lesions (squamous intraepithelial lesions; SILs) of the uterine cervix mainly in high grade SIL (H-SIL) compared to normal exocervix. In the invasive cancer, VHR is also highly expressed with nuclear localization in the majority of cells compared to normal tissue where VHR is always in the cytoplasm. We also report that this phosphatase is highly expressed in several cervix cancer cell lines such as HeLa, SiHa, CaSki, C33 and HT3 compared to primary keratinocytes. The immunofluorescence technique under confocal microscopy shows that VHR has a cytoplasmic localization in primary keratinocytes, while it localizes in both cytoplasm and nucleus of the cancer cell lines investigated. We report that the up-regulation of this phosphatase is mainly due to its post-translational stabilization in the cancer cell lines compared to primary keratinocytes rather than increases in the transcription of DUSP3 locus. Conclusion These results together suggest that VHR can be considered as a new marker for cancer progression in cervix carcinoma and potential new target for anticancer therapy. Background The human genome contains 61 genes for Vaccinia virus H1-like, or 'dual-specific' protein phosphatases (DUSPs) [1], most of which have poorly understood functions. Many of these genes encode phosphatase that dephosphorylate the mitogen-activated protein kinases (e.g. MKP1/DUSP1, PAC1/DUSP2, MKP3/DUSP6, etc) or regulate the cell cycle (e.g. CDC14A and CDC14B). A group of 19 of these phosphatases consists only of a catalytic domain and have molecular weights of 18 -26 kDa. One of them is the 185-amino acid residues Vaccinia H1related (VHR), encoded by the DUSP3 gene [2], which dephosphorylates and thereby inactivates the mitogenactivated protein kinases Erk and Jnk in vivo. We have recently reported that the level of VHR fluctuate during the cell cycle: in early G1, VHR is barely detectable and then it increases to reach a peak before mitosis. Furthermore, the elimination of VHR by RNA interference resulted in cell cycle arrest in G1/S and G2/M [3,4]. This effect of VHR knock-down was counteracted by downmodulation of the levels of Erk and Jnk or by modest levels of Mek and Jnk inhibitors. 
Based on these data we proposed that VHR is important for cell cycle progression because it tempers Erk and Jnk during the S and G2/M phases of the cell cycle, where excessive activity of these kinases can activate cell cycle check points. The permissive role of VHR in cell cycle regulation prompted us to ask if VHR levels are perturbed in cancer cells. Uterine cervical cancer is an important health problem [5]. Human papilloma virus (HPV) is strongly implicated as a causative agent in the etiology of this type of cancer [6], which is preceded by well-characterized squamous intraepithelial lesions (SIL), only 10-15% of which will progress to malignancy [7]. However, HPV infection alone is not sufficient to induce cancer development. The role of the immune response in controlling HPV infection and subsequent development of SIL is indirectly established by the increased frequency of HPV-associated lesions in patients with depressed cell-mediated immunity. Indeed, our laboratory has demonstrated that HPV infection is associated with an abnormal cytokine production and diminished APC density function in the normal transformation zone (TZ) where the majority of SIL occurs [8]. How HPV causes premalignant and malignant disease appears to be a consequence of the signaling pathways that are stimulated or repressed in cells to enable the virus to replicate. Major targets of the viral proteins E6, E7 and E5 include respectively p53, proteins encoded by the retinoblastoma gene family and the MAPK activity [9]. Our laboratory has also showed that functional components of the NF-kB signaling pathway are up-regulated and sequestered in the cytoplasm of human papillomavirus 16 (HPV16) transformed cell lines leading to a reduced activity of NF-κB [10]. Mutations in p53 gene have been also reported [11]. It is clear that more investigations on signaling pathways are required to better understand the tumorigenesis associated with HPV infections. Case selections and microarray construction Following approval from our institutional review board, a tissue microarray (TMA) was constructed with 62 hysterectomy or conization specimens showing the various (pre) neoplastic cervical epithelial lesions and 35 additional cases of hysterectomy performed for non-cervical pathologies, from patients under 50 years of age. The cases were retrieved from the archived Tissue Bank at Liège University (Liège Biothèque: BTULg) and represented biopsies diagnosed between the years 1998 and 2006. These cases were selected on the basis of availability of at least one evaluable tissue representative hematoxylin and eosin-stained slide and a paraffin block. For each case, slides were reviewed to select a representative area. The corresponding spot on the associated paraffin block was then cored and placed on a tissue microarrayer (Beecher Instruments, Sun Prairie, WI, USA). The TMA blocks were constructed in doublets (2 spots for each diagnostic entity) using 1 mm tissue cores (Alphelys, Plaisir, France). The whole series supplied a total number of 194 spots including duplicate of 11 L-SIL, 18 H-SIL, 12 SCC, 12 primary cervical adenocarcinoma (ADC) and 9 adenocarcinoma in situ (AIS). From the cases without cervical lesions, duplicate of 16 ectocervix and 19 endocervix tisues were arrayed. The finalized arrays were then cut into 5 μm-thick sections and mounted on glass slides. 
In addition to the formalin-paraffin embedded tissues that were used for the TMA preparation, additional frozen cervical biopsy specimens were retrieved from the Tumor Bank of Liege University (BTULg). These biopsies included 10 high-grade squamous intraepithelial lesions (HSIL), 10 invasive squamous cell carcinoma (SCC) and 10 paired normal exocervical tissues from the same patients. These biopsies were used to perform immunofluorescence studies under confocal microscopy. Immunohistochemistry and immunofluorescence The tissue microarray slides were stained with antibodies against VHR (Clone 237020 dilution 1:2500, R&D Systems, Minneapolis, MN) using a standard avidin-biotin complex method. Tissue microarray slides were deparaffinized with xylene, graded alcohol then rehydrated with distilled water. Endogenous peroxidase activity was blocked by placing the slides in 0.5% hydrogen peroxidase/methanol for 10 minutes followed by a tap water rinse. Background staining was reduced by incubating slides in 0.3% bovine serum albumin/Tris-buffered saline. Antigen retrieval entailed placing the slides in a pressure cooker with an antigen unmasking solution (0.01 M citrate buffer, pH 6.0) for 1 minute. Slides were subsequently incubated with the primary (4°C overnight), then biotinylated secondary antibodies and streptavidin-biotin peroxidase. 3'3' diaminobenzidine (DAB) was used as chromogen and sections were counterstained with hematoxylin. Immunofluorescence staining on the cervix biopsies was performed with a monoclonal antibody directed against VHR (Clone 24 dilution 1:2500, BD, San Diego, CA). The primary antibodies were revealed with Alexa-488 conjugated secondary antibody together with TOTO-3 to stain nuclei. The sections were mounted and viewed under a confocal laser scanning microscopy TCS SP2 (Leica TCS SP2, Van Hopplynus, Belgium). For immunofluorescence and immunohistochemistry on cells in culture, the cells were grown on poly-L-lysine coated coverslips and fixed with 4% formaldehyde. The fixed cells were permeabilized with 0.3% of Triton X-100/ PBS buffer then stained either with anti-VHR mAb (4 μg/ ml) (BD-transduction laboratories) or with p16 antibody (NeoMarkers). After 3 washes, the primary Ab was revealed with an Alexa-488 conjugated secondary Ab together with TOTO-3 (Invitrogen) to stain nuclei and visualized under confocal microscopy. For immunohistochemistry, the anti-VHR antibody was revealed using an HRP secondary detection kit (Universal LSABTM 2 KIT/ HRP, Rabbit/Mouse, DakoCytomation). The stained cells were mounted and visualized on light microscopy. Scoring of immunohistochemical staining The VHR immunostaining was scored semi-quantitatively. For staining intensity, 0 represented samples in which VHR nuclear and/or cytoplasmic staining was undetectable, whereas 1+, 2+, and 3+ denoted samples with low, moderate, and strong staining, respectively. For staining extent, in normal ectocervix and the various grades of SIL, 1+ represented samples in which VHR expression was detectable in the lower 1/3 of the epithelium whereas 2+ denoted samples in which the lower 2/3 of the epithelium showed detectable VHR expression and 3+ represented those in which the immunoreactive cells reached the upper epithelial 1/3. 
For the extent of staining in SCC, normal endocervix, AIS and ADC, 1+ represented samples in which VHR expression was detectable in up to 33% of the epithelium whereas 2+ denoted samples in which 33-66% of the epithelium showed detectable VHR expression and 3+ represented those in which more than 66% of the cells were stained. In order to provide a global score for each case, the results obtained with the two scales were multiplied, yielding a single scale with steps of 0 to 9. The microarrays were scored by 2 independent observers and discrepancies were resolved during a consensus session. To externally validate the staining patterns observed in the TMA, full representative tissue sections of 10 SCC were randomly selected, stained with VHR and scored using the same system as used with the microarray. Cell Lines and primary keratinocytes Five different cell lines derived from cervix cancer were used in this study: HeLa, SiHa and CaSki (all positive for HPV) and C33 and HT3 (HPV negative). Immortalized human foreskin keratinocytes stably transfected with E6 and E7 (E6/E7) were previously described [12] and kindly provided by F. Rosl (Heidelberg, Germany). The cells were grown in DMEM medium (Dulbecco's modified Eagle's medium; ICN; Flow Laboratories) complemented with 10% heated inactivated fetal calf serum (FCS), 30 units/ ml of penicillin, 30 μg/ml of streptomycin and 2 mM of Lglutamine. Primary keratinocytes (KN) were prepared from hysterectomies. Fragments were plunged in a solution containing gentamycin, fungizon and anti-mycoplasm. These fragments were cut in smaller pieces, and then incubated in trypsin-EDTA (Invitrogen) at 37°C under agitation for 1-2 hours. The epithelium was scraped and cells were recovered in FCS. After centrifugation, the cells were resuspended in K-SFM medium (Serum Free Media; Invitrogen) complemented with EGF (0.1 ng/ml), pituitary hormone (20-30 μg/ml) and gentamycin (5 μg/ml). Cell lysates and Immunoblotting Cells were lysed in 20 mM Tris-HCl at pH 7.5, 150 mM NaCl, 5 mM EDTA containing 1% NP-40, 1 mM Na 3 VO 4 , 10 μg/ml aproptinin and leupeptin, 100 μg/ml soybean trypsin inhibitor and 1 mM phenylmethylsulfhonyl fluoride, incubated on ice for 30 min then centrifuged at 20,000 g for 20 min. The proteins were then resolved by SDS-PAGE and transferred onto nitrocellulose membrane. The membranes were immunoblotted with optimal dilutions of monoclonal primary antibodies, followed by an HRP conjugated anti-mouse secondary Ab. The blots were developed by enhanced chemiluminescence (ECL kit, Amersham) according to the manufacturer's instructions. RNA preparation and RT-PCR Total RNA was extracted from primary keratinocytes and the different cell lines using the High Pure RNA Isolation Kit (Roche Diagnostics, Germany) according to the procedures supplied by the manufacturer. Reverse transcription was performed using the RT-PCR kit (Applied Biosystem, Foster City, CA). The PCR reaction was performed using VHR specific primers (5'-ATGTCGGGCTCGTTCGAGCTC-3' and 5'-CTAGGGTTTCAACTTCCCCTC-3') and normal-ized with HPRT (5'-GTTGGATACAGGCCAGACTTT-GTTG-3' and 5'-GATTCAACTTGCGCTCATCTTAGGC-3'). VHR expression and localization in cervix biopsies VHR expression at the protein level was studied using a tissue microarray mounted in normal exocervix (n = 16), low-grade SILs (n = 11), high-grade SILs (n = 18), invasive SCCs (n = 12), normal endocervix (n = 19), adenocarcinoma (n = 12) and adenocarcinoma in situ (n = 9). 
Semiquantitative evaluation of VHR staining is shown in figure 1. The VHR score was statistically higher in H-SILs and SCCs (p < 0.0001 and p < 0.05, respectively) compared to normal exocervical epithelium (Fig. 1, upper panel). The VHR score was also significantly higher in ADC and AIS compared to normal endocervix (p < 0.05 and p < 0.0001, respectively) (Fig. 1, lower panel). The VHR immunoreactivity was very low in the (para)basal cell layers of the normal squamous epithelium (Fig. 2A, panel a), whereas an intense staining was observed in H-SIL and SCC (Fig. 2A, panels c and d). In contrast to normal epithelium, VHR was both nuclear and cytoplasmic in H-SIL and SCC (Fig. 2A, panels c and d). High immunoreactivity of VHR was also observed in ADC and AIS, but no nuclear staining was detected in these two categories of cervix cancer (Fig. 2A, panels e, f and g). In order to confirm the localization differences of VHR in cancer biopsies versus exocervix epithelium, we performed immunofluorescence analysis under laser scanning confocal microscopy. Figure 2B shows that the VHR phosphatase is barely detectable and localizes exclusively in the cytosol of the keratinocytes of the normal exocervix. In H-SIL, VHR is highly expressed and has both nuclear and cytoplasmic localization in several cells of the epithelium. However, in SCC, VHR is highly expressed compared to the exocervix of the same patient, with mainly nuclear and perinuclear localization (Fig 2B).

VHR in cervix cancer cell lines
By immunocytochemistry, the levels of VHR were much higher in the cervix cancer cell lines compared to the primary keratinocytes (Fig. 3A). The presence or absence of HPV in the cells did not affect VHR levels. While VHR was excluded from the nucleus in primary keratinocytes, it was partly nuclear in the cervix cancer cell lines. These observations were confirmed by immunofluorescence staining and confocal microscopy (Fig. 3B). VHR was overexpressed in all the cell lines used compared to primary keratinocytes and was localized in both cytoplasm and nucleus in the cervix cancer cell lines, while it was barely detectable and never nuclear in the primary keratinocytes (Fig. 3B). The degree of VHR overexpression was estimated by immunoblotting and densitometric VHR/Actin ratio to be 1.5 to 2.7 higher in the cervix cancer cells compared to normal cells (Fig. 4).

Figure 1: Semi-quantitative evaluation of the expression of VHR phosphatase in neoplastic and pre-neoplastic cervical lesions. A group of 62 hysterectomy or conization specimens showing the various (pre)neoplastic cervical epithelial lesions (L-SIL (n = 11), H-SIL (n = 18), SCC (n = 12), ADC (n = 12) and AIS (n = 9)) and 35 cases of hysterectomy performed for noncervical pathologies, from patients under 50 years of age, was evaluated for the expression of VHR protein using the IHC method on a tissue microarray. The score for VHR staining (staining intensity × staining extent) is presented on the y axis. Each point represents one patient and the average expression level is indicated by a bar. NEx, normal exocervix; NEnd, normal endocervix; L-SIL, low-grade SIL; H-SIL, high-grade SIL; ADC, adenocarcinoma; AIS, adenocarcinoma in situ. * p < 0.05 and ***p < 0.0001.

Figure 2 (caption excerpt): In endocervical adenocarcinoma (f) and AIS (g) there is strong cytoplasmic staining of VHR. Brown color represents VHR-specific staining and blue is the staining for nuclei.
Figure 2B (caption excerpt): Immunofluorescent staining, under laser scanning confocal microscopy, of cervical biopsies showing a low VHR immunoreactivity and absence of nuclear staining in exocervical epithelium, and high VHR expression with nuclear and peri-nuclear localisation in H-SIL and SCC. Toto-3 (blue) was used to stain the nuclei and merged to VHR staining (green).

VHR half-life in cervix cancer cell lines and primary keratinocytes
To elucidate the molecular mechanism responsible for the increased amounts of VHR protein in cervix cancer, we first measured VHR mRNA in the cervix cancer cell lines compared to primary keratinocytes by semi-quantitative RT-PCR. These analyses showed that mRNA levels were indistinguishable between the five cell lines and three different samples of primary keratinocytes derived from hysterectomies (Fig. 5A). These data suggest that the increased amount of VHR protein is not due to increased transcription of the DUSP3 locus or stabilization of the VHR mRNA, but is more likely caused by increased translation or decreased degradation of VHR protein in the cancer cells. We next examined VHR protein turnover by CHX chase and western blot analysis. Primary keratinocytes, HeLa cells, C33 and E6E7 transformed cell lines were incubated with CHX (50 μg/ml) for 1, 2, 4, 6 and 8 hours. Cell lysates were prepared and VHR levels versus actin were analyzed by Western blot. After 2 hours of CHX treatment, the level of the expressed VHR decreased by 50% in primary keratinocytes, while it did not change in HeLa, C33 or E6E7 cells up to 8 hours after CHX treatment (Fig 5B). These results demonstrate that the VHR half-life is shorter than 2 hours in primary keratinocytes and longer than 8 hours in the cancer cells tested.

Discussion
The development and progression of cervical carcinoma is dependent on both genetic and epigenetic events, including alterations in the cell cycle machinery at various checkpoints. Indeed, cervical carcinoma is associated with aberrant regulation of cyclins D1 and E [13], p16 [14], p21 and p27 [15]. In the present work, we describe that the newly discovered cell cycle regulator, the dual-specificity phosphatase VHR [3], is increased at the protein level in five different cervix cancer cell lines positive (HeLa, CaSki and SiHa) or negative (C33 and HT3) for HPV compared to primary keratinocytes prepared from hysterectomies. Thus, VHR expression is not related to the high-risk human papillomavirus. Importantly, the elevated levels of VHR are not an artefact of cell culture. In primary cervix cancer biopsies, VHR is highly expressed in several cells of the epithelium of all the H-SIL analyzed in this study as well as in the SCCs, ADC and AIS. The number of intensely stained cells increased markedly in SCC cases. The staining for VHR and p16 in serial sections showed that all the VHR-positive cells were also positive for p16 (not shown), which is considered a marker of cervical (pre)cancer cells [12]. This is not surprising since VHR is barely detectable in G1 phase cells, but gradually increases during the progression of the cells to G2/M phase [3]. Thus, VHR, like p16, is a marker of cells in cycle (S, G2 and M phases). These results together suggest that VHR can be considered as a new marker for cancer progression in cervix carcinoma.
We also found that VHR is a cytosolic protein in primary keratinocytes, but localizes both in the cytoplasm and the nucleus in cervix cancer cells. VHR does not contain recognizable nuclear localization or nuclear export sequences, but is small enough to passively diffuse into the nucleus. Thus, its exclusion from the nucleus in primary keratinocytes is likely an active process. Preliminary results from our laboratory show that VHR associates with cyclins (unpublished observation), perhaps explaining its retention in the nuclear or cytosolic compartments in normal versus malignant cells. Thus, it appears that VHR location may be connected to cell cycle-dependent transport of cyclins or dysregulation of cyclins in cancer [16]. What may be the purpose of elevated VHR in cervix cancer? Based on our previous findings [3,4], we believe that VHR is important for cell cycle progression because it tempers Erk and Jnk, preventing them from becoming too active in S through G2, while still allowing them to drive G1 progression. In support of this view, it was recently reported that both Erk and Jnk are less active in cervix cancer cells than in premalignant lesions [18]. Finally, because VHR is regulated during cell cycle progression, because its suppression by RNAi [3] halts cellular proliferation and induces cellular senescence, and, more importantly, because of its overexpression in cervix cancer, we propose that VHR may be a good target for anticancer therapy. An exciting possibility is that cervix cancer cells that have adapted to high levels of VHR expression will be sensitive to small-molecule VHR inhibitors, while untransformed cells are much less dependent on VHR for proliferation and survival. We have indeed developed novel multidentate small-molecule inhibitors of VHR that inhibit its enzymatic activity at nanomolar concentrations in vitro, and are active at low micromolar concentrations on several cell lines and primary cells. We have tested these inhibitors on HeLa cells and primary keratinocytes and demonstrate that they halt HeLa cell proliferation while having no effect on primary keratinocytes (Tautz L, submitted).

Conclusion
VHR is an important cell cycle regulator. Loss of this phosphatase causes senescence and prolonged hyperactivation of the Erk and Jnk pathways. Together with the new findings reported in this manuscript, these results might ultimately lead, in the near future, to a pharmacological approach to inducing senescence in tumor cells, which would be a radical approach to treating cancer.
2014-10-01T00:00:00.000Z
2008-05-27T00:00:00.000
{ "year": 2008, "sha1": "d22eb7cf5f0c7b95646ab8d7897ea28bd083fa47", "oa_license": "CCBY", "oa_url": "https://bmccancer.biomedcentral.com/track/pdf/10.1186/1471-2407-8-147", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d22eb7cf5f0c7b95646ab8d7897ea28bd083fa47", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
242192194
pes2o/s2orc
v3-fos-license
Inversion of Temperature and Humidity Profile of Microwave Radiometer Based on BP Network In this paper, the inversion method of atmospheric temperature and humidity profiles via ground-based microwave radiometer is studied. Using the three-layer BP neural network inversion algorithm, four BP neural network models (temperature and humidity models with and without cloud information) are established using L-band radiosonde data obtained from the Atmospheric Exploration base of the China Meteorological Administration from July 2018 to June 2019. Microwave radiometer level 1 data and cloud radar data from July to September 2019 are used to evaluate the model. The four models are compared with the measured sounding data, and the inversion accuracy and the influence of cloud information on the inversion are subsequently analyzed. The results show the following: the average errors of temperature and humidity profiles for the model without cloud information are 1.18°C and 11.7%, while the average errors of temperature and humidity profiles for the model with cloud information are 0.71°C and 6.09%. Compared with the profiles that lack cloud information, the RMSE of most altitudes is reduced to some extent after cloud information is added, which is particularly obvious at layers where cloud is present. Introduction Temperature and humidity are very important indicators in the field of climate research, as they can directly reflect the heat and water vapor conditions in the atmosphere and have a clear impact on the accuracy of meteorological products. Grasping real-time changes in the temperature and humidity profile is of great significance for satellite positioning, warning, artificial weather modification and other such meteorological activities. It is thus particularly important to accurately detect atmospheric indicators in real time in order to obtain temperature and humidity profile information. Previously, traditional weather detection methods have primarily used sounding balloons, sounding rockets, satellite remote sensing methods and other technical means to measure elements in the atmosphere and obtain the temperature and humidity profiles over time. In 1985, Liu [1] began to use VHF radar to detect the observational data of the atmospheric structure and inverted the temperature profile; in 1990, Wang [2] used the improved empirical orthogonal function (EOF) expansion method and the simulated radiant temperature values of six O25mm channels in the satellite Advanced Microwave Sounder (AMSU) to perform experiments that inverted the vertical distribution of atmospheric temperature, with good results; moreover, in 2003, Wu et al. [3] investigated satellite detection techniques for both infrared hyperspectral and inversion methods using existing airborne and satellite-borne hyperspectral data, focusing on the inversion method of the atmospheric infrared detector AIRS, and summarized the process of a standardized inversion method; in 2010, Dong et al. [4] analyzed the main features of the Fengyun-3A (FY-3A) meteorological satellite and identified the various applications that can be made to its observational data processing to generate the distribution of changes in atmospheric temperature and humidity, and provided new ideas; in 2017, Zhou [5] compared and analyzed the models of temperature and humidity profiles inverted by FY-4 hyperspectral infrared vertical detector (GIIRS) and Metop-A hyperspectral infrared vertical detector (Iasi). 
The analysis results show that the inversion accuracy of atmospheric temperature by GIIRS is inferior to that of Iasi in the upper layer, Other aspects are superior to Iasi; in 2019, Guan et al. [6] studied the variational inversion method of the atmospheric temperature and water vapor mixing ratio profile based on Metop-A/Iasi infrared hyperspectral data. The experimental results show that the atmospheric temperature and water vapor mixing ratio profile can be detected with high precision through the use of Metop-A/Iasi infrared hyperspectral data based on the one-dimensional variational method. However, the characteristically low spatial and temporal resolution of sounding balloons cannot meet the development requirements of modern meteorology, while remote sensing satellites will suffer due to cloud cover and poor detection effect at low-altitude latitudes. Therefore, many experts and scholars have committed themselves to the study of ground-based remote sensing technology for atmospheric detection. Among these techniques, microwave radiometer equipment based on ground-based remote sensing technology is relatively mature. Microwave radiometers have been widely used in atmospheric temperature and humidity profile detection, and are also complementary with other sounding equipment data; this has produced good results, including cloud radar, etc. However, the principle behind microwave radiometers is to obtain the brightness temperature of the atmosphere in order to invert the temperature and humidity profile by receiving the atmospheric thermal radiation. However, this detection principle also means that the equipment has certain limitations. It has been found that different regions, seasons, climates, quality control algorithms and inversion algorithm models will have differing impacts on the inversion effect of the microwave radiometer; among these, cloud weather factors have the most significant impact on the inversion effect of the microwave radiometer. Based on the above analysis, this paper will apply the cloud information measured via millimeter wave cloud radar to the inversion process, build two sets of inversion algorithm models on the basis of the BP neural network algorithm, distinguish between the two depending on whether or not cloud information is added, and use sounding data as the standard. By comparing the prediction results of the two models with the actual situation, the influence of the cloud cover information on the inversion of the atmospheric temperature and humidity profile can be analyzed. The remainder of this paper is structured as follows. The second part primarily introduces the related work conducted by predecessors in this field. The third section introduces the data and algorithms used in this paper. The fourth section compares and analyzes the experimental results. Finally, the fifth section summarizes the experiments and improved results presented in this paper. Related Work Microwave radiometer is not a very new type of equipment; as early as the 1960s, related scholars began research into and accuracy correction of microwave radiometers. In 1969, Westwater et al. [7] began to explore ground-based microwave remote sensing, successfully developed microwave radiometers for k-wave end (water vapor microwave absorption peak area) and V-band (oxygen microwave absorption peak area), and inverted the atmospheric temperature profile. In 1985, Chedin et al. 
[8] explored the atmospheric transmission process, proposed the pattern recognition method, selected the approximate optimal solution from multiple groups of data, and then used the Bayes algorithm to invert the temperature and humidity profile. In 1986, Wang et al. [9] used a microwave radiometer to detect the atmospheric temperature and thereby improved the inversion accuracy of the temperature profile. In 2011, Liu et al. [10] analyzed the accuracy of the temperature profile data measured via ground-based microwave radiometer an observatory in southern suburban Beijing. In 2012, Wang et al. [11] set up training samples to train neural network models for different types of weather and different seasons. Their results showed that the neural network model's calculation accuracy is significantly better than the network algorithm built into the ground-based microwave radiometer. In 2013, Sanchez et al. [12] proposed a plan for quality control along the height of the MP-3000A ground-based microwave radiometer in order to obtain a more accurate atmospheric profile. In 2014, Tan et al. [13] used an airborne microwave radiometer to analyze the influence of different combinations of channels, observation error, platform flight altitude and other factors on inversion performance. In 2015, Li et al. [14] compared and analyzed the secondary data of a microwave radiometer with the historical sounding data, corrected the deviation of the data and obtained a result that is closer to the sounding data. In 2018, Mao et al. [15] directly compared and analyzed the first-level brightness temperature data observed by the microwave radiometer, investigating the detection accuracy of the microwave radiometer in different types of weather and different seasons. In 2019, Wu et al. [16] compared the accuracy errors of specific radiometers horizontally and further analyzed the internal and external errors of inversion errors for a mp3000a microwave radiometer. In 2020, Xu et al. [17] compared the accuracy errors of certain radiometers with those of small UAVs. The temperature and humidity data inverted from the simultaneous observations of the radiosonde and microwave radiometer further prove the microwave radiometer's performance; it appears that the detection and inversion ability is improved through the application of this method. Since the inception of the microwave radiometer, numerous experts and scholars have applied it to the inversion of temperature and humidity profiles, and have carried out research into the algorithm to improve the inversion performance. Common methods include the Kalman filter method, optimal estimation method, Newton iteration method, statistical regression method, empirical orthogonal function method, control least square method, best extension method, estimation theory methods, artificial kernel function method, Monte Carlo method, neural network method, etc. [18]. The optimal estimation method has several advantages including algorithmic simplicity, high accuracy, and ability to show abnormal changes. However, it depends on historical data, which requires a large amount of calculation and is thus incapable of meeting the real-time requirements for obtaining temperature and humidity profiles. The Monte Carlo method does not rely on historical data, but still encounters the same fatal problem as the optimal estimation method (i.e., a long operation time). 
For its part, the Kalman filter method has strong adaptability, is suitable for real-time processing, and can provide an estimation error variance while making an estimate; however, an accurate filtering model needs to be established, and the problem of filtering divergence still remains. The statistical regression algorithm has high accuracy and fast speed, can detect abnormal changes, and is stable overall; however, the model needs to be established in advance. The neural network algorithm shares the advantages of statistical regression and, owing to its "black box" nature, does not require a model to be built in advance. Weighing these trade-offs, the neural network approach compares favorably with the other algorithms and is thus widely used in the microwave radiometer inversion process. In 2011, Yang et al. [19] used a neural network algorithm to invert the temperature and humidity profile on sunny days, proving that the extraordinary nonlinear fitting ability of the neural network algorithm can be used to invert the temperature and humidity profile. In 2013, Huang et al. [20] used the neural network and multiple linear regression algorithms to invert the temperature and humidity profile under the same experimental sample conditions, obtaining results showing that the neural network algorithm achieves a better inversion effect. In 2018, after conducting quality control and correction of the original brightness temperature data observed by the microwave radiometer, Bao et al. [21] utilized the neural network algorithm to invert the atmospheric temperature and humidity profile, obtaining good results.

Data Selection
The data used in this paper are primarily sounding data, microwave radiometer data and millimeter-wave cloud radar data. Among these, the sounding data are L-band sounding data obtained twice per day at the meteorological station in a southern suburb of Beijing, China, from July 2018 to June 2019. The microwave radiometer data are obtained from the Airda-HTG3 14-channel microwave radiometer installed at the Atmospheric Exploration and Test base of the China Meteorological Bureau; the K-band (22~32 GHz) and V-band (51~59 GHz) each have seven channels, with center frequencies distributed within these two bands. The instrument provides four kinds of data output, of which only the Level 1 (first-level quality control) data are needed here to obtain the brightness temperature; these data cover January 2019 to September 2019. The millimeter-wave cloud radar data cover July 2019 to September 2019 and come from the Ka-band 35 GHz all-solid-state vertical-pointing Doppler weather radar of the Beijing Meteorological Station, which has a measuring height of 12 km, a vertical spatial resolution of 30 m, and an adjustable time resolution of 1~60 s.

Inversion Method
In theory, the neural network algorithm can approach any complex nonlinear relationship, so it is not necessary to design a highly complex inversion algorithm. It has thus been successfully used in the field of atmospheric parameter profile inversion and is a relatively mature and widely used inversion algorithm. In this paper, four inversion models are proposed: model A1 and model A2, with no cloud information added in the input layer, and model B1 and model B2, with six cloud-information nodes added in the input layer on the basis of the Model A configuration. All four models use a three-layer feedforward BP neural network.
The hyperbolic tangent sigmoid transfer function tansig is selected from the input layer to the hidden layer, while the linear transfer function purelin is selected from the hidden layer to the output layer, enabling continuous function mapping to be realized with arbitrary precision. There are 17 nodes in the Model A input layer (without cloud information). The first 14 nodes are the measured brightness temperature values of the 14 channels of the microwave radiometer, while the last three nodes are the ground temperature, relative humidity and pressure. The number of output layer nodes is set to 83: that is, the temperature or relative humidity at the 83 height layers into which the atmosphere is divided from the ground to a height of 10 km. The specific layering is as follows: every 25 meters from 0 to 500 meters, every 50 meters from 500 to 2000 meters, and every 250 meters from 2000 to 10,000 meters. Due to the high-speed requirement for atmospheric profile inversion in practical applications and the singular nature of the data samples used in this paper, a single hidden-layer structure is utilized. The number of hidden layer nodes is calculated according to the formula proposed in Gao [22]. In Eq. (1), n denotes the number of hidden layer nodes, a is the number of input layer nodes, and b refers to the number of output layer nodes. According to the calculations, Model A has 40 hidden layer nodes. Model B, with cloud information added, has 23 input layer nodes: the first 17 are consistent with Model A, while the last six carry the information of three layers of clouds. If clouds are present at that time, we fill in, from low to high for up to three cloud layers, the height of the cloud base and the thickness of the cloud layer; if there is no cloud, or there are fewer than three cloud layers, the remaining values are set to 0. The output layer nodes are the same as in Model A, with 83 nodes. The number of hidden layer nodes is calculated according to formula (1) and determined to be 42. Sample Construction Building a BP neural network model for inverting temperature and humidity profiles requires a large amount of microwave radiometer data to be available for use as samples. However, due to the limited amount of measured microwave radiometer data and cloud radar data, this paper opts to use L-band sounding data as sample data. The first fourteen nodes of the input data for model training are the brightness temperature values, which are not included in the radiosonde data. This paper therefore uses a radiation transfer model to simulate the brightness temperature data of the microwave radiometer corresponding to the sounding data. First, the radiosonde data are taken as the input, and the MonoRTM radiation transfer model is used to simulate and calculate the Level 1 brightness temperature data of the microwave radiometer. The final three nodes of the input data are the surface temperature, humidity and pressure values; in the sounding data, these are the temperature, humidity and pressure values corresponding to the bottom height. In Model B (with cloud information added), the input node has six additional cloud layer data points. We determine whether the relative humidity of each height layer reaches 85% in order to judge where the profile enters or exits a cloud, and then take the cloud-base height and cloud thickness of the three lowest cloud layers as the sample input of the last six nodes [23,24].
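A minimal sketch of the cloud-layer derivation described above: relative humidity at each height level is thresholded at 85% to mark entry into and exit from a cloud, and the cloud-base height and thickness of up to three layers (lowest first) are returned, with zeros padding the remaining slots. The function and variable names, and the regular height grid in the example, are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def cloud_layers(heights_km, rh_percent, rh_threshold=85.0, max_layers=3):
    """Return [base1, thick1, base2, thick2, base3, thick3] in km, zero-padded.

    heights_km and rh_percent are 1-D arrays on the same vertical grid;
    a level is treated as 'in cloud' when RH >= rh_threshold.
    """
    in_cloud = np.asarray(rh_percent) >= rh_threshold
    layers = []
    i, n = 0, len(in_cloud)
    while i < n and len(layers) < max_layers:
        if in_cloud[i]:
            base = heights_km[i]
            while i + 1 < n and in_cloud[i + 1]:   # walk to the cloud top
                i += 1
            layers.append((base, max(heights_km[i] - base, 0.0)))
        i += 1
    out = []
    for k in range(max_layers):
        out.extend(layers[k] if k < len(layers) else (0.0, 0.0))
    return out

# Example: a single cloud between roughly 0.2 and 1.9 km in a synthetic profile
z = np.arange(0.0, 10.1, 0.1)                      # height grid in km
rh = np.where((z > 0.2) & (z < 1.9), 90.0, 60.0)   # RH above threshold inside the cloud
print(cloud_layers(z, rh))
```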
The output data for the model training are the temperature and humidity values that correspond to the 83 altitude layers. The appropriate temperature and humidity values can be obtained by interpolating the sounding data. The samples utilized in the model training contain sounding data obtained at the China Meteorological Administration's atmospheric sounding test base from July 2018 to June 2019, with a total of 700 valid data points. During the training process, 70% of the samples are used as the training set, with the remaining 30% used as the test set. The model quality is evaluated by means of cross-validation. Brightness Temperature Correction The brightness temperature is the actual measured value of the microwave radiometer. The accuracy of the brightness temperature data directly affects the accuracy of the inversion result. Due to differences between microwave radiometer instruments and their detection behavior, a difference still remains between the measured brightness temperature and the simulated brightness temperature [25]. This experiment uses sounding data to simulate the brightness temperature during the model training process, while the actual measured brightness temperature is used during the inversion. The difference between the two causes bias in the inversion results. Since the actual brightness temperature measured by the microwave radiometer is used to invert the temperature and humidity profile, it is necessary to correct the deviation of the brightness temperature value before the inversion is performed. Using the measured and simulated brightness temperatures from January to June 2019, a linear fit between the measured and simulated brightness temperatures is performed for each channel. Finally, the corrected relationship expression for the measured brightness temperature is obtained as shown in Tab. 1; here, Y is the brightness temperature value after deviation correction, while X denotes the actual brightness temperature measured by the microwave radiometer. Model Training In this experiment, sounding data from July 2018 to June 2019 are used to train the model. For the model without cloud information, the simulated brightness temperature and the ground temperature, humidity and pressure data are used as input, while the sounding temperature and humidity profile at the corresponding observation time is used as the output to train the neural network. Moreover, for the model with cloud information, six nodes (the cloud base height and cloud thickness of three cloud layers) are added to the input nodes of the former, after which the model is trained in the same way. Limitations of the Algorithm The accuracy of the BP neural network inversion model studied in this paper can easily be affected by the quality of the training samples; a good model can only be trained using good-quality, representative training samples. However, due to instrumentation errors in the sounding balloon and the impact of wind on the observations, the spatial locations of the temperature and humidity profile observed by the sounding instrument and of the profile detected by the microwave radiometer do not correspond exactly; this produces inversion errors, which represents a limitation of the inversion algorithm in this paper. Scheme In this paper, we train four inversion models (model A1, model A2, model B1, model B2) according to whether or not cloud parameters are among the input factors.
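The channel-wise bias correction described in the Brightness Temperature Correction paragraph above can be sketched as a simple linear fit of the simulated against the measured brightness temperatures for each of the 14 channels; the fitted coefficients then map a measured value X to a corrected value Y = aX + b. Array names and shapes are assumptions for illustration.

```python
import numpy as np

def fit_correction(measured, simulated):
    """Fit Y = a*X + b per channel.

    measured, simulated: arrays of shape (n_samples, n_channels) holding the
    observed and MonoRTM-simulated brightness temperatures (K).
    Returns (a, b), each of length n_channels.
    """
    n_channels = measured.shape[1]
    a = np.empty(n_channels)
    b = np.empty(n_channels)
    for ch in range(n_channels):
        # np.polyfit with deg=1 returns [slope, intercept]
        a[ch], b[ch] = np.polyfit(measured[:, ch], simulated[:, ch], deg=1)
    return a, b

def apply_correction(measured, a, b):
    """Correct measured brightness temperatures channel by channel (broadcasts over samples)."""
    return a * measured + b
```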
Depending on whether there are clouds (sunny days or cloudy days) within the range of 0-10 km, the test data are divided into eight experimental schemes. The inversion models are run under different schemes to create a contrast, which allows the actual effects of the two types of inversion model to be verified. These details are presented in Tab. 2. Metric Root mean square error is a statistic that describes the degree of data dispersion. The root mean square error (RMSE) is calculated as RMSE = sqrt((1/N) ∑_{i=1}^{N} (ŷ_i − y_i)²), where ŷ_i indicates the prediction result at the i-th observation time, y_i indicates the real sounding data at the i-th observation time, and N indicates the number of observation times. It follows from this definition that the accuracy of the prediction is quantified by calculating the RMSE value between the predicted results and the real conditions. Results The experiment in this paper uses sounding data as the standard, employing case analysis and statistical analysis to compare the inversion results of the two models with the sounding data. First, temperature and humidity profiles are drawn for the predictions of specific cases in order to visually demonstrate the models' predictive ability; subsequently, we perform statistical analysis on the prediction results of the test samples, then calculate the RMSE value at each level to obtain a prediction score curve. Fig. 1 presents the temperature profile of the actual sounding data at 7:15 (UTC+8) on September 8, 2019, along with the temperature profiles inverted by model A1 and model B1 from the microwave radiometer LV1 product at that time. The weather at this observation time was sunny. As can be seen from the figure, the temperature profiles inverted by the two models essentially follow the overall trend of the sounding temperature profile. However, the model B1 curve with cloud information more closely resembles the actual curve; particularly at the heights of 4-5 km and 8-10 km, the predicted value of model A1 is relatively high, while the prediction result of model B1 better reflects the actual vertical distribution of the atmospheric temperature. Fig. 2 presents the actual sounding data temperature profile at 13:15 (UTC+8) on July 21, 2019, along with the temperature profile comparison of Models A1 and B1 after the inversion of the microwave radiometer LV1 product at that time. The weather at this observation time was cloudy. As can be seen from the graph, owing to the cloudy weather, the inversion of the temperature profile was affected to some extent. The temperature profiles inverted by the two models were both inferior to the inversion results on the sunny day in Fig. 1; at 0-6 km height, the prediction results of the two models are similar. At the height of 6-10 km, however, model B1 can still maintain a more accurate prediction result, while model A1 obviously deviates from the actual result. The error between model A1 and the actual curve thus increases rapidly, with the difference being largest at 10 km, reaching 11°C. Fig. 3 presents the root mean square error (RMSE) plots for the sunny day samples from July 1, 2019 to September 10, 2019. Here, temperature values from the actual sounding data are used as the standard to calculate the RMSE of the temperature results from the Model A1 and Model B1 inversions compared to the actual values at each altitude level. As can be seen from the figure, the prediction effects of both models reach a good level.
The average RMSE of model A1 is 0.62°C, while that of model B1 is 0.45°C. In terms of altitude level, the prediction results of the two models are similar at the 0-2 km altitude; at this altitude, the mean value of the RMSE difference between the two is only 0.08°C, while at altitudes above 2 km, the RMSE of Model A1 is significantly higher than that of Model B1. The average error of the former reaches 0.82°C, which is higher than that of the latter (0.52°C). As the height increases, the difference between the RMSE of the two gradually increases, and the maximum difference reaches 0.84°C. This phenomenon is essentially consistent with the case in Fig. 1. It can be seen that, at low altitude, the prediction gap between the two models is not large. As the altitude increases, moreover, the prediction effect of model B1 with cloud information remains stable, which is a significant improvement relative to model A1 without cloud information. Fig. 4 presents the root mean square error (RMSE) of cloudy samples from July 1, 2019 to September 10, 2019. Based on the temperature values of the actual sounding data, the root mean square error (RMSE) of the temperature results inverted by models A1 and B1 against the actual values is calculated at each altitude level. As can be seen from the figure, compared with the sunny samples in Fig. 3, the error increases significantly when the weather is cloudy. The average RMSE of model A1 reaches 1.74°C, while the average RMSE of model B1 reaches 0.97°C, which is generally higher than is the case for sunny weather. This is because the appearance of clouds affects the observed brightness temperature of the microwave radiometer, meaning that the inversion results under cloudy-sky conditions exhibit a greater deviation, which is consistent with the case in Fig. 2. It can further be seen that, under cloudy weather conditions, the inversion of temperature profiles becomes more difficult, with the prediction results of model A1 (without cloud information) deviating greatly from the actual values; by contrast, model B1 (with added cloud information) is relatively stable and achieves obvious improvements. The addition of cloud information can thus effectively improve the inversion accuracy of the microwave radiometer's atmospheric temperature profile under cloud conditions. Fig. 5 presents the actual sounding data humidity profile at 13:15 (UTC+8) on August 29, 2019, along with the humidity profile comparison of Models A2 and B2 after inversion from the microwave radiometer LV1 at that time. At this observation time, the weather was sunny. As can be seen from the figure, the overall trend of the two humidity profiles inverted by the models in the low-altitude segment of 0-4 km is relatively close to the humidity profile of the sounding data. However, the inversion curve of Model B2 with added cloud information is closer to the actual curve. Moreover, in the high-altitude part above 5 km, the predicted value of Model A2 is generally lower by nearly 10%, while the maximum error is 20%. Compared with Model A2, the curve of B2 is significantly improved, more accurate, and yields prediction results that better reflect the actual vertical distribution of atmospheric humidity. Fig. 6 presents the humidity profile of the actual sounding data at 07:15 (UTC+8) on July 20, 2019, along with the humidity profiles inverted by models A2 and B2 from the microwave radiometer LV1 product at that time. Weather conditions at this observation time were cloudy.
The existence of the cloud layer has a substantial influence on the model's inversion prediction effect. At this time, there are clouds at 0.2-1.9 km and 9.4-10 km. It can be seen from the figure that a significant difference exists between the prediction results of model A2 and the actual values within the cloud layers, while model B2 (with cloud information) achieves obvious improvements, and its prediction results are similar to the actual values. At other, cloud-free heights, the prediction results of the two models are similar to those for the sunny weather in Fig. 5, and the inversion curves of models A2 and B2 are close to the actual sounding curve. Accuracy Verification of Humidity Inversion Results Fig. 7 presents the root mean square error chart of sunny weather samples from July 1, 2019 to September 10, 2019. Based on the humidity values of the actual sounding data, the root mean square error of the humidity results inverted by models A2 and B2 against the actual values is calculated at each altitude level. As can be seen from the figure, the RMSE of model B2 is significantly lower than that of model A2: the average RMSE of model A2 is 9.8%, and that of model B2 is 4.6%. For each altitude level, the RMSE of model A2 (without cloud information) increased rapidly from 3% to 13% within 0-2 km, while the RMSE of model B2 was stable at about 4%, with the highest RMSE only 5.1%. At an altitude of 2-4 km, the RMSE of both models increased, with the highest RMSE being 16.2% for model A2 and 9.2% for model B2; the effect of the latter was better. At a height of 4-6 km, the prediction results of the two models are close, with average RMSE of 11.3% and 8.4% respectively. At altitudes above 6 km, moreover, the RMSE of Model A2 is again significantly higher than that of Model B2, with the average error of the former reaching 10.8%, which is much higher than the latter's 4.6%. As the altitude increases, the RMSE of the two gradually decreases. At 10 km, the RMSE of Model A2 is 6.3% and that of Model B2 is 2.7%. This phenomenon is consistent with the individual case in Fig. 5. It can be seen that, at the medium altitude levels, the prediction effects of the two models are similar, while at low and high altitudes, the prediction effect of Model B2 (with added cloud information) remains stable; this is a significant improvement compared with the model without cloud information. Fig. 8 presents the root mean square error chart of cloudy samples from July 1, 2019 to September 10, 2019. Based on the humidity values of the actual sounding data, the root mean square error of the humidity results inverted by model A2 and model B2 against the actual values is calculated at each altitude level. As can be seen from the figure, compared with the sunny samples in Fig. 7, due to the influence of clouds on humidity, the RMSE in cloudy conditions increases significantly. The average RMSE of model A2 reaches 13.6%, while that of model B2 reaches 8.9%, which is generally higher than under sunny conditions. Through case analysis and statistical analysis, it can be seen that when clouds are present, the inversion difficulty of the humidity profile increases, and the prediction results of model A2 (without cloud information) deviate from the actual values; moreover, the inversion error within and around cloud layers evidently increases.
B2 (the model with cloud information added) can suppress this error and obtain relatively stable and realistic prediction results, which is helpful in improving the inversion accuracy of a microwave radiometer under cloud conditions. Conclusion In this paper, four network models for the inversion of atmospheric temperature and humidity profiles using microwave radiometer brightness temperature data are trained using sounding data obtained by the China Meteorological Administration from July 2018 to June 2019, combined with the MonoRTM radiation transfer model and the BP neural network inversion algorithm. The models are evaluated and tested using Level 1 microwave radiometer data and millimeter-wave cloud radar data from July 2019 to September 2019. The sounding data are used as the reference standard. After comparing several groups of experiments, the following conclusions can be drawn. In the entire troposphere, the inversion results of the two temperature models are found to be in good agreement with the measured values in clear weather. The average errors between the two models and the measured values are only 0.62°C and 0.45°C respectively, with these values increasing to 1.74°C and 0.97°C under cloudy conditions. Compared with sunny days, the inversion is more difficult and the effect is worse under cloudy conditions. The error of model A1 (without cloud information) increases by 1.12°C under these conditions, while that of model B1 (with cloud information) increases by 0.52°C, representing an improvement compared with model A1. This phenomenon is in line with expectations. Due to the influence of the cloud layer, the instability of the atmospheric temperature increases and the difficulty of inversion also increases. However, model B1 (with cloud information) is better able to adapt to this change. The inversion of the humidity profile is more difficult relative to the temperature profile. The results show that the variation trend of the two humidity models is similar to that of the sounding-measured humidity profiles, but the accuracy is not high: the average errors between the two models and the measured values are 9.8% and 4.6% respectively. Similar to the inversion results of the temperature profile, the accuracy of the humidity profile decreases under cloudy conditions, with the average errors between the two models and the measured values increasing to 13.6% and 7.58%. On the whole, model B2 (with cloud information) improves the accuracy of humidity profile inversion. Model A2 (without cloud information) exhibits a significant decline in layers with cloud, while model B2 remains stable; the latter model can thus effectively improve the prediction accuracy, with an average error reduction of 4%~5%. The results show that the BP neural network can both feasibly and effectively invert the atmospheric temperature and humidity profiles. The temperature and humidity profiles obtained via inversion exhibit a good prediction trend and prediction accuracy. In addition, adding cloud information in the process of inverting atmospheric temperature and humidity profiles helps to improve the inversion accuracy and prediction effect, particularly in layers with cloud. Due to the lack of microwave radiometer and cloud radar data, as well as the imperfection of the data quality control methods, the number of validation samples is small and invalid data points are present, which may affect the inversion results.
In future research and testing work, we will increase the number of samples and improve the data quality control methods for the microwave radiometer and cloud radar data, thereby improving the accuracy of the inversion of atmospheric temperature and humidity profiles. Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
2021-08-27T17:09:30.654Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "a836c4e156c83b84ba28acefb5d37bb32edc4764", "oa_license": "CCBY", "oa_url": "https://file.techscience.com/ueditor/files/iasc/TSP_IASC-29-3/TSP_IASC_18496/TSP_IASC_18496.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "a6d527c8b099c7dabde57b9298d68c25734fae91", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
258349686
pes2o/s2orc
v3-fos-license
Optimization of Process Parameters in Laser Powder Bed Fusion of SS 316L Parts Using Artificial Neural Networks : Additive manufacturing is rapidly evolving and revolutionizing the fabrication of complex metal components with tunable properties. Machine learning and neural networks have emerged as powerful tools for process–property optimization in additive manufacturing. These techniques work well for the prediction of a single property, but their applicability in optimizing multiple properties is limited. In the present work, an exclusive neural network is developed to demonstrate the potential of a single neural network in optimizing multiple part properties. The model is used to identify the optimal process parameter values for laser power, scan speed, and hatch spacing for the required surface roughness, relative density, microhardness, and dimensional accuracy in stainless steel parts. In-house-generated experimental data are used to train the model. The model has seven neurons in the hidden layer, which are selected using hyperparameter optimization. K-fold cross-validation is performed to ensure the robustness of the model, which results in a mean squared error of 0.0578 and an R² score of 0.59. The developed model is then used to predict the optimal process parameters corresponding to the user-required part properties. The model serves as a significant pre-processing step to identify the best parameters before printing, thus saving time and costs for repeated part fabrication. The study provides more insights into the usage of a single artificial neural network for the optimization of multiple properties of printed metal parts. Introduction Additive manufacturing (AM) is an emerging field that has been a topic of interest to many researchers in metal processing, as it offers flexibility in the designing and fabrication of intricate geometries. AM uses data from computer-aided design (CAD) software to fabricate precise shapes layer-by-layer. Laser powder bed fusion (LPBF) or selective laser melting (SLM) is an attractive manufacturing technique for aerospace, automotive, and biomedical applications [1] due to its ability to produce complex geometries, energy efficiency, and minimal waste [2]. Despite this flexibility, LPBF has limitations in process repeatability [3,4], surface quality, and dimensional accuracy, which are critical for part performance. The complex nature of LPBF, involving multiscale and multiphysics phenomena [5], makes a comprehensive understanding of the processing-structure-properties-performance (PSPP) relationship challenging. Process parameters in LPBF can be classified as preprocessing, in-process, and postprocess [6]. The in-process parameters involve laser power, scan speed, hatch spacing, and layer thickness, which are of great interest when PSPP modeling is considered. Numerous studies have reported the influence of process parameters on the mechanical properties and the need for process optimization to obtain a desired property [7][8][9][10][11][12].
Many studies have performed the optimization of SLM process parameters by applying statistical techniques such as the design of experiments (DoE) [13], response surface methodology (RSM) [14], and Taguchi design methods [15]. DoE is used to plan, conduct, and analyze the experiments to identify the relationships between the variables. The goal of DoE is to find the most important factor that affects the part performance. RSM is a combination of statistical and mathematical techniques that aims at finding an optimal combination of inputs that will produce the best outcome. It involves the fitting of a response surface to the experimental data to model the relationship. Strano et al. [16] developed a new mathematical model to predict the surface roughness at sloping angles using SLM process parameters. Cao et al. [17] introduced a surrogate model to predict the surface roughness and dimensional accuracy by integrating the whale optimization algorithm and kriging model. Despite modeling the input-output relationship for the best input parameters, the statistical methods fall short of capturing the combined effect of the input parameters. With the advent of machine learning (ML), the use of data-driven computational techniques in process parameter optimization has increased significantly. Artificial neural networks (ANN) are a subset of deep learning algorithms that can analyze a large amount of data and identify the patterns and relationships between the input variables and the output. The development of pre-built libraries and frameworks, continuous advancements in techniques, and ease of deployment made ML algorithms versatile tools in solving a wide range of problems. In additive manufacturing, ML algorithms have been used for tasks such as topology optimization [18,19], in-situ process monitoring [20][21][22], and process parameter optimization [23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38][39]. ML algorithms have been employed in understanding the PSPP relationships through exploring the feasible process design space and its optimization, which is challenging through experimentation [23]. One of the earliest works using neural networks in process parameter optimization was reported by Shen et al. [24]. They developed an ANN model to predict the part density for a nylon-12 polymer and suggested that the training of the network can be accelerated using batch training. Wang et al. [25] modeled the effects of process parameters on the density of HBI (composite of polystyrene) parts prepared by SLM using ANN and demonstrated the capability of neural networks in modeling the PSPP relationship. Rong-Ji et al. [26] extended the modeling using a neural network combined with a genetic algorithm (GA) to identify the optimal process parameters, such as layer thickness, hatch spacing, laser power, scanning speed, work surrounding temperature, interval time, and scanning mode, to obtain minimal shrinkage in HBI alloys. Nguyen et al. [27] developed an optimization system to find optimal process parameters for Ti-6Al-4V in SLM with an as-built part density close to 98%. Lo et al. [28] employed ANNs coupled with numerical simulations to create process maps relating the melt pool temperature and melt pool depth to hatch spacing and scan length for SS 316L and verified their optimality through validation experiments. Srinivasan et al.
[29] developed a procedure coupling physics-based process modeling with ML and optimization to find a suitable AM processing space for Ti-6Al-4V alloys. Many researchers have extended the use of ML algorithms to the prediction of one or two properties [30][31][32] and compared the performance of different ML algorithms, such as support vector machines (SVM), random forest (RF), k-nearest neighbors (KNN), and XGBoost [33][34][35][36][37][38][39]. Most of these studies modeled the effect of the process parameters individually on the part properties. Chia et al. [40] demonstrated that AM is effectively a multi-objective optimization problem and that the key issue in multi-objective optimization is the lack of accurate PSPP models that describe the response outputs as a function of the input variables. There is a need to obtain an optimal process window for multiple properties of as-built parts without restricting this to a combination of one or two. Fe-Perdomo et al. [41] analyzed various ML approaches for surface roughness and other mechanical properties such as hardness, tensile strength, and relative density. They developed different ML models with hyperparameter optimization for each property, which is tedious and time-consuming. Many of the studies listed above are restricted to a combination of one or two output properties while modeling the effect of process parameters, thus indicating a need for extension to multiple part properties. In this study, we develop an exclusive ANN model to optimize process parameters such as laser power, scan speed, and hatch spacing for multiple part properties such as relative density, surface roughness, microhardness, and dimensional error with a single neural network. The developed model is then used to predict the above properties for a feasible design space of processing parameters. Experimental Setup For this study, 23 sets of process parameters are selected based on a literature review [42] to fabricate the specimens. Table 1 shows the selected values of laser power, scan speed, and hatch spacing for each set. The layer thickness was maintained constant at 30 µm, and the energy density for each set was calculated as E_v = LP / (SS × HS × LT), where E_v (J/mm³) is the volumetric energy density (VED), LP (W) is the laser power, SS (mm/s) is the scan speed, HS (µm) is the hatch spacing, and LT (µm) is the layer thickness, with HS and LT converted to mm in the calculation [43]. The printer had a workspace of 125 mm × 125 mm × 125 mm and used a 400 W laser heat source. Each specimen's dimensions were 8 mm × 8 mm × 6 mm. A wire electric discharge machine was used to separate the printed specimens from the build plate, and the parts were sonicated in isopropyl alcohol to remove any loose unmelted powder. The relative density of the specimens was calculated using the Archimedes principle, as reported by Guzman et al. [44], using a precision balance. For surface roughness measurement, an optical microscope (Keyence Corporation of America, Itasca, IL, USA) was used. To ensure consistent measurements, a sample holder was used to align the edges of each specimen. Area surface roughness (S_a) was measured four times and the mean values are reported for each specimen.
Figure 1 shows the optical microscopy images of two representative samples (No. 4 and 10) and the corresponding roughness maps used for the calculation of the roughness values. The un-melted powder particles can be clearly seen on the surfaces of the samples. Similar images were recorded for all samples and the surface roughness values were computed. The microstructure of 3D-printed stainless steel has been extensively studied as a function of the processing conditions [1,45,46]. A Vickers microhardness tester (Pace Technologies, Tucson, AZ, USA) was used to measure the microhardness values of the as-built specimens. Dimensional error for each specimen was measured using a micrometer (Mitutoyo America Corporation, Aurora, IL, USA). The absolute difference between the measured and the designed value (8 mm) was measured three times and the mean values are reported in Table 1. Figure 2 shows the scatter plots of each property with respect to VED. All the results with the corresponding process parameters were tabulated and saved in a csv file for feeding into the ANN model.
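As a brief illustration of the data assembly step described above, the tabulated parameters and measured properties can be loaded from the csv file, the volumetric energy density recomputed for each row, and the input/output arrays for the network prepared. The file name and column names are hypothetical, not taken from the paper; only the constant 30 µm layer thickness and the 23-row size are from the text.

```python
import pandas as pd

df = pd.read_csv("lpbf_experiments.csv")  # hypothetical file and column names

# Volumetric energy density E_v = LP / (SS * HS * LT), with HS and LT in mm
df["ved"] = df["laser_power"] / (
    df["scan_speed"] * (df["hatch_spacing"] / 1000.0) * (30.0 / 1000.0))

X = df[["laser_power", "scan_speed", "hatch_spacing"]].to_numpy()
y = df[["relative_density", "surface_roughness",
        "microhardness", "dimensional_error"]].to_numpy()
print(X.shape, y.shape)  # expected (23, 3) and (23, 4)
```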
Artificial Neural Network Artificial neural networks (ANNs) are deep learning algorithms modeled after the structure and function of the human brain. ANNs consist of interconnected nodes, called neurons, which work together to process information, recognize patterns and relationships in input data, and make predictions or decisions. In this research, an ANN with a single hidden layer was developed using the Keras library in the tensorflow framework [47]. The architecture of the ANN is shown in Figure 3. The input layer consists of 3 neurons for our inputs: laser power, scan speed, and hatch spacing. The output layer consists of 4 neurons, which give the output for surface roughness, microhardness, relative density, and dimensional error. The data fed to the neural network lie within different ranges and are normalized to simplify the training process. Therefore, all inputs and outputs to the network are transformed using the transformation X_t = (X − X_min)/(X_max − X_min), where X_t is the transformed value, X is the original value, and X_min and X_max are the minimum and maximum values in that class. The ANN consists of 7 neurons in the hidden layer, which are found using hyperparameter optimization. The activation function applies a transformation to the output of the neurons before propagating it to the next layer. The Rectified Linear Unit (ReLU) activation function is chosen as it is widely used to predict continuous variables as in regression [48]. It can be expressed as ReLU(x) = max(0, x). During the training of the ANN, the mean squared error (MSE) is used as the loss function. It is one of the most used error metrics in regression problems. The loss function is used with the Adam optimizer algorithm [49]. The loss function is calculated as E_loss = (1/N) Σ_{i=1}^{N} (ŷ_i − y_i)², where E_loss is the loss, ŷ is the predicted value, y is the actual value, and N is the total number of samples.
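A minimal Keras sketch of the network described above: 3 inputs, a single hidden layer of 7 ReLU neurons (the size selected by the later hyperparameter search), 4 linear outputs, the Adam optimizer, and an MSE loss, with the min-max scaling mirroring the transformation X_t = (X − X_min)/(X_max − X_min). Training settings such as epochs and batch size are not reported in the paper and are assumptions here.

```python
import numpy as np
from tensorflow import keras

def min_max_scale(x):
    """Column-wise X_t = (X - X_min) / (X_max - X_min)."""
    x_min, x_max = x.min(axis=0), x.max(axis=0)
    return (x - x_min) / (x_max - x_min), x_min, x_max

def build_model(n_hidden=7):
    model = keras.Sequential([
        keras.Input(shape=(3,)),                           # laser power, scan speed, hatch spacing
        keras.layers.Dense(n_hidden, activation="relu"),   # single hidden layer
        keras.layers.Dense(4, activation="linear"),        # density, roughness, hardness, dim. error
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# X, y: scaled input/output arrays of shape (n_samples, 3) and (n_samples, 4)
# model = build_model()
# model.fit(X, y, epochs=500, batch_size=4, verbose=0)     # assumed training settings
```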
The error metrics used in the study are the MSE and the R² score. The MSE for the model is obtained during the training and testing of the network, as it is used as the loss function. The R² score is a measure of the fit of a model. It is a robust metric for evaluating the performance of any statistical model. For better performance, a model should produce a low MSE with an R² score close to 1. The MSE can never be negative as it is a square of deviations, but the R² score can be negative, indicating a poor fit to the data. The R² score is calculated using the equation R² = 1 − Σ_i (y_i − ŷ_i)² / Σ_i (y_i − ȳ)², where ȳ is the mean of the actual values. Hyperparameter Optimization and K-Fold Cross-Validation Hyperparameter optimization refers to the process of tuning the parameters of a neural network model to achieve optimal performance. This is done by experimenting with different combinations of hyperparameters, such as the number of hidden layers, the number of neurons per layer, the learning rate, and the regularization term, to find the best set of values for a particular problem. In this study, the hyperparameter optimization is done for the number of neurons in the hidden layer. K-fold cross-validation, on the other hand, is a model evaluation technique that helps to prevent overfitting. In k-fold cross-validation, the original dataset is divided into k smaller subsets, or folds, and the model is trained on k-1 of these folds and evaluated on the remaining one. This process is repeated k times, with each fold used as the evaluation set once. The performance across all k folds is then averaged as an estimate of the model's performance on unseen data [50]. Seven-fold cross-validation (7 folds give 80% as training data and 20% testing data for every fold) is applied in the current study to produce a more robust model. By combining hyperparameter optimization with k-fold cross-validation, researchers can more accurately evaluate the performance of a neural network and ensure that it generalizes well to new data.
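A sketch of the neuron-count search combined with k-fold cross-validation described above: for each candidate hidden-layer size, the model is trained on k−1 folds and evaluated on the held-out fold, and the fold-averaged validation MSE is used to pick the best size. It reuses the hypothetical build_model helper from the previous sketch; the number of epochs is an assumption.

```python
import numpy as np
from sklearn.model_selection import KFold

def cv_score(X, y, n_hidden, n_folds=7, epochs=500):
    """Average validation MSE over n_folds for a given hidden-layer size."""
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=0)
    losses = []
    for train_idx, val_idx in kf.split(X):
        model = build_model(n_hidden)            # helper from the previous sketch
        model.fit(X[train_idx], y[train_idx], epochs=epochs, verbose=0)
        losses.append(model.evaluate(X[val_idx], y[val_idx], verbose=0))
    return float(np.mean(losses))

# scores = {n: cv_score(X, y, n) for n in range(4, 15)}   # neurons varied from 4 to 14
# best_n = min(scores, key=scores.get)                    # 7 in the paper
```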
Process Parameter Optimization Algorithm For process parameter optimization, an algorithm is developed as shown in Figure 4. First, an ANN with a single hidden layer is developed and trained using the experimental data, and the performance of the ANN is evaluated. The ANN is then used to make predictions on generated test data created by combining different levels of the processing parameters, where the laser power ranges from 150 to 290 W, the scan speed from 650 to 890 mm/s, and the hatch spacing from 111 to 129 µm. Based on the user-required properties, the predictions of relative density, surface roughness, microhardness, and dimensional error from the ANN are filtered and compared with the user requirements. The indices of the data satisfying the requirements are noted from the prediction set, and the optimal process parameter sets are found by indexing the generated test data set with the indices obtained.
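The optimization algorithm described above can be sketched as a dense grid over the stated parameter ranges, batch prediction with the trained network, and boolean filtering against the user-required property thresholds (the thresholds below are the user requirements quoted later in the paper: density > 99%, roughness < 10.5 µm, dimensional error < 20 µm, hardness > 260 HV). The grid step sizes, output column order, and the scale_like_training helper are assumptions for illustration.

```python
import numpy as np

# Grid over the stated ranges: laser power 150-290 W, scan speed 650-890 mm/s,
# hatch spacing 111-129 um (step sizes are assumed).
lp, ss, hs = np.meshgrid(np.arange(150, 291, 5),
                         np.arange(650, 891, 10),
                         np.arange(111, 130, 2), indexing="ij")
grid = np.column_stack([lp.ravel(), ss.ravel(), hs.ravel()])

# preds columns assumed ordered as: density (%), roughness (um),
# hardness (HV), dimensional error (um)
# preds = model.predict(scale_like_training(grid))

def feasible(preds, min_density=99.0, max_roughness=10.5,
             max_dim_error=20.0, min_hardness=260.0):
    """Boolean mask of grid rows meeting the user-required properties."""
    return ((preds[:, 0] > min_density) &
            (preds[:, 1] < max_roughness) &
            (preds[:, 3] < max_dim_error) &
            (preds[:, 2] > min_hardness))

# optimal_parameters = grid[feasible(preds)]
```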
Hyperparameter Optimization and K-Fold Cross-Validation The hyperparameter selected for the current study is the number of neurons in the hidden layer. The neurons are varied from 4 to 14 and are cross-validated using seven-fold cross-validation. Figure 5 shows the hyperparameter optimization and seven-fold cross-validation results for the model. In Figure 5a, the MSE values during training and validation are reported for each set of hyperparameters. As the number of neurons increased, the training loss decreased, indicating that the model was able to learn the training data by reducing the error. However, if we closely examine the validation error, it was reduced initially until seven neurons and then started to increase. This indicates that our model is overfitting the training data after seven neurons. An overfit model has a low training error, as it memorizes the training data, and a high validation error, as it behaves poorly on the validation set. The R² score obtained was 0.594 at seven neurons, indicating this as the optimal number of neurons in the hidden layer. Figure 5b shows the seven-fold cross-validation of the developed ANN model. In each fold, the model is trained on the training set, which reduces the training error gradually with epochs. The validation error is initially high, as it is unseen data, and gradually reduces as the model learns the relationship from the training data and minimizes the error. When a new fold of data is introduced to the model, both training and validation errors show a sharp increase and slowly start to decrease, which indicates that the model is learning the new data in the new fold. The same process is seen in subsequent folds, where the model's training and validation errors reduce and converge at the end. However, at the end of the third and fifth folds, the validation error was more than the training error, indicating overfitting of the network. Despite overfitting in two folds, in the remaining folds of validation, the error between the training and the validation is negligible and the error can be averaged to produce an acceptable value. The average MSE of the cross-validation is found to be 0.058, indicating the fair performance of the model on the experimental data. Overfitting can be addressed using techniques such as early stopping, dropout, and regularization [27]. Performance of the ANN Once the model is hyperparameter-optimized along with cross-validation, the model is saved to preserve the weights. From the experimental data, 20% is randomly selected to test the model. Predictions are made for relative density, roughness, dimensional error, and microhardness, and their corresponding comparisons with actual values are reported in Figure 6. From Figure 6a, only for sample 3 does the predicted value of relative density closely match the actual value. The remaining three samples showed a deviation from their actual density values. For relative densities of more than 99%, the ANN model predicted closer values than for densities less than 98%, which could be attributed to the limited availability of process parameter data points corresponding to densities less than 98%. Figure 6b shows the predictions for surface roughness, indicating close predicted values to the actual values. Figure 6c compares the values of dimensional error. The predicted values are close to the measured values for three samples; sample 3 had a significant deviation from the actual value. Microhardness values are plotted in Figure 6d. Only sample 2 had a predicted value close to the actual value from [42]. The rest of them had deviations from the actual values; however, the deviations below 253 HV are smaller and those above 254 HV are much higher. Overall, the model accurately predicted the surface roughness and gave decent predictions of the dimensional error. Relative densities above 99% and microhardness values below 253 HV had smaller deviations. Optimization of Process Parameters and Performance The developed exclusive neural network is used to predict the optimal process parameters by making predictions on the generated test data. The model parameters for the ANN are saved after cross-validation, and the same model is used to make the predictions on the generated test data. To assess the performance of our model, the predicted data are compared with experimental data from the literature. Table 2 shows the comparison of the predicted results from the model and the literature data. For relative density, the deviation (%) from the experimental value in the literature [35] was very low, indicating that the model can predict relative densities close to 99% accurately, as mentioned earlier.
The predicted values of surface roughness had a deviation of 2.68% from the experimental literature value [51] and can fairly be used to estimate the property. The microhardness prediction had a 3.07% deviation from the literature [11]. As discussed in the previous section, microhardness values of more than 254 HV had significant deviations, indicating a need for more data in this region. The predictions from the model are saved and plots are generated for the required parameters to understand their effects and obtain the optimal processing parameters. The user requirement is set to have an as-built part relative density of more than 99% with acceptable ranges for surface roughness, dimensional error, and microhardness. The predicted data are filtered using these user requirements, and the indices of the filtered data satisfying the requirements are used to index the generated input data set for the optimal process parameters. Hatch spacing was held at 127.5 µm and a variation in laser power and scan speed was considered to generate the optimal processing window. The contour plots (Figure 7) show the effects of the processing parameters on each individual property. The optimal processing window is found by plotting the contour lines of all user-required properties on a single plot. Figure 8 shows the optimal processing region (shaded area) that satisfies all the user requirements. This region satisfies the user requirements for a part density of more than 99%, roughness less than 10.5 µm, dimensional error less than 20 µm, and microhardness more than 260 HV.
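The construction of the optimal processing window can be sketched as follows: hold hatch spacing at 127.5 µm, predict the four properties over a laser power × scan speed grid, and shade the region where all user-required thresholds are met. Grid steps and plotting details are assumptions for illustration, and the feasible-mask helper from the earlier sketch is reused.

```python
import numpy as np
import matplotlib.pyplot as plt

lp = np.arange(150, 291, 5)           # laser power, W
ss = np.arange(650, 891, 10)          # scan speed, mm/s
LP, SS = np.meshgrid(lp, ss)
grid = np.column_stack([LP.ravel(), SS.ravel(),
                        np.full(LP.size, 127.5)])   # hatch spacing held at 127.5 um

# preds = model.predict(scale_like_training(grid))  # shape (n_points, 4)
# mask = feasible(preds).reshape(LP.shape)          # helper from the earlier sketch

# plt.contourf(LP, SS, mask.astype(float), levels=[0.5, 1.0], colors=["orange"])
# plt.xlabel("Laser power (W)"); plt.ylabel("Scan speed (mm/s)")
# plt.title("Region satisfying all user-required properties")
# plt.show()
```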
Conclusions To find the optimal process parameters for user-required part properties in laser powder bed fusion, an optimization model based on experimental data is developed using artificial neural networks. An exclusive neural network is developed to optimize the laser power, scan speed, and hatch spacing for the desired relative density, surface roughness, dimensional error, and microhardness. It was found that the developed model resulted in a decent R² score of 0.59. The predicted values were compared to the experimental values in the literature and indicated a close match. The results demonstrate the ability of an exclusive neural network in modeling the process parameter-property relationships for multiple properties. The developed model can find the optimal processing parameters that satisfy the user requirement for customized part properties. Thus, it reduces the preprocessing time and cost significantly. The following conclusions are drawn based on the current study. 1. Hyperparameter optimization and cross-validation are crucial steps in developing a robust prediction model. The combination can reduce the model loss and enhance the performance on unseen data. 2. Neural networks are highly sensitive to the training data. To have comparable performance for every property, the training data must contain inclusive data points within the range. Having less data in a given range would affect the performance of the predictions. 3. Finding the optimal parameters for the laser powder bed fusion process requires an understanding of the combined effect of the process parameters on the part properties. ANN is a powerful tool for modeling this combined relationship and obtaining the optimal process parameters within the given range of data.
Figure 1.Optical microscopy images and surface roughness maps of (a) sample 4 and (b) sample 10. Figure 1 . Figure 1.Optical microscopy images and surface roughness maps of (a) sample 4 and (b) sample 10. Figure 3 . Figure 3. Artificial neural network with a single hidden layer. Figure 3 . Figure 3. Artificial neural network with a single hidden layer. Figure 4 . Figure 4. Algorithm for obtaining optimal process parameters. Figure 4 . Figure 4. Algorithm for obtaining optimal process parameters. Metals 2023 , 16 Figure 5 . Figure 5. (a) MSE values during training and validation with respect to number of neurons in hidden layer for hyperparameter optimization; (b) 7-fold cross-validation. Figure 5b shows the Figure5bshows the seven-fold cross-validation of the developed ANN model.In each fold, the model is trained on the training set, which reduces the training error gradually with epochs.The validation error is initially high as it is unseen data and gradually reduces as the model learns the relationship from the training data and minimizes the error.When a new fold of data is introduced to the model, both training and validation errors show a sharp increase and slowly start to decrease, which indicates that the model is learning the new data in the new fold.The same process is seen in subsequent folds, where the model's training and validation errors reduce and converge at the end.However, at the end of the third and fifth folds, the validation error was more than the training error, indicating overfitting of the network.Despite overfitting in two folds, in the remaining folds of validation, the error between the training and the validation is negligible and the error can be averaged to produce an acceptable value.The average MSE of the crossvalidation is found to be 0.058, indicating the fair performance of the model on the experimental data.Overfitting can be addressed using techniques such as early stopping, dropout, and regularization[27]. Figure 5 . Figure 5. (a) MSE values during training and validation with respect to number of neurons in hidden layer for hyperparameter optimization; (b) 7-fold cross-validation. Figure Figure5bshows the seven-fold cross-validation of the developed ANN model.In each fold, the model is trained on the training set, which reduces the training error gradually with epochs.The validation error is initially high as it is unseen data and gradually reduces as the model learns the relationship from the training data and minimizes the error.When a new fold of data is introduced to the model, both training and validation errors show a sharp increase and slowly start to decrease, which indicates that the model is learning the new data in the new fold.The same process is seen in subsequent folds, where the model's training and validation errors reduce and converge at the end.However, at the end of the third and fifth folds, the validation error was more than the training error, indicating overfitting of the network.Despite overfitting in two folds, in the remaining folds of validation, the error between the training and the validation is negligible and the error can be averaged to produce an acceptable value.The average MSE of the cross-validation is found to be 0.058, indicating the fair performance of the model on the experimental data.Overfitting can be addressed using techniques such as early stopping, dropout, and regularization[27]. Figure 6 . 
Figure 6.Model performance for (a) relative density, (b) surface roughness, (c) dimensional error, (d) microhardness.Triangles represent the actual values and circles represent the predicted values. Figure Figure 6c compares the values of dimensional error.The predicted values are close to the measured values for three samples.Sample 3 had a significant deviation from the actual value.Microhardness values are plotted in Figure 6d.Only sample 2 had a predicted value close to the actual value from[42].The rest of them had deviations from the actual values.However, the deviations below 253 are smaller and those above 254 are much higher.Overall, the model accurately predicted the surface roughness and gave decent predictions in dimensional error.Relative densities above 99% and microhardness values below 253 had less deviations. Figure 6 . Figure 6.Model performance for (a) relative density, (b) surface roughness, (c) dimensional error, (d) microhardness.Triangles represent the actual values and circles represent the predicted values. Figure Figure 6c compares the values of dimensional error.The predicted values are close to the measured values for three samples.Sample 3 had a significant deviation from the actual value.Microhardness values are plotted in Figure6d.Only sample 2 had a predicted value close to the actual value from[42].The rest of them had deviations from the actual values.However, the deviations below 253 are smaller and those above 254 are much higher.Overall, the model accurately predicted the surface roughness and gave decent predictions in dimensional error.Relative densities above 99% and microhardness values below 253 had less deviations. Figure 8 . Figure 8. Contour plots showing the effect of laser power and scan speed on the required properties with optimal processing region (in orange). Author Contributions: Conceptualization, S.T., S.H.J., and B.B.R.; methodology, S.T.; formal analysis, S.T., S.H.J., and B.B.R.; resources, G.K.; writing-original draft preparation, S.T.; writing-review and editing, S.T., S.H.J., and B.B.R.; supervision, G.K.; project administration, G.K.; funding acquisition, G.K. All authors have read and agreed to the published version of the manuscript.Funding: This research was funded by a University of Texas System STARs award. Figure 8 . Figure 8. Contour plots showing the effect of laser power and scan speed on the required properties with optimal processing region (in orange). Table 1 . List of process parameters and corresponding experimental values. Table 2 . Comparison of model predictions and experimental values in literature.
How are IF-Conditional Statements Fixed Through Peer Code Review?

SUMMARY Peer code review is key to ensuring the absence of software defects. To reduce review costs, software developers adopt code convention checking tools that automatically identify maintainability issues in source code. However, these tools do not always address the maintainability issues of a particular project. The goal of this study is to understand how code review fixes conditional statement issues, which are among the most frequent changes in software development. We conduct an empirical study of if-statement changes through code review. Using review requests in the Qt and OpenStack projects, we analyze changes of if-conditional statements that are (1) requested to be reviewed and (2) revised through code review. We find that the most frequently changed symbols are "( )", ".", and "!". We also find project-specific fixing patterns for improving code readability by association rule mining; for example, the "!" operator is frequently replaced with a function call. These rules are useful for improving a coding convention checker tailored to the projects.

Introduction

Peer code review, a manual inspection of code changes by developers who did not create them, is a well-established practice to ensure the absence of software defects. Many open source software (OSS) and commercial projects have adopted peer code review. Code review requires much time [1]; it consumes about 50% of the overall software development resources [2]. For example, patch authors tend to spend a long time revising their own patches due to technical issues [1]. Indeed, 75% of discussions in code review are about maintainability issues [3], [4]. To reduce the review cost, software developers adopt code convention checking tools that automatically identify general maintainability issues in source code. However, these tools only cover general issues of particular programming languages [5]. To detect maintainability issues completely, reviewers may need to conduct code reviews manually with their own eyes. The goal of our study is to understand how patch authors fix maintainability issues based on reviewers' feedback. To understand maintainability issues, we focus on if-statement changes. Previous studies reported that, since changes to if-statements occur frequently [6], [7], improving if-statement readability is important [8]. Tan et al. [9] also reported that binary operators in conditional expressions are among the most frequently changed elements in a programming contest. While if-statements are considered important, little is known about how they are fixed in practical software development. First, as a preliminary study, we identify frequently changed symbols in the if-conditional statements of submitted patches (Sect. 4). Second, we discover tacit fixing patterns between submitted patches and merged patches by using association rule mining (Sect. 5). As a case study, we target 69,325 patches in the Qt project and 60,197 patches in the OpenStack project. This paper is an extension of our previous study [10] in two ways. First, we analyze all symbols of the programming languages (e.g., arithmetic, logical, and relational operators, and string and number literals). Second, the previous study covered only the Qt* project, written in C++; this paper adds the OpenStack** project, written in Python, to obtain language-independent results. This paper is structured as follows. Section 2 describes the background to our study.
Section 3 introduces our target if-statement changes. Section 4 details an empirical study analyzing the changes in code review requests, and Sect. 5 presents our analysis of the changes made through code review. Section 6 considers the validity of our empirical study. Section 7 introduces related work. Section 8 concludes our study and discusses future work.

Background

Various dedicated tools exist for managing the peer code review process. Gerrit Code Review*** and ReviewBoard**** are commonly used by OSS practitioners to receive lightweight reviews. Technically, these code review tools handle patch submission triggers, automatic tests, and manual reviewing to decide whether or not a patch should be integrated into a version control system. Figure 1 shows an overview of the code review process in Gerrit Code Review, the code review management system used by our target projects, Qt and OpenStack, both large OSS projects:
1. A patch author submits a patch to Gerrit Code Review. We define the submitted patch as Patch 1.
2. The reviewers verify Patch 1. They send feedback and ask the author to revise the patch if it has any issues.
3. The patch author revises Patch 1 and submits the revised patch as Patch 2. The revision process may be repeated n times. We define the last patch as Patch n.
4. Once the patch author completely addresses the concerns of the reviewers, the patch is integrated into the project repository.
The validity of code review has been demonstrated by many prior studies [11]-[15]. Raymond et al. [16] discussed how code review is able to detect crucial issues in large-scale code before release. These prior studies examined the relationship between code review and post-release software defects, anti-patterns in software design, and security vulnerability issues. While code review is effective in improving the quality of software artifacts, it requires a large amount of time and many human resources [2]. Rigby et al. [17] found that six large-scale OSS projects needed approximately one month to integrate a patch. Reviewers may disagree with one another and take even longer for discussion [18]. The process also requires identifying appropriate reviewers for each patch, and various methods have been proposed to select reviewers based on their experience [19]-[23] and the complexity of code changes [1]. Most published code review studies focus on review processes or the reviewers' communication; they do not focus on source code changes made through the review. We focus on source code changes, especially changes to if-statements, to clarify how submitted patches were revised, because if-statements are the most frequently changed statements [6], [7], [9].

Motivating Example: Conditional Statements Fixed through Peer Code Review

This section introduces if-statements fixed through code review. We investigate the changes between Patch 1 and Patch n in Fig. 1. This research targets the Qt and OpenStack projects: Qt is a cross-platform application framework, and OpenStack is a software platform for cloud computing. Table 1 shows the numbers of reviews, the numbers of if-statements changed in submitted patches (Patch 1), and the numbers of if-statements fixed through code review. We sample 380 patches from the original Qt review dataset and then manually read them to identify typical fixing patterns. The sample size was chosen to estimate the proportion of patterns within 5% bounds at a 95% confidence level.
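The 380-patch sample mentioned above matches the standard sample-size calculation for estimating a proportion. As a minimal sketch (the exact rounding and z-value the authors used are assumptions), Cochran's formula with a finite population correction over the 69,325 Qt patches gives a figure close to the reported 380:

```python
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Number of patches to sample so that a proportion is estimated
    within +/- margin at the confidence level implied by z (1.96 ~ 95%)."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # Cochran's formula, infinite population
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n)

print(sample_size(69_325))  # about 383; the paper reports 380
```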
Listings 1 through 3 show concrete examples of if-statement fixing patterns between Patch 1 and Patch n obtained from the Qt project. In Listing 1, a logical AND operator ("&&") is deleted by splitting the condition into two if-statements†. In Listing 2, a pair of a logical NOT operator ("!") and a logical AND operator is replaced with a logical OR operator ("||") and an additional NOT operator†. In Listing 3, equality operators ("!=" and "==") are replaced with function calls††. This manual reading shows that code review often changes operators in the conditional expressions of if-statements. Based on this observation, we characterize fixing patterns of if-statements using the numbers of symbols changed through code review. Section 4 details a preliminary study analyzing the changes in Patch 1, and Sect. 5 finds fixing patterns between Patch 1 and Patch n.

Preliminary Study: Source Code Changes in the Review Request

This section counts the symbols changed in Patch 1 to understand which symbols are frequently used in if-statements.

Approach

This study collects changes of if-statements in Patch 1 obtained from the review management system. Each patch is represented in unified diff format. Although all versions of the source code are available in the source code repositories of the projects, we do not use the original source code, in order to shorten the time needed to collect analysis data. We analyze if-statements whose conditional expression is written on a changed line. We excluded conditional expressions spanning multiple lines from the analysis; this filtering is not a strong limitation, since multi-line if-statements appear in only 425 patches (0.6%) of the Qt project and 553 patches (0.9%) of the OpenStack project. Our dataset includes 5,778 (Qt) and 7,725 (OpenStack) if-statements changed in the submitted patches. In addition, each review has a median of 4 changes between Patch 1 and Patch n in both the Qt and OpenStack projects. We count the number of each symbol in the condition of each if-statement changed in Patch 1. We employ ANTLR [24] with its official grammars for C++ and Python††† to recognize reserved words and symbols used in conditional expressions. Note that the analyzed if-statements include else-if statements (C++) and elif statements (Python) in addition to regular if-statements. Figure 2 describes the process of data extraction. To analyze the frequency of co-occurrence of symbols, we apply closed frequent itemset mining. The mining provides a Support metric for a set of items, representing its relative frequency with respect to the total number of transactions, i.e., if-statements changed in the patches. We employ the arules package [25] as an implementation of the mining algorithm. We extract item sets with at most five elements and a Support score of at least 0.01, since the mining otherwise produces a huge number of item sets. If an item set is a superset of another item set and has the same Support score, then the algorithm keeps only the larger one. For example, the set {"LeftParen", "RightParen"} is extracted and its subset {"LeftParen"} is filtered out if the parentheses always appear as pairs. Figure 3 and Fig. 4 show the rate of symbols that appeared in more than 5% of all if-statements of Patch 1. Frequent itemset mining yields 477 (Qt) and 162 (OpenStack) item sets whose Support score is greater than 0.01.
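The extraction step above relies on ANTLR with the official C++ and Python grammars. The sketch below is only a lightweight stand-in for the Python side: it uses the built-in tokenize module to count symbols in the conditions of single-line if/elif statements found in the added lines of a unified diff. The diff text, the symbol labels, and the keyword mapping are illustrative assumptions, not the authors' implementation.

```python
import io
import re
import tokenize
from collections import Counter

# Labels loosely following the symbol names in the paper's figures (assumed mapping).
OP_NAMES = {"(": "LeftParen", ")": "RightParen", ".": "Dot",
            "[": "LeftBracket", "]": "RightBracket",
            "==": "Equal", "!=": "NotEqual"}
KEYWORD_NAMES = {"not": "Not", "and": "And", "or": "Or", "in": "In", "is": "Is"}

IF_LINE = re.compile(r"^\+\s*(?:el)?if\s+(?P<cond>.+):\s*$")

def condition_symbols(diff_text):
    """Yield a Counter of symbol labels for each single-line if/elif condition
    found in the added ('+') lines of a unified diff of Python code."""
    for line in diff_text.splitlines():
        match = IF_LINE.match(line)
        if not match:
            continue
        counts = Counter()
        readline = io.StringIO(match.group("cond")).readline
        try:
            for tok in tokenize.generate_tokens(readline):
                if tok.type == tokenize.OP:
                    counts[OP_NAMES.get(tok.string, tok.string)] += 1
                elif tok.type == tokenize.NAME:
                    counts[KEYWORD_NAMES.get(tok.string, "Id")] += 1
                elif tok.type == tokenize.STRING:
                    counts["String"] += 1
                elif tok.type == tokenize.NUMBER:
                    counts["Number"] += 1
        except tokenize.TokenError:
            continue  # condition spans several lines; such cases are excluded, as in the paper
        yield counts

diff = """\
+        if not values.get('name'):
+        elif status == "ACTIVE":
"""
for symbols in condition_symbols(diff):
    print(dict(symbols))
```

A full reimplementation would instead walk ANTLR parse trees, which also covers the C++ side and nested expressions.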
Due to the limited space, Table 3 and Table 4 show only the frequent item sets whose Support score is greater than 0.10. (In the tables, an entry such as if a == "a": illustrates a string literal; (N/A) means the symbol is unavailable in the programming language of the project; symbols marked ** are used again in Sect. 5.) For example, Id 1 in Table 3 shows "LeftParen", "RightParen", meaning that "(" and ")" are included at the same time in the if-statement.

Parentheses are the most frequent symbols in the changed if-statements. Qt: Figure 3 shows that parentheses ("LeftParen" and "RightParen") are likely to be included in an if-statement. Note that these numbers do not include the opening and closing parentheses of the if-statement itself. While parentheses often control the order of evaluation in a condition, they also appear in calls to functions representing some conditions. An example of such a function call is shown in Listing 4†. OpenStack: Figure 4 shows that parentheses also frequently appear in OpenStack. However, the frequency of parentheses is lower than in the Qt project, because some functions in C++ correspond to reserved words in the Python standard library. For example, the "std::find" function in C++ is semantically similar to the "in" reserved word in Python. Also, a patch author might define a function instead of using "is" to compare objects.

Result

"Dot" and "Not" are frequently used in the if-statements. "Dot" is used to access a member of an object and is frequent in both projects. "Not" is used to invert a logical result. In Table 3 and Table 4, approximately 30% of if-statements use "Not" in both projects. The "Not" reserved word or symbol is often used to detect the failure of a function execution, as in Listing 4, or to use the output of a function, as in Listing 5††. Qt: In Table 3, the Support score of "Dot" together with parentheses is 30.1% (Id 4) out of an overall "Dot" support of 34.3% (Id 2); the Qt project therefore mostly used "Dot" to call another object's function. OpenStack: "Dot" is the most frequently changed symbol in if-statement changes, and nearly half of the if-statements use "Dot" (Id 1 in Table 4).
† https://codereview.Qt-project.org/#/c/1368/1/src/plugins/Qt4projectmanager/Qt-s60/symbianQtversion.cpp
†† https://codereview.Qt-project.org/#/c/2481/1/src/declarative/items/qsggridview.cpp

Analysis: Source Code Fixes after Review

This section extracts symbol changes as fixing patterns between Patch 1 and Patch n by using association rule mining.

Approach

This analysis describes how a patch author fixed if-statements through the review. In this section, we identify the if-statements that the patch author fixed from Patch 1 to Patch n (3,544 for OpenStack). For example, Fig. 5 shows one "&&" and two "!" in Patch 1. After the review from Patch 1 to Patch n, the patch author added "||", "(" and ")" in Patch n, and deleted "&&" and one "!". Finally, we compare the difference in the number of symbols between Patch 1 and Patch n and record items such as "And deleted" (an And symbol is deleted between Patch 1 and Patch n), "Or added" (an Or symbol is added between Patch 1 and Patch n), "LeftParen added", "RightParen added", and "Not deleted" to capture the changed contents. Using this dataset, we conducted an empirical study of the symbols fixed through code review using association rule mining, a popular technique for the generation of usage rules [26], [27]. Association rule mining extracts relationships between two or more items as association rules from a large number of item combinations.
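To make the comparison step concrete, the sketch below turns two symbol counts, one for Patch 1 and one for Patch n, into a transaction of "added"/"deleted" items of the kind fed to the rule miner. The symbol names follow the running example of Fig. 5; the helper itself is an illustrative assumption rather than the authors' tooling.

```python
from collections import Counter

def change_items(patch1_counts, patchn_counts):
    """Build a transaction of '<Symbol> added' / '<Symbol> deleted' items by
    comparing the symbol counts of Patch 1 and Patch n for one if-statement."""
    items = set()
    for symbol in set(patch1_counts) | set(patchn_counts):
        delta = patchn_counts[symbol] - patch1_counts[symbol]
        if delta > 0:
            items.add(f"{symbol} added")
        elif delta < 0:
            items.add(f"{symbol} deleted")
    return items

# The running example of Fig. 5: Patch 1 contains one "&&" and two "!";
# Patch n keeps one "!" and introduces "||" and a pair of parentheses.
patch1 = Counter({"And": 1, "Not": 2})
patchn = Counter({"Or": 1, "Not": 1, "LeftParen": 1, "RightParen": 1})
print(sorted(change_items(patch1, patchn)))
# ['And deleted', 'LeftParen added', 'Not deleted', 'Or added', 'RightParen added']
```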
The precondition and post-condition are called LHS (Left-Hand Side) and RHS (Right-Hand Side). We discover two kinds of rules, using both the changed symbols (e.g., step 2 in Fig. 5) and the fixed symbols (e.g., step 3 in Fig. 5):
1. Replaced symbol pairs (e.g., "And deleted" ⇒ "Or added")
2. Added symbol pairs (e.g., "And" ⇒ "Or added")
We measure three evaluation scores from the association rule mining: Support, Confidence, and Lift. Support of a rule is its relative frequency with respect to the total number of transactions in the history:
Support({LHS} ⇒ {RHS}) = |transactions containing LHS and RHS| / |all transactions|.
Confidence is the relative frequency of the rule with respect to the number of historical transactions containing the antecedent LHS:
Confidence({LHS} ⇒ {RHS}) = Support(LHS ∪ RHS) / Support(LHS).
Lift measures how much more often LHS and RHS occur together than would be expected if they were independent:
Lift({LHS} ⇒ {RHS}) = Confidence({LHS} ⇒ {RHS}) / Support(RHS).
For association rule mining, we again use the arules package. Since association rule mining outputs many rules, we extract rules with fewer than six items whose Support score is greater than 0.01, Confidence score is greater than 0.1, and Lift score is greater than 1.0. In our preliminary study, we found that more than 99% of "LeftParen" and "RightParen" pairs represent function calls. Similarly, "LeftBracket" and "RightBracket" pairs usually represent array access. Hence, we regard those pairs as "FunctionBrace" and "Bracket" in this analysis. Figures 6 and 7 show the rate of fixed symbols that appeared in more than 5% of all if-statement fixes. Table 5 and Table 6 show the 7 rules (Qt) and 31 rules (OpenStack) extracted for the fixing patterns by association rule mining.

Result

Observation 1: if-statement fixes frequently add or delete FunctionBrace through code review. Qt: In Fig. 6, 35% of if-statement fixes add (23%) or delete (12%) a FunctionBrace. In our manual reading, we found examples that remove redundant function calls (Listing 6†) or add necessary function calls (Listing 7††). OpenStack: In Fig. 7, FunctionBrace is added in 20% and deleted in 20% of if-statement fixes. As our preliminary study showed, these findings reflect the Python language used in OpenStack, where reserved words ("in" and "is") are used instead of functions ("contain" and "equal").

Observation 2: Patch authors are likely to replace "Not" with "FunctionBrace" through code review in the Qt project. Figure 6 shows that "Not" is deleted (13%, "Not deleted") more often than it is added (6%, "Not added") through code review. From Id 4 in Table 5, we find that "Not" is likely to be deleted in favor of "FunctionBrace", as in Listing 8†. In this example, using the function instead of "Not" made it easier for the developer to understand the object's type, such as an array. Also, from Id 6 in Table 5, "FunctionBrace" is less likely to be deleted in favor of "Not". Hence, the Qt project often uses "FunctionBrace" instead of "Not" as one of its project-specific rules.
† https://codereview.Qt-project.org/#/c/2422/1..8/src/declarative/items/qsgcanvas.cpp

Observation 3: Patch authors are likely to delete "String" through code review in the OpenStack project. When using Python, some patch authors use "String" and "Bracket" to check the state of an object's dictionary. However, Ids 5 and 6 in Table 6 show that "String" or "Bracket" should be replaced with "Dot" to call a function in OpenStack, as in Listing 9†† or Listing 10†††.
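The mining itself is performed with the R arules package in the paper. Purely as an illustration, the sketch below applies the same thresholds (Support > 0.01, Confidence > 0.1, Lift > 1.0, item sets of at most five items) with Python's mlxtend library, which is an assumption of convenience and not the authors' tooling; note also that mlxtend's apriori returns frequent rather than closed item sets, so the superset filtering of Sect. 4 is not reproduced here.

```python
# A minimal sketch with mlxtend (not the R arules package used in the paper).
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical transactions: one set of "<Symbol> added"/"<Symbol> deleted"
# items per if-statement fixed between Patch 1 and Patch n.
transactions = [
    {"And deleted", "Or added", "LeftParen added", "RightParen added", "Not deleted"},
    {"Not deleted", "LeftParen added", "RightParen added"},
    {"String deleted", "Dot added"},
    {"Not deleted", "LeftParen added", "RightParen added"},
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)

# Frequent item sets with at most five items and Support >= 0.01.
itemsets = apriori(onehot, min_support=0.01, use_colnames=True, max_len=5)

# Rules filtered by the thresholds used in the study.
rules = association_rules(itemsets, metric="confidence", min_threshold=0.1)
rules = rules[rules["lift"] > 1.0]
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```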
In Listing 9, the patch author improved readability by using "bytes.startswith()" and ".endswith()" instead of inspecting the "String" directly. In Listing 10, the patch author deleted "String" to reduce accesses to "values". Replacing "Bracket" with "Dot" not only improves maintainability but also avoids errors. Hence, our approach can extract rules not only for maintenance but also for improving performance.
†† https://review.openstack.org/#/c/60425/1..3

Summary

In summary, we found valuable code fixing patterns that can serve as additional coding rules to reduce reviewers' costs. We also compared our fixing rules with each project's coding conventions (Qt† and OpenStack††); our fixing rules are not included in them. Hence, the fixing rules we identified are valuable for patch authors to reduce redundant fixes, and we can point reviewers to the symbols that are likely to be fixed in the submitted patch (Patch 1).
• The Qt (C++) project is likely to replace "Not" with a function call through code review. "Not" is one of the most frequently used symbols in submitted if-statement changes; however, for readability, the Qt project uses a function call instead of "Not". Our study extracted such project-specific rules from review data, unlike previous studies based on integrated code changes [28]. Furthermore, our approach found fixing patterns that do not affect source code behavior.
• The OpenStack (Python) project replaces "Number" and "String" with reserved words through code review. Python has reserved words such as "in" and "not"; these reserved words are used instead of "0" and direct dictionary access to improve readability and to avoid index errors.
The concepts behind these project-specific rules are similar, aiming at readability and maintainability; however, the concrete rules differ between programming languages and projects. Our approach extracted project-specific concrete rules that are not supported by current coding check tools such as pylint†††.
† https://wiki.qt.io/Coding Conventions
†† https://docs.openstack.org/hacking/latest/user/hacking.html
††† https://www.pylint.org/

Internal Validity

A factor that potentially affects the internal validity of our study is that we extracted symbol changes in if-statements by syntactic analysis to obtain fixed code patterns through code review. The changes could be analyzed using both the patch files and the original source code to identify the locations of if-statements. However, instead of collecting the original source code, we focused on the patch files and collected if-statements whose change fits on a single line. This methodology has no significant impact on our results, since if-statement changes spanning multiple lines appear in merely 425 of our 69,325 target Qt patches (0.6%) and 553 of the OpenStack project's 60,197 patches (0.9%). We collected not only if-condition statements but also switch, for, and while condition statements; switch, for, and while conditions do not appear more frequently than if-statements in code review, consistent with integrated code changes [6]. Also, we compared Patch 1 and Patch n to detect source code changes in code review. Although the number of revisions (the size of n) may be related to the change contents, that analysis is out of the scope of this paper.

External Validity

The project-specific nature of our dataset has many limits. This research conducted an empirical study using only the Qt and OpenStack code review datasets.
When targeting other projects, some findings of our study may differ. For example, other projects may use "<" or ">=" instead of the ">" or "<=" in Table 3. Despite such variability, we contend that our approach provides a framework for understanding the individual rules or fixing trends of each project.

Code Review

Many researchers have conducted empirical studies to understand code review [3], [4], [11]-[15], [29], [30]. Unlike our focus, most published code review studies concentrate on the review process or the reviewers' communication. For example, Tsay et al. found that patch authors and reviewers often discuss and propose solutions to each other to fix patches [29]. In particular, Czerwonka et al. [30] found that 15% of the discussions about patch fixes concern functional issues, while Mäntylä et al. [3] and Beller et al. [4] found that 75% of discussions about patch fixes concern software maintenance and 15% concern functional issues. These studies help us understand which issues should be solved in the code review process; our work focuses on how those issues are fixed. Towards this goal, we focused on source code changes through code review, specifically those involving if-statement changes.

Coding Conventions

Coding convention issues also relate to our study because some code reviews involve refactoring based on coding conventions [5], [31], [32]. Smit et al. [32] found that CheckStyle is useful for detecting whether or not source code follows its coding rules. Also, convention tools such as Pylint, released by Thenault, check the format of coding conventions. In addition, Allamanis et al. [33] developed a tool to fix code conventions. However, to the best of our knowledge, little is known about how a patch author fixes conditional statement issues based on reviewers' feedback.

Conclusion

This research conducted an empirical study of how if-statements are fixed based on reviewers' feedback. The results of our case study on the Qt and OpenStack projects revealed project-specific fixing patterns; for example, approximately 35% of if-statement fixes add or delete parentheses through code review. The contribution of this study is the discovery of frequent patterns for fixing if-statements through code review, which may in turn help design an issue detection approach. We also created a coding convention checker that detects project-specific rules. If a patch author detects the possibility of changing these symbols before the code review request, the reviewers may be able to spend more time on other review requests, thus saving time and costs. In the future, we intend to propose a method to review and automatically fix symbols in if-statements, such as a change impact analysis based on code review data together with a history of integrated code changes.