Refined Notions of Parameterized Enumeration Kernels with Applications to Matching Cut Enumeration
An enumeration kernel as defined by Creignou et al. [Theory Comput. Syst. 2017] for a parameterized enumeration problem consists of an algorithm that transforms each instance into one whose size is bounded by the parameter, together with a solution-lifting algorithm that efficiently enumerates all solutions of the original instance from the solutions of the kernel. We propose two new, refined versions of enumeration kernels by demanding that the solutions of the original instance can be enumerated in polynomial time or with polynomial delay from the kernel solutions. Using the NP-hard Matching Cut problem parameterized by structural parameters such as the vertex cover number or the cyclomatic number of the input graph, we show that the new enumeration kernels present a useful notion of data reduction for enumeration problems which allows one to compactly represent the set of feasible solutions.
Introduction
The enumeration of all feasible solutions of a computational problem is a fundamental task in computer science. For the majority of enumeration problems, the number of feasible solutions can be exponential in the input size in the worst case. The running time of enumeration algorithms is thus measured not only in terms of the input size n but also in terms of the output size. The two most widely used definitions of efficient algorithms are polynomial output-sensitive algorithms, where the running time is polynomial in terms of input and output size, and polynomial-delay algorithms, where the algorithm spends only polynomial time between the output of consecutive solutions. Since in some enumeration problems even the problem of deciding the existence of one solution is not solvable in polynomial time, it was proposed to allow FPT algorithms that have running time or delay f(k) · n^O(1) for some problem-specific parameter k [9,11,12,14,34]. Naturally, FPT enumeration algorithms are based on extensions of standard techniques in FPT algorithms such as bounded-depth search trees [11,12,14] or color coding [34].
An important technique for obtaining FPT algorithms for decision problems is kernelization [10,16,30], where the idea is to shrink the input instance in polynomial time to an equivalent instance whose size depends only on the parameter k. In fact, a parameterized problem admits an FPT algorithm if and only if it admits a kernelization. It seems particularly intriguing to use kernelization for enumeration problems, as a small kernel can be seen as a compact representation of the set of feasible solutions. The first notion of kernelization in the context of enumeration problems was that of full kernels, defined by Damaschke [11]. Informally, a full kernel for an instance of an enumeration problem is a subinstance that contains all minimal solutions of size at most k. This definition is somewhat restrictive since it is tied to subset minimization problems parameterized by the solution size k. Nevertheless, full kernels have been obtained for some problems [12,17,28,38].
To overcome the restrictions of full kernels, Creignou et al. [9] proposed enumeration kernels. Informally, an enumeration kernel for a parameterized enumeration problem is an algorithm that replaces the input instance by one whose size is bounded by the parameter and which has the property that the solutions of the original instance can be computed by listing the solutions of the kernel and using an efficient solution-lifting algorithm that outputs for each solution of the kernel a set of solutions of the original instance. In the definition of Creignou et al. [9], the solution-lifting algorithm may be an FPT-delay algorithm, that is, an algorithm with f(k) · n^O(1) delay where n is the overall input size. We find this time bound too weak, because it essentially implies that every enumeration problem that can be solved with FPT delay admits an enumeration kernel of constant size. Essentially, this means that the solution-lifting algorithm is so powerful that it can enumerate all solutions while ignoring the kernel. Motivated by this observation and the view of kernels as compact representations of the solution set, we modify the original definition of enumeration kernels [9].
Our results. We present two new notions of efficient enumeration kernels by replacing the demand for FPT-delay algorithms by a demand for polynomial-time enumeration algorithms or polynomial-delay algorithms, respectively. We call the two resulting notions of enumeration kernelization fully-polynomial enumeration kernels and polynomial-delay enumeration kernels. Our paper aims at showing that these two new definitions present a sweet spot between the notion of full kernels, which is too strict for some applications, and enumeration kernels, which are too lenient in some sense. We first show that the two new definitions capture the class of efficiently enumerable problems in the sense that a problem has a fully-polynomial (a polynomial-delay) enumeration kernel if and only if it has an FPT-enumeration algorithm (an FPT-delay enumeration algorithm). Moreover, the kernels have constant size if and only if the problems have polynomial-time (polynomial-delay) enumeration algorithms. Thus, the new definitions correspond to the case of problem kernels for decision problems, which are in FPT if and only if they have kernels and which can be solved in polynomial time if and only if they have kernels of constant size (see, e.g. [10,Chapter 2] or [16,Chapter 1]).
We then apply both types of kernelization to the enumeration of matching cuts. A matching cut of a graph G is a set of edges M = E(A, B) forming a matching, for some partition {A, B} of V(G). We investigate the problems of enumerating all minimal, all maximal, or all matching cuts of a graph. We refer to these problems as Enum Minimal MC, Enum Maximal MC, and Enum MC, respectively. These matching cut problems constitute a very suitable study case for enumeration kernels, since it is NP-hard to decide whether a graph has a matching cut [7] and, therefore, they do not admit polynomial output-sensitive algorithms unless P = NP. We consider all three problems with respect to structural parameterizations such as the vertex cover number, the modular width, or the cyclomatic number of the input graph. The choice of these parameters is motivated by the fact that none of the three problems admits an enumeration kernel of polynomial size for the more general structural parameterizations by the treewidth or cliquewidth, up to some natural complexity assumptions (see Proposition 2). Table 1 summarizes the results.
To discuss some of our results and their implications for enumeration kernels in general more precisely, consider Enum MC, Enum Minimal MC, and Enum Maximal MC parameterized by the vertex cover number. We show that Enum Minimal MC admits a fully-polynomial enumeration kernel of polynomial size. As it can be seen that the problem has no full kernel, this implies, in particular, that there are natural enumeration problems with a fully-polynomial enumeration kernel that do not admit a full kernel (not even one of super-polynomial size). Then, we show that Enum MC and Enum Maximal MC admit polynomial-delay enumeration kernels but have no fully-polynomial enumeration kernels. Thus, there are natural enumeration problems with polynomial-delay enumeration kernels that do not admit fully-polynomial enumeration kernels (not even ones of super-polynomial size).

Table 1: An overview of our results. Herein, 'kernel' means fully-polynomial enumeration kernel, 'delay-kernel' means polynomial-delay enumeration kernel, and 'bi' means bijective enumeration kernel (a slight generalization of full kernels); a '( )' means that the lower bound assumes NP ⊈ coNP/poly, and '?' means open status. The cyclomatic number is also known as the feedback edge number.

We also prove a tight upper bound F(n + 1) − 1 for the maximum number of matching cuts of an n-vertex graph, where F(n) is the n-th Fibonacci number, and show that all matching cuts can be enumerated in O*(F(n)) = O*(1.6181^n) time (Theorem 4).
Related work. The current-best exact decision algorithm for Matching Cut, the problem of deciding whether a given graph G has a matching cut, has a running time of O(1.328^n), where n is the number of vertices in G [27]. Faster exact algorithms can be obtained for the case when the minimum degree is large [25]. Matching Cut has FPT algorithms for the maximum cut size k [23], the vertex cover number of G [29], and weaker parameters such as the twin-cover number [1] or the cluster vertex deletion number [27].
For an overview of enumeration algorithms, refer to the survey of Wasa [40]. A broader discussion of parameterized enumeration is given by Meier [35]. A different extension of enumeration kernels is given by advice enumeration kernels [2]. In these kernels, the solution-lifting algorithm does not need the whole input but only a possibly smaller advice. A further, loosely connected extension of standard kernelization is given by lossy kernels, which are used for optimization problems [32]; the common thread is that both definitions use a solution-lifting algorithm for recovering solutions of the original instance.
Graph notation. All graphs considered in this paper are finite undirected graphs without loops or multiple edges. We follow the standard graph-theoretic notation and terminology and refer to the book of Diestel [13] for basic definitions. For each of the graph problems considered in this paper, we let n = |V(G)| and m = |E(G)| denote the number of vertices and edges, respectively, of the input graph G if it does not create confusion. For a graph G and a subset X ⊆ V(G) of vertices, we write G[X] to denote the subgraph of G induced by X. For a set of vertices X, G − X denotes the graph obtained by deleting the vertices of X, that is, G − X = G[V(G) \ X]. Similarly, for a set of edges A (an edge e, respectively), G − A (G − e, respectively) denotes the graph obtained by the deletion of the edges of A (the edge e, respectively). For a vertex v, we denote by N_G(v) the (open) neighborhood of v, i.e., the set of vertices that are adjacent to v in G. We use N_G[v] to denote the closed neighborhood and N_G(X) = N_G[X] \ X. For disjoint sets of vertices A and B of a graph G, E_G(A, B) = {uv ∈ E(G) | u ∈ A, v ∈ B}. We may omit subscripts in the above notation if it does not create confusion. We use P_n, C_n, and K_n to denote the n-vertex path, cycle, and complete graph, respectively. We write G + H to denote the disjoint union of G and H, and we use kG to denote the disjoint union of k copies of G.
In a graph G, a cut is a partition {A, B} of V(G), and we say that E_G(A, B) is an edge cut. A matching is an edge set in which no two of the edges have a common end-vertex; note that we allow empty matchings. A matching cut is a (possibly empty) edge set that is both an edge cut and a matching. We underline that by our definition, a matching cut is a set of edges, as sometimes in the literature (see, e.g., [7,24]) a matching cut is defined as a partition {A, B} of the vertex set such that E(A, B) is a matching. While the two variants of the definition are equivalent, say, when the decision variant of the matching cut problem is considered, this is not the case in enumeration and counting when we deal with disconnected graphs. For example, the empty (edgeless) graph on n vertices has 2^{n−1} − 1 partitions {A, B}, which all correspond to exactly one matching cut in the sense of our definition, namely M = ∅. A matching cut M of G is (inclusion) minimal (maximal, respectively) if G has no matching cut M' ⊂ M (M' ⊃ M, respectively). Notice that a disconnected graph has exactly one minimal matching cut, which is the empty set.
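The definitional subtlety for disconnected graphs is easy to verify directly. The following brute-force sketch (the function name and vertex labeling are ours, not from the paper) enumerates matching cuts over all partitions and confirms that the edgeless graph on 4 vertices has 2^3 − 1 = 7 partitions but only one matching cut, the empty set:

```python
def matching_cuts(n, edges):
    """Brute-force the matching cuts of a graph on vertices 0..n-1.

    Each partition {A, B} is enumerated once by fixing vertex n-1 on
    the B side; the cut is kept if it forms a matching.  Distinct
    partitions yielding the same edge set are counted once, following
    the edge-set definition of a matching cut."""
    cuts = set()
    for mask in range(1, 2 ** (n - 1)):
        A = {v for v in range(n) if mask >> v & 1}
        cut = frozenset(e for e in edges if (e[0] in A) != (e[1] in A))
        ends = [v for e in cut for v in e]
        if len(ends) == len(set(ends)):  # no two cut edges share an endpoint
            cuts.add(cut)
    return cuts

# The edgeless graph on 4 vertices: 7 partitions but a single matching
# cut M = ∅; the path P3 has 2 matching cuts.
print(len(matching_cuts(4, [])))                # -> 1
print(len(matching_cuts(3, [(0, 1), (1, 2)])))  # -> 2
```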
Organization of the paper. In Section 2, we introduce and discuss basic notions of enumeration kernelization. In Section 3, we show upper and lower bounds for the maximum number of (minimal, maximal) matching cuts. In Section 4, we give enumeration kernels for the matching cut problems parameterized by the vertex cover number. Further, in Section 5, we consider the parameterization by the neighborhood diversity and the modular width. In Section 6, we investigate the parameterization by the cyclomatic number (feedback edge number). In Section 7, we give bijective kernels for the parameterization by the clique partition number. We conclude in Section 8 by outlining some further directions of research in enumeration kernelization.
Parameterized Enumeration and Enumeration Kernels
We use the framework for parameterized enumeration proposed by Creignou et al. [9]. An enumeration problem (over a finite alphabet Σ) is a tuple Π = (L, Sol) such that (i) L ⊆ Σ* is a decidable language, and (ii) Sol : Σ* → P(Σ*) is a computable function such that for every x ∈ Σ*, Sol(x) is a finite set and Sol(x) ≠ ∅ if and only if x ∈ L.
Here, P(A) is used to denote the power set of a set A. A string x ∈ Σ* is an instance, and Sol(x) is the set of solutions to instance x. A parameterized enumeration problem is defined as a triple Π = (L, Sol, κ) such that (L, Sol) satisfies (i) and (ii) of the above definition, and (iii) κ : Σ* → N is a parameterization.
We say that k = κ(x) is a parameter. We define the parameterization as a function of an instance but it is standard to assume that the value of κ(x) is either simply given in x or can be computed in polynomial time from x. We follow this convention throughout the paper. An enumeration algorithm A for a parameterized enumeration problem Π is a deterministic algorithm that for every instance x, outputs exactly the elements of Sol(x) without duplicates, and terminates after a finite number of steps on every instance. The algorithm A is an FPT enumeration algorithm if it outputs all solutions in at most f (κ(x))p(|x|) steps for a computable function f (·) that depends only on the parameter and a polynomial p(·).
We also consider output-sensitive enumeration, and for this, we define delays. Let A be an enumeration algorithm for Π. For x ∈ L and 1 ≤ i ≤ |Sol(x)|, the i-th delay of A is the time between outputting the i-th and (i + 1)-th solutions in Sol(x). The 0-th delay is the precalculation time, which is the time from the start of the computation until the output of the first solution, and the |Sol(x)|-th delay is the postcalculation time, which is the time between the last output and the termination of A (if Sol(x) = ∅, then the precalculation and postcalculation times are the same). We say that A is a polynomial-delay algorithm if all the delays are upper bounded by p(|x|) for a polynomial p(·). For a parameterized enumeration problem Π, A is an FPT-delay algorithm if the delays are at most f(κ(x))p(|x|), where f(·) is a computable function and p(·) is a polynomial. Notice that every FPT enumeration algorithm is also an FPT-delay algorithm.
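To make the delay-based notions concrete, here is a sketch (ours, not from the paper) of a classic polynomial-delay enumerator: it lists all matchings of a graph by include/exclude branching on the edges. Every node of the branching tree leads to at least one output, so the time between consecutive outputs is polynomial in the number of edges:

```python
def all_matchings(edges):
    """Enumerate every matching (including the empty one) of a graph,
    given as a list of edges, without duplicates and with polynomial
    delay: branch on each edge in turn, first excluding it and then,
    if its endpoints are still free, including it."""
    def rec(i, used, matching):
        if i == len(edges):
            yield list(matching)
            return
        u, v = edges[i]
        yield from rec(i + 1, used, matching)        # exclude edge i
        if u not in used and v not in used:          # include edge i
            matching.append((u, v))
            yield from rec(i + 1, used | {u, v}, matching)
            matching.pop()

    yield from rec(0, frozenset(), [])

# The triangle has 4 matchings: the empty one and each single edge.
print(sum(1 for _ in all_matchings([(0, 1), (1, 2), (0, 2)])))  # -> 4
```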
The key definition for us is the generalization of the standard notion of a kernel in Parameterized Complexity (see, e.g., [16]) for enumeration problems.

Definition 1. Let Π = (L, Sol, κ) be a parameterized enumeration problem. A fully-polynomial enumeration kernel(ization) for Π is a pair of algorithms A and A′ with the following properties: (i) For every instance x of Π, A computes in time polynomial in |x| + κ(x) an instance y of Π such that |y| + κ(y) ≤ f(κ(x)) for a computable function f(·).
(ii) For every s ∈ Sol(y), A′ computes in time polynomial in |x| + |y| + κ(x) + κ(y) a nonempty set of solutions S_s ⊆ Sol(x) such that {S_s | s ∈ Sol(y)} is a partition of Sol(x).
Notice that by (ii), x ∈ L if and only if y ∈ L. We say that A is a kernelization algorithm and A′ is a solution-lifting algorithm. Informally, a solution-lifting algorithm takes as its input a solution for a "small" instance constructed by the kernelization algorithm and, having access to the original input instance, outputs polynomially many solutions for the original instance; by going over all the solutions to the small instance, we can generate all the solutions of the original instance without repetitions. We say that an enumeration kernel is bijective if A′ produces exactly one solution of x for every solution of y, that is, it establishes a bijection between Sol(y) and Sol(x); then the compressed instance essentially has the same solutions as the input instance. In particular, full kernels [11] are the special case of bijective kernels where A is the identity. As is standard, f(·) is the size of the kernel, and the kernel has polynomial size if f(·) is a polynomial.
We define polynomial-delay enumeration kernel(ization) in a similar way. The only difference is that (ii) is replaced by the condition (ii*): For every s ∈ Sol(y), A′ computes with delay polynomial in |x| + |y| + κ(x) + κ(y) a set of solutions S_s ⊆ Sol(x) such that {S_s | s ∈ Sol(y)} is a partition of Sol(x).
It is straightforward to make the following observation.
Observation 1. Every bijective enumeration kernel is a fully-polynomial enumeration kernel; every fully-polynomial enumeration kernel is a polynomial-delay enumeration kernel.
Notice also that our definition of a polynomial-delay enumeration kernel is different from the definition given by Creignou et al. [9]. In their definition, Creignou et al. [9] require that the solution-lifting algorithm A′ should list all the solutions in S_s with FPT delay for the parameter κ(x). We believe that this condition is too weak. In particular, with this requirement, every parameterized enumeration problem that has an FPT enumeration algorithm A* and for which the existence of at least one solution can be verified in polynomial time has a trivial kernel of constant size: the kernelization algorithm can output any instance satisfying (i), and then we can use A* as a solution-lifting algorithm that essentially ignores the output of the kernelization algorithm. Note that for enumeration problems, we typically face the situation where the existence of at least one solution is not an issue. We argue that our definitions are natural by showing the following theorem.
Theorem 2. A parameterized enumeration problem Π has an FPT enumeration algorithm (an FPT-delay algorithm) if and only if Π admits a fully-polynomial enumeration kernel (a polynomial-delay enumeration kernel). Moreover, Π can be solved in polynomial time (with polynomial delay) if and only if Π admits a fully-polynomial enumeration kernel (a polynomial-delay enumeration kernel) of constant size.
Proof. The proof of the first claim is similar to the standard arguments showing the equivalence between fixed-parameter tractability and the existence of a kernel (see, e.g., [10, Chapter 2] or [16, Chapter 1]). However, dealing with enumeration problems requires some specific arguments. Let Π = (L, Sol, κ) be a parameterized enumeration problem.
In the forward direction, the claim is trivial. Recall that L is decidable and Sol(·) is a computable function by the definition. If Π admits a fully-polynomial enumeration kernel (a polynomial-delay enumeration kernel respectively), then we apply an arbitrary enumeration algorithm, which is known to exist since Sol(·) is computable, to the instance y produced by the kernelization algorithm. Then, for each s ∈ Sol(y), use the solution-lifting algorithm to list the solutions to the input instance.
For the opposite direction, assume that Π can be solved in f (κ(x)) · |x| c time (with f (κ(x)) · |x| c delay, respectively) for an instance x, where f (·) is a computable function and c is a positive constant. Since f (·) is computable, we assume that we have an algorithm F computing f (k) in g(k) time. We define h(k) = max{f (k), g(k)}.
We say that an instance x of Π is a trivial no-instance if x is an instance of minimum size with Sol(x) = ∅. We call x a minimum yes-instance if x is an instance of minimum size that has a solution. Notice that if Π has instances without solutions, then the size of a trivial no-instance is a constant that depends on Π only, and such an instance can be computed in constant time. Similarly, if the problem has instances with solutions, then the size of a minimum yes-instance is constant and such an instance can be computed in constant time. We say that x is a trivial yes-instance if x is an instance with the minimum number of solutions |Sol(x)| ≥ 1 that, subject to the first condition, has minimum size. Clearly, the size of a trivial yes-instance is a constant that depends only on Π. However, we may be unable to compute a trivial yes-instance.
Let x be an instance of Π and k = κ(x). We run the algorithm F to compute f (k) for at most n = |x| steps. If the algorithm failed to compute f (k) in n steps, we conclude that g(k) ≥ n. In this case, the kernelization algorithm outputs x. Then the solution-lifting algorithm just trivially outputs its input solutions. Notice that |x| ≤ g(k) ≤ h(k) in this case. Assume from now that F computed f (k) in at most n steps.
If |x| ≤ f (k), then the kernelization algorithm outputs the original instance x, and the solution-lifting algorithm trivially outputs its input solutions. Note that |x| ≤ f (k) ≤ h(k).
Finally, we suppose that f (k) < |x|. Observe that the enumeration algorithm runs in |x| c+1 time (with |x| c+1 delay, respectively) in this case, that is, the running time is polynomial. We use the enumeration algorithm to verify whether x has a solution. For this, notice that a polynomial-delay algorithm can be used to solve the decision problem; we just run it until it outputs a first solution (or reports that there are no solutions). If x has no solution, then Π has a trivial no-instance and the kernelization algorithm computes and outputs it. If x has a solution, then the kernelization algorithm computes a minimum yes-instance y in constant time. We use the enumeration algorithm to check whether |Sol(y)| ≤ |Sol(x)|. If this holds, then we set z = y. Otherwise, if |Sol(x)| < |Sol(y)|, we find an instance z of minimum size such that |Sol(z)| ≤ |Sol(x)|. Notice that this can be done in constant time, because the size of z is upper bounded by the size of a trivial yes-instance. Then we list the solutions of z in constant time and order them. For the i-th solution of z, the solution-lifting algorithm outputs the i-th solution of x produced by the enumeration algorithm, and for the last solution of z, the solution-lifting algorithm further runs the enumeration algorithm to output the remaining solutions. Since |Sol(z)| ≤ |Sol(x)|, the solution-lifting algorithm outputs a nonempty set of solutions for x for every solution of z.
It is easy to see that we obtain a fully-polynomial enumeration kernel of size O(h(κ(x))) (a polynomial-delay enumeration kernel, respectively).
For the second claim, the arguments are the same. If a problem admits a fully-polynomial (a polynomial-delay) enumeration kernel of constant size, then the solutions of the original instance can be listed in polynomial time (or with polynomial delay, respectively) by the solution-lifting algorithm called for the constant number of the solutions of the kernel. Conversely, if a problem can be solved in polynomial time (with polynomial delay, respectively), we can apply the above arguments assuming that f (k) (and, therefore, g(k)) is a constant.
In our paper, we consider structural parameterizations of Enum Minimal MC, Enum Maximal MC, and Enum MC by several graph parameters, and the majority of these parameterizations are stronger than the parameterization either by the treewidth or the cliquewidth of the input graph. Defining the treewidth (denoted by tw(G)) and cliquewidth (denoted by cw(G)) goes beyond the scope of the current paper and we refer to [8] (see also, e.g., [10]). By the celebrated result of Bodlaender [3] (see also [10]), it is FPT in t to decide whether tw(G) ≤ t and to construct the corresponding tree decomposition. No such algorithm is known for cliquewidth. However, for algorithmic purposes, it is usually sufficient to use the approximation algorithm of Oum and Seymour [37] (see also [36,10]). Observe that the property that a set of edges M of a graph G is a matching cut of G can be expressed in monadic second-order logic (MSOL); we refer to [8,10] for the definition of MSOL on graphs. Then the matching cuts (the minimal or maximal matching cuts) of a graph of treewidth at most t can be enumerated with FPT delay with respect to the parameter t by the celebrated meta-theorem of Courcelle [8]. The same holds for the weaker parameterization by the cliquewidth of the input graph, because we can use MSOL formulas without quantification over (sets of) edges: For a graph G, we pick a vertex in each connected component of G and label it. Let R be the set of labeled vertices. Then the enumeration of nonempty matching cuts is equivalent to the enumeration of all partitions {A, B} of V(G) such that (i) R ⊆ A and (ii) E(A, B) is a matching. Notice that condition (ii) can be written as follows: for every u1, u2 ∈ A and v1, v2 ∈ B, if u1 is adjacent to v1 and u2 is adjacent to v2, then either (u1 = u2 and v1 = v2) or (u1 ≠ u2 and v1 ≠ v2).
Since the empty matching cut can be listed separately if it exists, we obtain that we can use MSOL formulations of the enumeration problems where only quantifications over vertices and sets of vertices are used. Then the result of Courcelle [8] implies that Enum Minimal MC, Enum Maximal MC, and Enum MC can be solved with FPT delay when parameterized by the cliquewidth of the input graph. We summarize these observations in the following proposition.

Proposition 1. Enum MC, Enum Minimal MC, and Enum Maximal MC on graphs of treewidth (cliquewidth) at most t can be solved with FPT delay when parameterized by t.
This proposition implies that Enum MC, Enum Minimal MC, and Enum Maximal MC can be solved with FPT delay for all structural parameters whose values can be bounded from below by an increasing function of the treewidth or cliquewidth. However, we are mainly interested in fully-polynomial or polynomial-delay enumeration kernelization. We conclude this section by pointing out that it is unlikely that Enum Minimal MC, Enum Maximal MC, and Enum MC admit polynomial-delay enumeration kernels of polynomial size for the treewidth or cliquewidth parameterizations. It was pointed out by Komusiewicz, Kratsch, and Le [27] that the decision version of the matching cut problem (that is, the problem asking whether a given graph G has a matching cut) does not admit a polynomial kernel when parameterized by the treewidth of the input graph unless NP ⊆ coNP/poly. By the definition of a polynomial-delay enumeration kernel, this gives the following statement.
Proposition 2. Enum Minimal MC, Enum Maximal MC, and Enum MC do not admit polynomial-delay enumeration kernels of polynomial size when parameterized by the treewidth (cliquewidth, respectively) of the input graph unless NP ⊆ coNP/poly.
A Tight Upper Bound for the Maximum Number of Matching Cuts
In this section, we provide a tight upper bound for the maximum number of matching cuts of an n-vertex graph. We complement this result by giving an exact enumeration algorithm for (minimal, maximal) matching cuts. Finally, we give some lower bounds for the maximum number of minimal and maximal matching cuts, respectively. Throughout this section, we use # mc(G) to denote the number of matching cuts of a graph G.
To give the upper bound, we use the classical Fibonacci numbers. For a positive integer n, we denote by F(n) the n-th Fibonacci number. Recall that F(1) = F(2) = 1, and for n ≥ 3, the Fibonacci numbers satisfy the recurrence F(n) = F(n − 1) + F(n − 2). Recall also that the n-th Fibonacci number can be expressed by the closed formula (Binet's formula) F(n) = (φ^n − ψ^n)/√5, where φ = (1 + √5)/2 and ψ = (1 − √5)/2. The following lemma about the Fibonacci numbers is going to be useful for us.
Lemma 1. For all integers p, q ≥ 2, F(p)F(q) ≤ F(p + q − 1) − 1. Moreover, if p ≥ 4 or q ≥ 4, then the inequality is strict.
Proof. The proof is inductive. It is straightforward to verify the inequality for p, q ≤ 3; notice that F(p)F(q) = F(p + q − 1) − 1 in these cases. By symmetry, assume now that p ≥ 4. Then, by induction, F(p)F(q) = (F(p − 1) + F(p − 2))F(q) = F(p − 1)F(q) + F(p − 2)F(q) ≤ (F(p + q − 2) − 1) + (F(p + q − 3) − 1) = F(p + q − 1) − 2 < F(p + q − 1) − 1, as required.
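The inequality F(p)F(q) ≤ F(p + q − 1) − 1, with equality exactly when p, q ≤ 3 (our reading of Lemma 1, whose statement is partly garbled above), is easy to sanity-check numerically; `fib` below is our helper using the indexing F(1) = F(2) = 1:

```python
def fib(n):
    """The n-th Fibonacci number with F(1) = F(2) = 1."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

# Check F(p)*F(q) <= F(p+q-1) - 1 for small p, q, with equality
# exactly when p, q <= 3 and strictness once p >= 4 or q >= 4.
for p in range(2, 12):
    for q in range(2, 12):
        lhs, rhs = fib(p) * fib(q), fib(p + q - 1) - 1
        assert lhs <= rhs
        assert (lhs == rhs) == (p <= 3 and q <= 3)
print("inequality verified for 2 <= p, q <= 11")
```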
To see the relation between the number of matching cuts and the Fibonacci numbers, we make the following observation.
Observation 3. For every positive integer n, the n-vertex path has F(n + 1) − 1 matching cuts.
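Observation 3 can be checked by brute force for small paths; the helpers below are ours (the partition enumeration fixes the side of one vertex so that each partition {A, B} is considered once):

```python
def count_matching_cuts(n, edges):
    """Count the matching cuts of a graph on vertices 0..n-1 by
    brute force over all partitions {A, B} of the vertex set."""
    cuts = set()
    for mask in range(1, 2 ** (n - 1)):      # vertex n-1 stays on the B side
        A = {v for v in range(n) if mask >> v & 1}
        cut = frozenset(e for e in edges if (e[0] in A) != (e[1] in A))
        ends = [v for e in cut for v in e]
        if len(ends) == len(set(ends)):      # the cut is a matching
            cuts.add(cut)
    return len(cuts)

def fib(n):
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

for n in range(2, 12):
    path = [(i, i + 1) for i in range(n - 1)]
    assert count_matching_cuts(n, path) == fib(n + 1) - 1
print("Observation 3 verified for paths on up to 11 vertices")
```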
We show that, in fact, F(n + 1) − 1 is an upper bound for the number of matching cuts of an n-vertex graph. First, we show this for trees.

Lemma 2. Let H be a tree on n vertices. Then # mc(H) ≤ F(n + 1) − 1; moreover, if H is not a path, then the inequality is strict.

Proof. The proof is inductive. Let u be a leaf of H and let v be its unique neighbor. Every matching cut of H that does not contain uv is a matching cut of H − u, and for every matching cut M of H that contains uv, the set M \ {uv} is either empty or a matching cut of H − {u, v}. By induction,

# mc(H) ≤ # mc(H − u) + # mc(H − {u, v}) + 1 ≤ (F(n) − 1) + (F(n − 1) − 1) + 1 = F(n + 1) − 1. (1)

To see the second claim, note that if H is not a path, then either H − u or H − {u, v} is not a path. Then, by induction, the inequality in (1) is strict and, therefore, # mc(H) < F(n + 1) − 1. This concludes the proof.
It is well-known that the treewidth of a tree is one (see, e.g., [10]). This observation together with Proposition 1 and Lemma 2 immediately implies the following lemma.

Lemma 3. The matching cuts of an n-vertex tree can be enumerated in O*(F(n)) time.

It is also easy to construct a direct enumeration algorithm for trees. For example, one can consider the recursive branching algorithm that, for an edge, first enumerates the matching cuts containing this edge and then the matching cuts excluding the edge. Note that the running time in Lemma 3 can be written as O*(1.6181^n) to make the exponential dependence on n more clear.
Now we consider general graphs and show the following.
Theorem 4. An n-vertex graph has at most F(n + 1) − 1 matching cuts. The bound is tight and is achieved for paths. Moreover, if n ≥ 5, then an n-vertex graph G has F(n + 1) − 1 matching cuts if and only if G is a path. Furthermore, the matching cuts can be enumerated in O*(F(n)) time.
Proof. First, we consider connected graphs. Let G be a connected graph and let T be an arbitrary spanning tree of G. Observe that if M = E_G(A, B) is a matching cut of G, then E_T(A, B) is a nonempty matching cut of T. Moreover, since G is connected, M determines the partition {A, B}, and the partition determines E_T(A, B); hence, distinct matching cuts of G yield distinct matching cuts of T. Therefore, # mc(G) ≤ # mc(T) ≤ F(n + 1) − 1 by Lemma 2.
Now we claim that G has F(n + 1) − 1 matching cuts if and only if G is a path. Note that the spanning tree T is arbitrary. If G has a vertex of degree at least three, then T can be chosen in such a way that T is not a path. Then, by Lemma 2, # mc(G) ≤ # mc(T) < F(n + 1) − 1. Assume that the maximum degree of G is at most two. Then G is either a path or a cycle. In the first case, # mc(G) = F(n + 1) − 1 by Observation 3. Suppose that G is a cycle, and consider a path P = v1 · · · vn spanning G. Note that every matching cut of G has at least two edges, because every edge cut of a cycle has even size. This implies that there are matching cuts of P that do not correspond to any matching cut of G; in particular, the single-edge matching cuts of P do not. Hence, # mc(G) < F(n + 1) − 1. To enumerate the matching cuts of G, we consider a spanning tree T and enumerate the matching cuts of T using Lemma 3. Then, for every matching cut M′ of T with the associated partition {A, B} of V(G), we verify whether M = E_G(A, B) is a matching cut of G and output M if this holds. This means that the matching cuts of a connected graph G can be enumerated in O*(F(n)) time. This completes the proof for connected graphs.
Assume that G is a disconnected graph with connected components G1, . . . , Gk, k ≥ 2, having n1, . . . , nk vertices, respectively. Observe that M ⊆ E(G) is a matching cut of G if and only if M = M1 ∪ · · · ∪ Mk, where for every i ∈ {1, . . . , k}, either Mi is a matching cut of Gi or Mi = ∅. Therefore, using the proved claim for connected graphs, we have that

# mc(G) = ∏_{i=1}^{k} (# mc(Gi) + 1) ≤ ∏_{i=1}^{k} F(ni + 1). (2)

Applying Lemma 1 iteratively, we obtain that

∏_{i=1}^{k} F(ni + 1) ≤ F(n + 1) − 1. (3)

Combining (2) and (3), we have that

# mc(G) ≤ F(n + 1) − 1. (4)

By the proved claim for connected graphs, the inequality (4) is strict if one of the connected components is not a path. By inequality (3), (4) is also strict if k ≥ 3. If k = 2 and n ≥ 5, then either n1 ≥ 3 or n2 ≥ 3. By Lemma 1, (3) is strict. Hence, if n ≥ 5, then # mc(G) < F(n + 1) − 1. This implies that if n ≥ 5, then # mc(G) = F(n + 1) − 1 if and only if G is a path.
Finally, observe that the matching cuts of G can be enumerated by listing the matching cuts of each connected component and combining them (assuming that these lists contain the empty set) to obtain the matching cuts of G. Equivalently, we can take the spanning forest H of G obtained by taking the union of spanning trees of G1, . . . , Gk, respectively. Then we can list the matching cuts of H and output the matching cuts of G corresponding to them. In both cases, the running time is O*(F(n)).
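The spanning-tree scheme from the proof of Theorem 4 can be sketched as follows for connected graphs. This is a brute-force illustration rather than the O*(F(n)) implementation; it relies on the easily checked fact that the matching cuts of a tree are exactly its nonempty matchings, and all names are ours:

```python
from itertools import combinations

def matching_cuts_via_spanning_tree(n, edges):
    """Enumerate the matching cuts of a connected graph on vertices
    0..n-1: run over the nonempty matchings of a spanning tree T
    (these are exactly the matching cuts of T), recover the associated
    vertex partition, and keep it whenever the full edge cut of G is a
    matching."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # build a spanning tree by breadth-first search from vertex 0
    parent, order = {0: None}, [0]
    for u in order:
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                order.append(w)
    tree = [(parent[v], v) for v in order[1:]]

    cuts = []
    for r in range(1, len(tree) + 1):
        for M in combinations(tree, r):
            ends = [v for e in M for v in e]
            if len(ends) != len(set(ends)):
                continue                     # M is not a matching of T
            chosen = set(M)
            # 2-color the tree so that exactly the edges of M are bichromatic
            color = {0: 0}
            for v in order[1:]:
                color[v] = color[parent[v]] ^ ((parent[v], v) in chosen)
            cut = [e for e in edges if color[e[0]] != color[e[1]]]
            ends = [v for e in cut for v in e]
            if len(ends) == len(set(ends)):  # the cut in G is a matching
                cuts.append(frozenset(cut))
    return cuts

# C_5: the matching cuts are exactly the 5 pairs of non-adjacent edges.
cycle5 = [(i, (i + 1) % 5) for i in range(5)]
print(len(matching_cuts_via_spanning_tree(5, cycle5)))  # -> 5
```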
Let us remark that if n ≤ 4, then besides paths P n , the graphs K p + K q for 1 ≤ p, q ≤ 2 such that n = p + q have F (n + 1) − 1 matching cuts.
Clearly, the upper bound for the maximum number of matching cuts given in Theorem 4 is an upper bound for the maximum number of minimal and maximal matching cuts. However, the number of minimal or maximal matching cuts may be significantly less than the number of all matching cuts. We conclude this section by stating the best lower bounds we know for the maximum number of maximal matching cuts and minimal matching cuts, respectively.
Our lower bound for the maximal matching cuts is achieved for disjoint unions of cycles on 7 vertices.

Proposition 3. The graph G = kC7 with n = 7k vertices has 14^k = 14^{n/7} ≥ 1.4579^n maximal matching cuts.

Proof. Suppose that G has connected components G1, . . . , Gk such that G_i has a matching cut for every i ∈ {1, . . . , k}. Then the maximal matching cuts of G are exactly the unions of maximal matching cuts of the components.
Observe that C7 has 14 maximal matching cuts, each formed by a matching with two edges. Therefore, G = kC7 has 14^k maximal matching cuts. Since G has n = 7k vertices, 14^k = 14^{n/7} ≥ 1.4579^n.
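This count for C7 is easy to verify by brute force (a sketch; helper names are ours):

```python
def matching_cuts(n, edges):
    """All matching cuts: cuts E(A, B) over bipartitions {A, B} that are matchings."""
    cuts = set()
    for mask in range(1, 2 ** (n - 1)):        # vertex n-1 always stays in A
        side = [(mask >> v) & 1 for v in range(n - 1)] + [0]
        cut = [e for e in edges if side[e[0]] != side[e[1]]]
        ends = [v for e in cut for v in e]
        if len(ends) == len(set(ends)):
            cuts.add(frozenset(cut))
    return cuts

c7 = [(i, (i + 1) % 7) for i in range(7)]
cuts = matching_cuts(7, c7)
maximal = [c for c in cuts if not any(c < d for d in cuts)]
assert len(cuts) == 14 and len(maximal) == 14
```

All 14 matching cuts of C7 are two-edge matchings, so all of them are maximal.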
To achieve a lower bound for the maximum number of minimal matching cuts, we consider the graphs H k constructed as follows for a positive integer k.
• For every i ∈ {1, . . . , k}, construct two vertices u i and v i and a (u i , v i )-path of length 4.
• Make the vertices u1, . . . , uk pairwise adjacent, and do the same for v1, . . . , vk.

Proposition 4. The number of minimal matching cuts of Hk with n = 5k vertices is at least 4^k = 4^{n/5} ≥ 1.3195^n.

Proof. Consider a matching cut M obtained by taking one edge of each (u_i, v_i)-path for i ∈ {1, . . . , k}. Clearly, M is a minimal matching cut of Hk. Observe that Hk has 4^k minimal matching cuts of this form. Since Hk has n = 5k vertices, 4^k = 4^{n/5} ≥ 1.3195^n.
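For k = 2 the bound can be verified by brute force; the sketch below uses our own encoding of H2 (u1 = 0, u2 = 1, v1 = 2, v2 = 3, plus two internal paths) and finds at least 4² = 16 minimal matching cuts:

```python
def matching_cuts(n, edges):
    """All matching cuts: cuts E(A, B) over bipartitions {A, B} that are matchings."""
    cuts = set()
    for mask in range(1, 2 ** (n - 1)):        # vertex n-1 always stays in A
        side = [(mask >> v) & 1 for v in range(n - 1)] + [0]
        cut = [e for e in edges if side[e[0]] != side[e[1]]]
        ends = [v for e in cut for v in e]
        if len(ends) == len(set(ends)):
            cuts.add(frozenset(cut))
    return cuts

# H_2: (u1,v1)-path 0-4-5-6-2, (u2,v2)-path 1-7-8-9-3, cliques {0,1} and {2,3}
edges = [(0, 4), (4, 5), (5, 6), (6, 2), (1, 7), (7, 8), (8, 9), (9, 3),
         (0, 1), (2, 3)]
cuts = matching_cuts(10, edges)
minimal = [c for c in cuts if c and not any(d and d < c for d in cuts)]
assert len(minimal) >= 16
```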
Enumeration Kernels for the Vertex Cover Number Parameterization
In this section, we consider the parameterization of the matching cut problems by the vertex cover number of the input graph. Notice that this parameterization is one of the most thoroughly investigated with respect to classical kernelization (see, e.g., the recent paper of Bougeret, Jansen, and Sau [6] for the currently most general results of this type). However, we are interested in enumeration kernels.
Recall that a set of vertices X ⊆ V (G) is a vertex cover of G if for every edge uv ∈ E(G), at least one of its end-vertices is in X, that is, V (G) \ X is an independent set. The vertex cover number of G, denoted by τ (G), is the minimum size of a vertex cover of G. Computing τ (G) is NP-hard but one can find a 2-approximation by taking the end-vertices of a maximal matching of G [22] (see also [26] for a better approximation) and this suffices for our purposes. Throughout this section, we assume that the parameter k = τ (G) is given together with the input graph. Note that for every graph G, tw(G) ≤ τ (G). Therefore, Enum MC, Enum Minimal MC, and Enum Maximal MC can be solved with FPT delay when parameterized by the vertex cover number by Proposition 1.
First, we describe the basic kernelization algorithm that is exploited for all the kernels in this subsection. Let G be a graph that has a vertex cover of size k. The case when G has no edges is trivial and will be considered separately. Assume from now that G has at least one edge and k ≥ 1.
We use the above-mentioned 2-approximation algorithm to find a vertex cover X of size at most 2k. Let I = V(G) \ X. Recall that I is an independent set. Denote by I0, I1, and I≥2 the subsets of vertices of I of degree 0, 1, and at least 2, respectively. We use the following marking procedure to label some vertices of I.

(i) If I0 ≠ ∅, then mark an arbitrary vertex of I0.

(ii) For every x ∈ X, mark an arbitrary vertex of N_G(x) ∩ I1 (if it exists).
(iii) For every two distinct vertices x, y ∈ X, select an arbitrary set of min{3, |(N G (x)∩N G (y))∩ I ≥2 |} vertices in I ≥2 that are adjacent to both x and y, and mark them for the pair {x, y}.
Note that a vertex of I≥2 can be marked for distinct pairs of vertices of X. Denote by Z the set of marked vertices of I. Clearly, |Z| ≤ 1 + 2k + 3·(2k choose 2) = 6k² − k + 1. We define H = G[X ∪ Z] and observe that |V(H)| ≤ |X| + |Z| ≤ 6k² + k + 1. This completes the description of the basic kernelization algorithm that returns H. It is straightforward to see that H can be constructed in polynomial time.
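A minimal sketch of this marking procedure, assuming the graph is given as an adjacency dictionary and X is the approximate vertex cover (all names are ours):

```python
from itertools import combinations

def basic_kernel(adj, X):
    """Return the vertex set X ∪ Z of the kernel H = G[X ∪ Z]."""
    I = set(adj) - set(X)
    I0 = {v for v in I if len(adj[v]) == 0}
    I1 = {v for v in I if len(adj[v]) == 1}
    I2 = {v for v in I if len(adj[v]) >= 2}
    Z = set()
    if I0:                                      # rule (i): one isolated vertex
        Z.add(min(I0))
    for x in X:                                 # rule (ii): one pendant neighbour per x
        pend = sorted(adj[x] & I1)
        if pend:
            Z.add(pend[0])
    for x, y in combinations(sorted(X), 2):     # rule (iii): up to 3 common neighbours
        common = sorted(adj[x] & adj[y] & I2)
        Z.update(common[:3])
    return set(X) | Z

# toy instance: cover {0, 1}, pendants 2 and 3 at 0, isolated 4 and 5,
# and five common neighbours 6..10 of 0 and 1
adj = {0: {2, 3, 6, 7, 8, 9, 10}, 1: {6, 7, 8, 9, 10}, 2: {0}, 3: {0},
       4: set(), 5: set(), 6: {0, 1}, 7: {0, 1}, 8: {0, 1}, 9: {0, 1}, 10: {0, 1}}
assert basic_kernel(adj, {0, 1}) == {0, 1, 2, 4, 6, 7, 8}
```

The "arbitrary" choices are made deterministic here by always taking the smallest candidates.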
It is easy to see that H does not keep the information about all matching cuts in G due to the deleted vertices. However, the crucial property is that H keeps all matching cuts of G′ = G − (I0 ∪ I1). Formally, we define H′ = H − (I0 ∪ I1) and show the following lemma.

Lemma 4. A set of edges M ⊆ E(G′) is a matching cut of G′ if and only if M is a matching cut of H′.

Proof. Suppose that M ⊆ E(G′) is a matching cut of G′ and assume that M = E_{G′}(A, B) for a partition {A, B} of V(G′). We show that M ⊆ E(H′). For the sake of contradiction, suppose that there is some edge uv ∈ M \ E(H′). This means that either u ∉ V(H′) or v ∉ V(H′). By symmetry, we can assume without loss of generality that u ∉ V(H′). Then u ∈ I≥2 \ Z, that is, u is an unmarked vertex of I≥2. Recall that every vertex of I≥2 has degree at least two. Hence, u has a neighbor w ≠ v. Because M is a matching cut, w ∈ A. Notice that w, v ∈ X. Because u is unmarked and adjacent to both w and v, there are three vertices z1, z2, z3 ∈ Z that are marked for the pair {w, v}. Since either A or B contains at least two of the vertices z1, z2, z3, either w or v has at least two neighbors in B or A, respectively. This contradicts that M is a matching cut and we conclude that M ⊆ E(H′). Since H′ is an induced subgraph of G′, M is a matching cut of H′.
For the opposite direction, assume that M is a matching cut of H′. Let M = E_{H′}(A, B) for a partition {A, B} of V(H′). We claim that for every v ∈ V(G′) \ V(H′), either v has all its neighbors in A or v has all its neighbors in B. The proof is by contradiction. Assume that v ∈ V(G′) \ V(H′) has a neighbor u ∈ A and a neighbor w ∈ B. Then v is an unmarked vertex of I≥2, and u, w ∈ X. We have that for the pair {u, w}, there are three marked vertices z1, z2, z3 ∈ I≥2 that are adjacent to both u and w. Since z1, z2, z3 are marked, z1, z2, z3 ∈ V(H′). In the same way as above, either A or B contains at least two of the vertices z1, z2, z3 and this implies that either u or w has at least two neighbors in the opposite set of the partition. This contradicts the assumption that M is a matching cut of H′. Since for every v ∈ V(G′) \ V(H′), either v has all its neighbors in A or v has all its neighbors in B, M is a cut of G′, that is, M is a matching cut of G′.
To see the relations between matching cuts of G and H, we define a special equivalence relation for the subsets of edges of G. For a vertex x ∈ X, let L_x = {xy ∈ E(G) | y ∈ I1}, that is, L_x is the set of pendant edges of G with exactly one end-vertex in the vertex cover. Observe that if L_x ≠ ∅, then there is ℓ_x ∈ L_x such that ℓ_x ∈ E(H), because for every x ∈ X, a neighbor in I1 is marked if it exists. We define L = ∪_{x∈X} L_x. Notice that each matching cut of G contains at most one edge of every L_x. We say that two sets of edges M1 and M2 are equivalent if M1 \ L = M2 \ L and, for every x ∈ X, M1 ∩ L_x ≠ ∅ if and only if M2 ∩ L_x ≠ ∅. It is straightforward to verify that the introduced relation is indeed an equivalence relation. It is also easy to see that if M is a matching cut of G, then every M′ ⊆ E(G) equivalent to M is a matching cut. We show the following lemma.

Lemma 5. Every matching cut (minimal matching cut, maximal matching cut, respectively) of G is equivalent to a unique matching cut (minimal matching cut, maximal matching cut, respectively) of H, and every matching cut (minimal matching cut, maximal matching cut, respectively) of H is a matching cut (minimal matching cut, maximal matching cut, respectively) of G.

Proof. We prove the lemma for matching cuts.
For the forward direction, let M be a matching cut of G. We show that there is a matching cut M′ of H that is a matching cut of G equivalent to M. If M = ∅, then G is disconnected. Notice that, by the construction, H is disconnected as well. Hence, M′ = M = ∅ is a matching cut of H. Clearly, M′ is equivalent to M. Assume that M ≠ ∅. We construct M′ from M by the following operation: for every x ∈ X such that M ∩ L_x ≠ ∅, replace the unique edge of M in L_x by ℓ_x. By the definition, M′ is a matching cut of G that is equivalent to M. Using Lemma 4 and the choice of the marked pendant edges, it is straightforward to verify that M′ is a matching cut of H. For minimal and maximal matching cuts, the arguments are the same. It is sufficient to note that if M and M̃ are matching cuts of G such that M ⊂ M̃, then their equivalent matching cuts M′ and M̃′ of H constructed in the proof for the forward direction satisfy the same inclusion property M′ ⊂ M̃′.
We use Lemma 5 to obtain our kernelization results. For Enum Minimal MC, we show that the problem admits a fully-polynomial enumeration kernel, and we prove that Enum Maximal MC and Enum MC have polynomial-delay enumeration kernels.

Theorem 5. For the parameterization by the vertex cover number k of the input graph, Enum Minimal MC admits a fully-polynomial enumeration kernel and Enum MC and Enum Maximal MC admit polynomial-delay enumeration kernels with at most 6k² + k + 1 vertices.

Proof. Let G be a graph with τ(G) = k. If G = K1, then the kernelization algorithm returns H = K1 and the solution-lifting algorithm is trivial as G has no matching cuts. Assume that G has at least 2 vertices. If G has no edges, then the empty set is the unique matching cut of G.
Then the kernelization algorithm returns H = 2K 1 , and the solution-lifting algorithm outputs the empty set for the empty matching cut of H. Thus, we can assume without loss of generality that G has at least one edge and k ≥ 1.
We use the same basic kernelization algorithm that constructs H as described above and output H for all the problems. Recall that |V(H)| ≤ 6k² + k + 1. The kernels differ only in the solution-lifting algorithms. These algorithms exploit Lemma 5 and, for every matching cut (minimal or maximal matching cut, respectively) M of H, they list the equivalent matching cuts of G. Lemma 5 guarantees that the families of matching cuts (minimal or maximal matching cuts, respectively) constructed for the matching cuts of H form a partition of the set of matching cuts (minimal or maximal matching cuts, respectively) of G. This is exactly the property that is required by the definition of a fully-polynomial (polynomial-delay) enumeration kernel. To describe the algorithm, we use the notation defined in this section.
First, we consider Enum Minimal MC. Let M be a minimal matching cut of H. If M ∩ L = ∅, then M is the unique matching cut of G that is equivalent to M, and our algorithm outputs M. Suppose that M ∩ L ≠ ∅. Then by the minimality of M, M = {ℓ_x} for some x ∈ X, because every edge of L is a matching cut. Then the sets {e} for every e ∈ L_x are exactly the matching cuts equivalent to M. Clearly, we have at most n such matching cuts and they can be listed in linear time. This implies that condition (ii) of the definition of a fully-polynomial enumeration kernel is fulfilled. Thus, Enum Minimal MC has a fully-polynomial enumeration kernel with at most 6k² + k + 1 vertices.
Next, we consider Enum Maximal MC and Enum MC. The solution-lifting algorithms for these problems are the same. Let M be a (maximal) matching cut of H. Let also Y = {x ∈ X | M ∩ L_x ≠ ∅} and M2 = M \ L. We use the recursive algorithm Enum Equivalent (see Algorithm 1) that takes as an input a matching S of G and W ⊆ Y and outputs the equivalent matching cuts M′ of G such that (i) S ⊆ M′, (ii) M′ is equivalent to M, and (iii) the constructed matchings M′ differ only by some edges of the sets L_x for x ∈ W. Initially, S = M2 and W = Y.
To enumerate the matching cuts equivalent to M , we call Enum Equivalent(M 2 , Y ). We claim that Enum Equivalent(M 2 , Y ) enumerates the matching cuts of G that are equivalent to M with O(n) delay.
By the definition of the equivalence and Lemma 5, every matching cut M′ of G that is equivalent to M can be written as M′ = M2 ∪ {e_x | x ∈ Y}, where e_x ∈ L_x for every x ∈ Y. Then to see the correctness of Enum Equivalent, observe the following. If W ≠ ∅, then the algorithm picks a vertex x ∈ W. Then for every edge e ∈ L_x, it enumerates the matching cuts containing S and e. This means that our algorithm is, in fact, a standard backtracking enumeration algorithm (see [33]), which immediately implies that the algorithm lists all the required matching cuts exactly once. Since the depth of the recursion is at most n and the algorithm always outputs a matching cut for each leaf of the search tree, the delay is O(n). This completes the proof of the polynomial-delay enumeration kernel for Enum Maximal MC and Enum MC.
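The backtracking scheme can be sketched as a generator (a hypothetical instance follows; S grows by one pendant edge per recursion level, so a solution is emitted at every leaf):

```python
def enum_equivalent(S, W, L):
    """Yield S extended by one pendant edge from L[x] for every x in W."""
    if not W:
        yield frozenset(S)
        return
    x, rest = W[0], W[1:]
    for e in L[x]:                    # branch over the pendant edges of x
        yield from enum_equivalent(S | {e}, rest, L)

# hypothetical instance: M2 = {('a', 'b')}, Y = ['x1', 'x2']
L = {'x1': [('x1', 'p1'), ('x1', 'p2')],
     'x2': [('x2', 'q1'), ('x2', 'q2'), ('x2', 'q3')]}
cuts = list(enum_equivalent({('a', 'b')}, ['x1', 'x2'], L))
assert len(cuts) == 6 and all(('a', 'b') in c for c in cuts)
```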
To conclude the proof of the theorem, let us remark that, formally, the solution-lifting algorithms described in the proof require X. However, in fact, we use only sets L x that can be computed in polynomial time for given G and H.
Notice that Theorem 5 is tight in the sense that Enum Maximal MC and Enum MC do not admit fully-polynomial enumeration kernels for the parameterization by the vertex cover number. To see this, let k be a positive integer and consider the n-vertex graph G, where n > k is divisible by k, that is the union of k stars K1,p for p = n/k − 1. Clearly, τ(G) = k. We observe that G has p^k = (n/k − 1)^k maximal matching cuts that are formed by picking one edge from each of the k stars. Similarly, G has (p + 1)^k = (n/k)^k matching cuts obtained by picking at most one edge from each star. In both cases, this means that the (maximal) matching cuts cannot be enumerated by an FPT algorithm. By Theorem 2, this rules out the existence of a fully-polynomial enumeration kernel.
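These counts can be confirmed on a small instance; for k = 2 and p = 2 (two disjoint stars K1,2 on six vertices), brute force gives (p + 1)^k = 9 matching cuts, of which p^k = 4 are maximal (a sketch; helper names are ours):

```python
def matching_cuts(n, edges):
    """All matching cuts: cuts E(A, B) over bipartitions {A, B} that are matchings."""
    cuts = set()
    for mask in range(1, 2 ** (n - 1)):        # vertex n-1 always stays in A
        side = [(mask >> v) & 1 for v in range(n - 1)] + [0]
        cut = [e for e in edges if side[e[0]] != side[e[1]]]
        ends = [v for e in cut for v in e]
        if len(ends) == len(set(ends)):
            cuts.add(frozenset(cut))
    return cuts

stars = [(0, 1), (0, 2), (3, 4), (3, 5)]       # centres 0 and 3
cuts = matching_cuts(6, stars)
maximal = [c for c in cuts if not any(c < d for d in cuts)]
assert len(cuts) == 9 and len(maximal) == 4
```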
By Theorems 5 and 2, we have that the minimal matching cuts of a graph G can be enumerated in 2^{O(τ(G)²)} · n^{O(1)} time by applying the enumeration algorithm from Theorem 4 to H. Similarly, the (maximal) matching cuts can be listed with 2^{O(τ(G)²)} · n^{O(1)} delay. We show that this running time can be improved and the dependence on the vertex cover number can be made single-exponential. Theorem 6. The minimal matching cuts of an n-vertex graph G can be enumerated in 2^{O(τ(G))} · n^{O(1)} time, and the (maximal) matching cuts of G can be enumerated with 2^{O(τ(G))} · n^{O(1)} delay.
Proof. Recall that our kernelization algorithm, which is the same for all the problems, given a graph G with τ(G) = k, constructs the graph H that gives a fully-polynomial (polynomial-delay) enumeration kernel by Theorem 5. By Definition 1, it is thus sufficient to show that H has 2^{O(k)} matching cuts that can be listed in 2^{O(k)} time. We do it using the structure of H, following the notation introduced in the beginning of the section.
Notice that every matching cut M of H can be written as M = {ℓ_x | x ∈ Y} ∪ M2 for some Y ⊆ X, where M2 is either empty or a nonempty matching cut of H′ = H − (I0 ∪ I1). Observe that because |X| ≤ 2k, H has at most 2^{2k} sets of edges of the form {ℓ_x | x ∈ Y} for some Y ⊆ X and all such sets can be listed in 2^{O(k)} time. Hence, it is sufficient to show that H′ has 2^{O(k)} matching cuts that can be enumerated in 2^{O(k)} time.
Let M be a nonempty matching cut of H′ and let {U, W} be a partition of V(H′) such that M = E(U, W). Recall that each vertex v ∈ V(H′) that is not a vertex of X belongs to I≥2, that is, has at least two neighbors in X. Therefore, U ∩ X and W ∩ X are nonempty. Since the empty matching cut can be listed separately if it exists, it is sufficient to enumerate matching cuts of the form M = E(U, W) with nonempty U ∩ X and W ∩ X. For this, we consider all partitions {A, B} of X, and for each partition, enumerate the matching cuts M = E(U, W), where A ⊆ U and B ⊆ W. Since |X| ≤ 2k, there are at most 2^{2k} partitions {A, B} of X and they can be listed in 2^{O(k)} · n^{O(1)} time.
Assume from now that a partition {A, B} of X is given. We construct a recursive branching algorithm Enumerate MC(A′, B′), whose input consists of two disjoint sets A′ and B′ such that A ⊆ A′ and B ⊆ B′, and the algorithm outputs all matching cuts of the form M = E(U, W) with A′ ⊆ U and B′ ⊆ W. It is convenient for us to write down the algorithm as a series of steps and reduction and branching rules as it is common for exact algorithms (see, e.g., [15]). The algorithm maintains the set S of the end-vertices of the edges of E(A′, B′) in X, and we implicitly assume that S is recomputed at each step if necessary. We say that S is the set of saturated vertices of X. Initially, S is the set of end-vertices of E(A, B).
If E(A′, B′) is not a matching, then E(A′, B′) cannot be extended to a matching cut and we can stop considering {A′, B′}.

Step 6.1. If E(A′, B′) is not a matching, then quit.
If {A′, B′} is a partition of V(H′), then the algorithm finishes its work. Note that since the algorithm did not quit in the previous step, E(A′, B′) is a matching.

Step 6.2. If {A′, B′} is a partition of V(H′) and M = E(A′, B′) is a matching cut, then output M and quit.
From now, we can assume that there is v ∈ V(H′) \ (A′ ∪ B′). Recall that by the construction of H′, v ∈ I≥2, that is, v has at least two neighbors in X.
Clearly, if v ∈ V(H′) \ (A′ ∪ B′) has at least two neighbors in A′, then v ∈ U for every matching cut E(U, W) with A′ ⊆ U. This gives us the following reduction rule.

Reduction Rule 6.3. If v ∈ V(H′) \ (A′ ∪ B′) has at least two neighbors in A′ (in B′, respectively), then call Enumerate MC(A′ ∪ {v}, B′) (Enumerate MC(A′, B′ ∪ {v}), respectively).
If v has at least two neighbors in both A′ and B′, we place v in A′; note that we stop in Step 6.1 afterwards. From now, we assume that the rule cannot be applied, that is, every vertex v ∈ V(H′) \ (A′ ∪ B′) has exactly one neighbor x in A′ and exactly one neighbor y in B′. Notice that either vx or vy should be in a (potential) matching cut. This gives the following rules.
Reduction Rule 6.4. If x ∈ S (y ∈ S, respectively), then call Enumerate MC(A′ ∪ {v}, B′) (Enumerate MC(A′, B′ ∪ {v}), respectively).

If both neighbors of v are saturated, we place v in A′; notice that we then stop in Step 6.1 in Enumerate MC(A′ ∪ {v}, B′). The rule is safe, because every saturated vertex of X can be incident to only one edge of a matching cut. From now, we can assume that the neighbors of v are not saturated. In this case, we branch on two possibilities for v.

Branching Rule 6.5. Call Enumerate MC(A′ ∪ {v}, B′) and Enumerate MC(A′, B′ ∪ {v}).

This finishes the description of the algorithm; its correctness follows directly from the discussion of the reduction and branching rules. To evaluate the running time, note that for every recursive call of Enumerate MC in Branching Rule 6.5, we increase S by including either x or y into the set of saturated vertices of X. Since |S| ≤ |X| ≤ 2k, the depth of the search tree is upper bounded by 2k. Because we have two branches in Branching Rule 6.5, the number of leaves in the search tree is at most 2^{2k}. Observing that all the steps and rules can be executed in polynomial time, we obtain that the total running time is 2^{O(k)} using the standard analysis of the running time of recursive branching algorithms (see [15]).
Since the search tree has at most 2^{2k} leaves, the number of matching cuts produced by the algorithm from the given partition {A, B} of X is at most 2^{2k}. Because the number of partitions is at most 2^{2k}, we have that H′ has at most 2^{O(k)} matching cuts. Then the number of matching cuts of H is 2^{O(k)}. Combining Enumerate MC with the previous steps for the enumeration of the matching cuts, we conclude that the matching cuts of H can be listed in 2^{O(k)} time.
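A simplified sketch of Enumerate MC (ours): it branches on every unassigned vertex and relies only on the matching test of Step 6.1 for pruning; the saturation rules of the paper sharpen this to the stated 2^{O(k)} bound. On C4 with X = {0, 1} and the partition A = {0}, B = {1}, it finds the two matching cuts:

```python
def is_matching(cut):
    ends = [v for e in cut for v in e]
    return len(ends) == len(set(ends))

def enumerate_mc(adj, A, B, free):
    """Yield all matching cuts E(U, W) with A ⊆ U and B ⊆ W (simplified branching)."""
    cut = {tuple(sorted((u, v))) for u in A for v in adj[u] if v in B}
    if not is_matching(cut):
        return                                   # Step 6.1: quit
    if not free:
        if cut:
            yield frozenset(cut)                 # Step 6.2: output
        return
    v, rest = free[0], free[1:]
    yield from enumerate_mc(adj, A | {v}, B, rest)
    yield from enumerate_mc(adj, A, B | {v}, rest)

# C4 = 0-2-1-3-0, vertex cover X = {0, 1}, free vertices 2 and 3
adj = {0: {2, 3}, 1: {2, 3}, 2: {0, 1}, 3: {0, 1}}
cuts = set(enumerate_mc(adj, {0}, {1}, [2, 3]))
assert cuts == {frozenset({(0, 2), (1, 3)}), frozenset({(0, 3), (1, 2)})}
```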
We complement Theorem 6 by the following lower bound that shows that exponential in τ (G) time for Enum Minimal MC is unavoidable.
Proposition 5. There is an infinite family of graphs G whose number of minimal matching cuts is Ω(2^{τ(G)}).
Proof. Consider the graph G consisting of k disjoint copies of the path P3 with vertices u_i, v_i, w_i for 1 ≤ i ≤ k, and two additional vertices s and t such that s is adjacent to each u_i and t is adjacent to each w_i. We call s u_i v_i w_i t for 1 ≤ i ≤ k a 5-path of this graph. The vertex cover number of G is k + 2. Now consider the matching cuts which contain at most one edge of each 5-path. Such a matching cut M contains also at least one edge of each 5-path: otherwise, s and t are connected via some 5-path and, since all other vertices remain connected to s or t, the graph stays connected after the removal of M.
Thus, each matching cut M that contains exactly one edge of each 5-path is minimal. Now consider any index set I ⊆ {1, . . . , k} and observe that M_I = {u_i v_i | i ∈ I} ∪ {v_i w_i | i ∈ {1, . . . , k} \ I} is a minimal matching cut. Since there are 2^k possibilities for I, the number of minimal matching cuts is Ω(2^k) = Ω(2^{τ(G)}).

A set of vertices X of a graph G is said to be a twin-cover of G if for every edge uv of G, at least one of the following holds: (i) u ∈ X or v ∈ X, or (ii) u and v are true twins. The twin-cover number, denoted by tc(G), is the minimum size of a twin-cover. Notice that tc(G) ≤ τ(G) and cw(G) ≤ tc(G) + 2 for every graph G [19, 21]. This means that the parameterization by the twin-cover number is weaker than the parameterization by the vertex cover number but stronger than the cliquewidth parameterization. It is convenient for us to consider a parameterization that is weaker than the twin-cover number. Let X = {X1, . . . , Xr} be the partition of V(G) into the classes of true twins. Note that X can be computed in linear time using an algorithm for computing a modular decomposition [39]. Then we can define the true-twin quotient graph 𝒢 with respect to X, that is, the graph with the node set X such that two classes of true twins X_i and X_j are adjacent in 𝒢 if and only if the vertices of X_i are adjacent to the vertices of X_j in G. Then it can be seen that tc(G) ≥ τ(𝒢). We prove that Enum Minimal MC admits a fully-polynomial enumeration kernel and Enum Maximal MC and Enum MC admit polynomial-delay enumeration kernels for the parameterization by the vertex cover number of the true-twin quotient graph of the input graph. In particular, this implies the same kernels for the twin-cover parameterization.
As the first step, we show the following corollary of Theorem 5.

Corollary 1. For graphs of the form G = Ĝ + sK2 with τ(Ĝ) ≤ k, Enum Minimal MC admits a fully-polynomial enumeration kernel and Enum MC and Enum Maximal MC admit polynomial-delay enumeration kernels with O(k²) vertices.

Proof. Let G = Ĝ + sK2 with τ(Ĝ) ≤ k. The claim is an immediate corollary of Theorem 5 if s = 0. Assume from now that s ≥ 1. For Enum Minimal MC, it is sufficient to observe that G is disconnected if s ≥ 1 and, therefore, the empty set is the unique minimal matching cut. Then the kernelization algorithm outputs 2K1 and the solution-lifting algorithm outputs the empty set.
For Enum MC and Enum Maximal MC, let G′ = Ĝ + K2. Denote by e the unique edge of the copy of K2. Observe that τ(G′) = τ(Ĝ) + 1 ≤ k + 1. We apply Theorem 5 to G′ and obtain a polynomial-delay enumeration kernel H with O(k²) vertices. It is easy to observe that every maximal matching cut of G′ contains e and every maximal matching cut of G contains all the edges of the s copies of K2. Then for Enum Maximal MC, we modify the solution-lifting algorithm as follows: for every maximal matching cut that the algorithm outputs for a maximal matching cut of H and the graph G′, we construct the matching cut of G by replacing e by the s edges of the s copies of K2 in G. For Enum MC, the modification of the solution-lifting algorithm is a bit more complicated. Let M be a matching cut produced by the solution-lifting algorithm for a matching cut of H and the graph G′. If e ∉ M, then the modified solution-lifting algorithm just outputs M. Otherwise, if e ∈ M, the solution-lifting algorithm outputs the matching cuts (M \ {e}) ∪ L, where L ranges over the nonempty subsets of the edges of the s copies of K2 in G. Since all nonempty subsets of a finite set can be enumerated with polynomial delay by very basic combinatorial tools (see, e.g., [33]), the obtained modification is a solution-lifting algorithm for H and G.
Using Corollary 1, we prove the following theorem.

Theorem 7. For the parameterization by the vertex cover number k of the true-twin quotient graph of the input graph, Enum Minimal MC admits a fully-polynomial enumeration kernel and Enum MC and Enum Maximal MC admit polynomial-delay enumeration kernels with O(k²) vertices.

Proof. Let G be a graph and let 𝒢 be its true-twin quotient graph with τ(𝒢) = k. Let also X = {X1, . . . , Xr} be the partition of V(G) into the classes of true twins.
Recall that X can be computed in linear time [39].
We apply the following series of reduction rules. All these rules use the property that if K is a clique of G of size at least three, then either K ⊆ A or K ⊆ B for every partition {A, B} of V(G) such that M = E(A, B) is a matching cut.

Reduction Rule 7.1. If there is i ∈ {1, . . . , r} such that |X_i| ≥ 4, then delete arbitrary |X_i| − 3 vertices of X_i.

To see that the rule is safe, let X_i′ be the clique obtained from X_i by the rule for some i ∈ {1, . . . , r}, and denote by G′ the obtained graph. We claim that M is a matching cut of G if and only if M is a matching cut of G′. Let M = E_G(A, B), where {A, B} is a partition of V(G). We have that either X_i ⊆ A or X_i ⊆ B. By symmetry, we can assume without loss of generality that X_i ⊆ A. Note that since |X_i| ≥ 2, the vertices of N_G(X_i) are in A. Otherwise, we would have a vertex v ∈ B with at least two neighbors in A. This implies that no edge of M is incident to a vertex of X_i and, therefore, M ⊆ E(G′). Since G′ is an induced subgraph of G, M is a matching cut of G′. For the opposite direction, the arguments are essentially the same. If {A′, B′} is a partition of V(G′) with M = E_{G′}(A′, B′), then we can assume without loss of generality that X_i′ ⊆ A′. Then N_{G′}(X_i′) ⊆ A′. This implies that for A = A′ ∪ X_i and B = B′, E_G(A, B) = E_{G′}(A′, B′) = M, that is, M is a matching cut of G. Summarizing, we conclude that the enumeration of matching cuts (minimal or maximal matching cuts, respectively) for G is equivalent to their enumeration for G′. This means that Reduction Rule 7.1 is safe.
We apply Reduction Rule 7.1 for all classes of true twins of size at least four. To simplify notation, we use G to denote the obtained graph and X1, . . . , Xr to denote the obtained classes of true twins. We have that |X_i| ≤ 3 for i ∈ {1, . . . , r}. We show that we can reduce the size of some classes even further. This is straightforward for classes X_i of size three that induce connected components of G.

Reduction Rule 7.2. If there is i ∈ {1, . . . , r} such that |X_i| = 3 and G[X_i] is a connected component of G, then delete two vertices of X_i.

The rule is safe, because a connected component that is a clique has no matching cut, and deleting two of its vertices preserves the disconnectedness of G.

Reduction Rule 7.3. If there is i ∈ {1, . . . , r} such that |X_i| ≥ 2 and N_G(X_i) consists of a single vertex y, then delete the vertices of X_i.

To prove safeness, assume that the rule is applied for X_i. Let G′ be the graph obtained by the deletion of X_i and let y be the unique vertex of N_G(X_i). We show that M is a matching cut of G if and only if M is a matching cut of G′. Assume first that M = E_G(A, B) is a matching cut of G for a partition {A, B} of V(G). Since X_i ∪ {y} is a clique of size at least three, either X_i ∪ {y} ⊆ A or X_i ∪ {y} ⊆ B. By symmetry, we can assume without loss of generality that X_i ∪ {y} ⊆ A. Then no edge of M is incident to a vertex of X_i. This implies that M ⊆ E(G′). Since G′ is an induced subgraph of G, M is a matching cut of G′. For the opposite direction, assume that M = E_{G′}(A′, B′) is a matching cut of G′ for a partition {A′, B′} of V(G′). We can assume without loss of generality that y ∈ A′. Then it is straightforward to see that M = E_G(A′ ∪ X_i, B′), that is, M is a matching cut of G. We obtain that Reduction Rule 7.3 is safe, because the enumeration of matching cuts (minimal or maximal matching cuts, respectively) for G is equivalent to their enumeration for G′.
We apply Reduction Rules 7.2 and 7.3 exhaustively for all classes X_i satisfying their conditions. As before, we use G to denote the obtained graph and X1, . . . , Xr to denote the obtained classes of true twins.
Finally, we reduce the size of some classes that have at least two neighbors. Recall that τ(𝒢) ≤ k for the quotient graph 𝒢 constructed for X = {X1, . . . , Xr}. We compute 𝒢 and use, say, the 2-approximation algorithm [22] to find a vertex cover Z of 𝒢 of size at most 2k. Let I = X \ Z. Recall that I is an independent set of 𝒢.

Reduction Rule 7.4. If there is i ∈ {1, . . . , r} such that X_i ∈ I, |X_i| ≥ 2, and |N_G(X_i)| ≥ 2, then delete arbitrary |X_i| − 1 vertices of X_i and make the vertices of N_G(X_i) pairwise adjacent by adding edges.
To show that the rule is safe, assume that X_i ∈ I, |X_i| ≥ 2, and |N_G(X_i)| ≥ 2 for some i ∈ {1, . . . , r} and the rule is applied for X_i. Denote by G′ the graph obtained by the application of the rule, and let x ∈ X_i be the vertex of X_i that remains in G′. We claim that M is a matching cut of G if and only if M is a matching cut of G′.
For the forward direction, let M = E_G(A, B) be a matching cut of G for a partition {A, B} of V(G). For every y ∈ N_G(X_i), Z_y = X_i ∪ {y} is a clique of size at least three in G. Therefore, either Z_y ⊆ A or Z_y ⊆ B. By symmetry, we can assume without loss of generality that Z_y ⊆ A for all y ∈ N_G(X_i), that is, N_G[X_i] ⊆ A and, moreover, the edges of M are not incident to the vertices of X_i. Therefore, M ⊆ E(G′) and the edges between the vertices of N_G[X_i] that may be added by the rule have their end-vertices in A. This implies that M is a matching cut of G′.
Assume now that M = E_{G′}(A′, B′) is a matching cut of G′ for a partition {A′, B′} of V(G′). Assume without loss of generality that x ∈ A′. Let also K = N_{G′}[x]; note that K is a clique of G′. Since |N_G(X_i)| ≥ 2, |K| ≥ 3. Hence, K ⊆ A′. Notice also that the edges of M are not incident to x and that none of them is an edge added by the rule. Hence, M ⊆ E(G) and M = E_G(A′ ∪ X_i, B′), that is, M is a matching cut of G. We conclude that the enumeration of matching cuts (minimal or maximal matching cuts, respectively) for G is equivalent to their enumeration for G′. Therefore, Reduction Rule 7.4 is safe.
We apply Reduction Rule 7.4 for the classes in I exhaustively. Denote by G * the obtained graph and let X * 1 , . . . , X * r be the constructed classes of true twins. We use I * to denote the family of sets of true twins obtained from the sets of I. Note that the sets of Z are not modified by Reduction Rule 7.4.
We show the following claim summarizing the properties of the obtained sets of true twins.
Claim 7.1. For every X*_i ∈ I*, either G*[X*_i] is a connected component of G* and |X*_i| = 2, or |X*_i| = 1; and for every X*_i ∈ Z, |X*_i| ≤ 3.
Proof of Claim 7.1. Let X*_i ∈ I*. If G*[X*_i] is a connected component of G*, then because Reduction Rule 7.2 is not applicable, |X*_i| ≤ 2. Assume that X*_i ∈ I* is not the set of vertices of a connected component. Then N_{G*}(X*_i) ≠ ∅. If |N_{G*}(X*_i)| = 1, then |X*_i| = 1, because Reduction Rule 7.3 cannot be applied. If |N_{G*}(X*_i)| ≥ 2, then |X*_i| = 1, because Reduction Rule 7.4 is not applicable. In both cases |X*_i| = 1 as required. Finally, if X*_i ∈ Z, then |X*_i| ≤ 3 because of Reduction Rule 7.1.
Let G1, . . . , Gs be the connected components of G* induced by the classes X*_i of size two, and let Ĝ = G* − ∪_{i=1}^s V(G_i), that is, G* = Ĝ + sK2. Claim 7.1 implies that every edge of Ĝ has at least one end-vertex in a class X*_i with X*_i ∈ Z. Since |X*_i| ≤ 3 for each X*_i ∈ Z, τ(Ĝ) ≤ 3|Z| ≤ 6k. Because Reduction Rules 7.1–7.4 are safe, the enumeration of matching cuts (minimal or maximal matching cuts, respectively) for the input graph is equivalent to their enumeration for G* = Ĝ + sK2. Because τ(Ĝ) ≤ 6k, we can apply Corollary 1. Since the initial partition of V(G) into the twin classes can be computed in linear time and each of the Reduction Rules 7.1–7.4 can be applied in polynomial time, we conclude that Enum Minimal MC admits a fully-polynomial enumeration kernel and Enum MC and Enum Maximal MC admit polynomial-delay enumeration kernels with O(k²) vertices.
Enumeration Kernels for the Neighborhood Diversity and Modular Width Parameterizations
The notion of the neighborhood diversity of a graph was introduced by Lampis [31] (see also [20]).
Recall that a set of vertices U ⊆ V(G) is a module of G if for every vertex v ∈ V(G) \ U, either v is adjacent to each vertex of U or v is non-adjacent to every vertex of U. The neighborhood decomposition of G is a partition of V(G) into modules such that every module is either a clique or an independent set. We call these modules clique or independent modules, respectively; note that a module of size one is both a clique module and an independent module. The size of a decomposition is the number of modules. The neighborhood diversity of a graph G, denoted nd(G), is the minimum size of a neighborhood decomposition. Note (see, e.g., [31, 39]) that the neighborhood diversity and the corresponding neighborhood decomposition can be computed in linear time. We show fully-polynomial (polynomial-delay) enumeration kernels for the matching cut problems parameterized by the neighborhood diversity. There are many similarities between the results in this subsection and Section 4. Hence, we will only sketch some proofs.
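Computing a neighborhood decomposition amounts to grouping twins; the small sketch below (our own helper, quadratic rather than linear time) merges u and v whenever N(u) \ {v} = N(v) \ {u}, and the number of resulting classes is nd(G):

```python
def twin_classes(adj):
    """Partition the vertices into twin classes; len(result) equals nd(G)."""
    parent = {v: v for v in adj}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for u in adj:
        for v in adj:
            if u < v and adj[u] - {v} == adj[v] - {u}:
                parent[find(u)] = find(v)
    classes = {}
    for v in adj:
        classes.setdefault(find(v), set()).add(v)
    return list(classes.values())

k23 = {0: {2, 3, 4}, 1: {2, 3, 4}, 2: {0, 1}, 3: {0, 1}, 4: {0, 1}}
p4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
assert len(twin_classes(k23)) == 2 and len(twin_classes(p4)) == 4
```

Each class is automatically a clique (true twins) or an independent set (false twins), so the classes form a valid neighborhood decomposition.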
Let G be a graph and let k = nd(G). The case when G has no edges is trivial and can be easily considered separately. From now, we assume that G has at least one edge.
Consider a minimum-size neighborhood decomposition U = {U1, . . . , Uk} of G. Let 𝒢 be the quotient graph for U, that is, U is the set of nodes of 𝒢 and two distinct nodes U_i and U_j are adjacent in 𝒢 if and only if the vertices of the modules U_i and U_j are adjacent in G. We call the elements of U nodes to distinguish them from the vertices of G. We say that a module U_i is trivial if U_i is an independent module and U_i is an isolated node of 𝒢. Notice that U can contain at most one trivial module. We call U_i a pendant module if U_i is an independent module of degree one in 𝒢 such that its unique neighbor U_j in 𝒢 has size one; we say that U_j is a subpendant module. Notice that each subpendant module is adjacent to exactly one pendant module and the pendant modules are pairwise nonadjacent in 𝒢.
As in Section 4, our kernelization algorithm is the same for all the considered problems. First, we mark some vertices.
(i) If U contains a trivial module, then mark one of its vertices.
(ii) For every pendant module U i , mark an arbitrary vertex of U i .
(iii) For every module U i that is neither trivial nor pendant, mark arbitrary min{3, |U i |} vertices.
Let W be the set of marked vertices. Note that since we marked at most three vertices in each module, |W | ≤ 3k. We define H = G[W ] and our kernelization algorithm returns H.
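The marking steps (i)-(iii) can be sketched as follows. This is an illustrative sketch only: the representation of modules as vertex lists and of the quotient graph as an index-to-neighbors map qadj are assumptions of the sketch, and for brevity the pendant test checks only degrees, not that the module is independent.

```python
def kernel_marking(modules, qadj):
    """Marking steps (i)-(iii): modules is a list of vertex lists,
    qadj[i] is the set of indices of modules adjacent to module i."""
    pendant = set()
    for i, U in enumerate(modules):
        # pendant: quotient degree one and the unique neighbor module
        # has size one (independence of U_i is not re-verified here)
        if len(qadj[i]) == 1:
            j = next(iter(qadj[i]))
            if len(modules[j]) == 1:
                pendant.add(i)
    marked = []
    for i, U in enumerate(modules):
        if len(qadj[i]) == 0 or i in pendant:   # trivial or pendant: one vertex
            marked.append(U[0])
        else:                                   # otherwise min(3, |U_i|) vertices
            marked.extend(U[:min(3, len(U))])
    return marked

# Star K_{1,3}: module {0} (subpendant) and module {1,2,3} (pendant).
W = kernel_marking([[0], [1, 2, 3]], {0: {1}, 1: {0}})
```

In the star example the marked set has two vertices, consistent with the bound |W| ≤ 3k for k = 2 modules.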
To see the relations between matching cuts of G and H, we show the analog of Lemma 4. For this, denote by G' the graph obtained from G by the deletion of the vertices of the trivial and pendant modules. The graph H' is obtained from H in the same way.
Lemma 6. A set of edges M ⊆ E(G') is a matching cut of G' if and only if M ⊆ E(H') and M is a matching cut of H'.
Proof. Suppose that M ⊆ E(G') is a matching cut of G' and assume that M = E G' (A, B) for a partition {A, B} of V (G'). We show that M ⊆ E(H'). For the sake of contradiction, assume that there is uv ∈ M with u ∈ A and v ∈ B such that uv ∉ E(H'). This means that u ∉ V (H') or v ∉ V (H'). By symmetry, we can assume without loss of generality that u ∉ V (H'). Then there is i ∈ {1, . . . , k} such that u ∈ U i . Since U i is not trivial and not pendant, U i contains three marked vertices u 1 , u 2 , u 3 . Notice that v ∉ U i , because, otherwise, U i would be a clique module and no matching cut can separate vertices of a clique of size at least 3. Observe also that u 1 , u 2 , u 3 ∈ B as, otherwise, v would have at least two neighbors in A. This implies that U i is an independent module, because u would have at least three neighbors in B otherwise. Suppose that u has a neighbor w ≠ v in G. Then because M is a matching cut, w ∈ A. However, u 1 , u 2 , u 3 ∈ B are adjacent to w, because these vertices are in the same module as u. This contradiction implies that v is the unique neighbor of u in G. Hence, v is the unique neighbor of every vertex of U i . This means that U i is a pendant module and {v} is the corresponding subpendant module. However, G' does not contain vertices of the pendant modules by the construction. This final contradiction proves that uv ∈ E(H'). Therefore, M ⊆ E(H'). Since H' is an induced subgraph of G', M is a matching cut of H'.
For the opposite direction, assume that M is a matching cut of H'. Let M = E H' (A, B) for a partition {A, B} of V (H'). We claim that for every v ∈ V (G') \ V (H'), either v has all its neighbors in A or v has all its neighbors in B. For the sake of contradiction, assume that there is v ∈ V (G') \ V (H') that has a neighbor u ∈ A and a neighbor w ∈ B. Note that v ∈ U i for some i ∈ {1, . . . , k} and |U i | ≥ 4. Then U i contains three marked vertices, and either at least two of these marked vertices are in A or at least two of them are in B. In the first case, w has at least two neighbors in A, and in the second case, u has at least two neighbors in B. In both cases, we have a contradiction with the assumption that M is a matching cut. Since for every v ∈ V (G') \ V (H'), either v has all its neighbors in A or all its neighbors in B, M is a matching cut of G'.
To proceed, we denote by X the set of vertices of the subpendant modules. By the definition of pendant and subpendant modules, for each x ∈ X, {x} is a subpendant module that is adjacent in G to a unique pendant module U . We define L x = {xu | u ∈ U }. Notice that for each x ∈ X, H contains a unique edge of L x , because exactly one vertex of U is marked. Let L = ∪ x∈X L x . Exactly as in Section 4, we say that two sets of edges M 1 and M 2 of G are equivalent if M 1 \ L = M 2 \ L and for every x ∈ X, |M 1 ∩ L x | = |M 2 ∩ L x |. Using exactly the same arguments as in the proof of Lemma 5, we show its analog. The lemma allows us to prove the main theorem of this section by the same arguments as in Theorem 5.
Theorem 8. Enum Minimal MC admits a fully-polynomial enumeration kernel and Enum MC and Enum Maximal MC admit polynomial-delay enumeration kernels with at most 3k vertices when parameterized by the neighborhood diversity k of the input graph.
Note that similarly to the parameterization by the vertex cover number, Enum Maximal MC and Enum MC do not admit fully-polynomial enumeration kernels for the parameterization by the neighborhood diversity as demonstrated by the example when G is the union of stars. Observe that the neighborhood diversity of the disjoint union of k stars K 1,n/k−1 is 2k.
Combining Theorems 8, 4 and 2, we obtain the following corollary.
Corollary 2. The minimal matching cuts of an n-vertex graph G can be enumerated in 2 O(nd(G)) · n O(1) time, and the (maximal) matching cuts of G can be enumerated with 2 O(nd(G)) ·n O(1) delay.
Observe that the neighborhood diversity of P n is n. By Observation 3, P n has F (n+1)−1 = F (nd(P n ) + 1) − 1 matching cuts. This immediately implies that the exponential dependence on nd(G) in the running time for Enum Minimal MC is unavoidable.
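The Fibonacci count of Observation 3 can be checked by brute force for small paths. The sketch below is illustrative only: it enumerates all vertex bipartitions of P_n, keeps the cuts that are matchings, and compares the count with F(n+1) − 1.

```python
def matching_cuts(n_vertices, edges):
    """Brute-force the matching cuts of a graph on vertices 0..n-1:
    sets M with M = E(A, B) for some partition {A, B} and M a matching."""
    cuts = set()
    for a_mask in range(1, 2 ** n_vertices - 1):
        A = {v for v in range(n_vertices) if a_mask >> v & 1}
        M = frozenset(e for e in edges if (e[0] in A) != (e[1] in A))
        covered = [v for e in M for v in e]
        if len(covered) == len(set(covered)):       # M is a matching
            cuts.add(M)
    return cuts

def fib(n):
    """Fibonacci numbers with F(1) = F(2) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(2, 9):
    path_edges = [(i, i + 1) for i in range(n - 1)]
    assert len(matching_cuts(n, path_edges)) == fib(n + 1) - 1
```

For a path every nonempty matching is a matching cut (color the components of P_n − M alternately), which is why the count coincides with the number of nonempty matchings of P_n.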
We conclude this section by considering the parameterization by the modular width, which is weaker than the neighborhood diversity parameterization but stronger than the clique-width parameterization.
The modular width of a graph G (see, e.g., [18]), denoted by mw(G), is the minimum positive integer k such that a graph isomorphic to G can be recursively constructed by the following operations:
• Constructing a single-vertex graph.
• The substitution operation with respect to some graph Q with 2 ≤ r ≤ k vertices v 1 , . . . , v r applied to r disjoint graphs G 1 , . . . , G r of modular width at most k; the substitution operation, which generalizes the disjoint union and the complete join, creates the graph G obtained from the disjoint union of G 1 , . . . , G r by joining every vertex of G i to every vertex of G j whenever v i v j ∈ E(Q).

The modular width of a graph G can be computed in polynomial (in fact, linear) time [18,39]. Notice that cw(G) − 1 ≤ mw(G) ≤ nd(G). We show that Enum Minimal MC admits a fully-polynomial enumeration kernel for the modular width parameterization.
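The substitution operation can be sketched directly from its definition. This illustrative sketch assumes graphs given as (vertex list, edge list) pairs with globally distinct vertex names; the function names are ours.

```python
def substitute(Q_edges, parts):
    """Substitution: replace node i of the quotient graph Q by the
    graph parts[i] and join parts[i] and parts[j] completely whenever
    ij is an edge of Q.  parts[i] = (vertices, edges)."""
    vertices, edges = [], []
    for vs, es in parts:                 # disjoint union of G_1, ..., G_r
        vertices.extend(vs)
        edges.extend(es)
    for i, j in Q_edges:                 # complete join along edges of Q
        edges.extend((u, v) for u in parts[i][0] for v in parts[j][0])
    return vertices, edges

# Q = K_2 gives the complete join; here: join of K_2 and 2K_1.
G = substitute([(0, 1)], [(['a', 'b'], [('a', 'b')]), (['c', 'd'], [])])
```

With Q = 2K_1 (no edges of Q) the operation degenerates to the disjoint union, matching the remark that substitution generalizes both operations.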
Theorem 9. Enum Minimal MC admits a fully-polynomial enumeration kernel with at most 6k vertices when parameterized by the modular width k of the input graph.
Proof. Let G be a graph with mw(G) = k. If G is disconnected, then the empty set is the unique matching cut of G. Then the kernelization algorithm outputs H = 2K 1 . The solution-lifting algorithm is trivial in this case. The case G = K 1 is also trivial. Assume from now on that G is a connected graph with at least two vertices. This implies that G is obtained by the substitution operation with respect to some connected graph Q with at most k vertices from graphs of modular width at most k. This means that G has a modular decomposition U = {U 1 , . . . , U r } for 2 ≤ r ≤ k, that is, a partition of V (G) into r modules. These modules can be computed in linear time [39]. For each i ∈ {1, . . . , r}, let X i ⊆ U i be the set of isolated vertices of G[U i ] and let Y i = U i \ X i . We show the following claim.

Claim 9.1. Let M = E G (A, B) be a matching cut of G for a partition {A, B} of V (G). Then for every i ∈ {1, . . . , r}, either Y i ⊆ A or Y i ⊆ B.

To see the claim, consider an edge uv of G[U i ]. Because G is connected and has at least two modules, there is w ∈ V (G) \ U i that is adjacent to u and v. Because u, v, and w compose a triangle in G that cannot be separated by a matching cut, we conclude that either u, v, w ∈ A or u, v, w ∈ B. This implies that for every connected component H' of G[Y i ], either V (H') ⊆ A or V (H') ⊆ B. Now suppose that H' and H'' are distinct components of G[Y i ]. Let uv ∈ E(H') and u'v' ∈ E(H''). By the same arguments as before, there is w ∈ V (G) \ U i that is adjacent to u, v, u', and v'. Since the triangles uvw and u'v'w are either in A or in B, we conclude that either Y i ⊆ A or Y i ⊆ B, and the claim follows.

We construct G' from G by making each set Y i a clique by adding edges. By Claim 9.1, M is a matching cut of G if and only if M is a matching cut of G'. Hence, the enumeration of minimal matching cuts of G is equivalent to the enumeration of minimal matching cuts of G'. Because every X i is an independent set and every Y i is a clique, nd(G') ≤ 2r ≤ 2k. This allows us to apply Theorem 8, which implies the existence of a fully-polynomial enumeration kernel for Enum Minimal MC with at most 6k vertices.
Notice that it is crucial for the fully-polynomial enumeration kernel for Enum Minimal MC parameterized by the modular width that the empty set is the unique matching cut of a disconnected graph. If we exclude empty matching cuts, then we can obtain the following conditional kernelization lower bound.

Proposition 6. The problem of enumerating nonempty matching cuts (minimal or maximal matching cuts, respectively) does not admit a polynomial-delay enumeration kernel of polynomial size when parameterized by the modular width of the input graph unless NP ⊆ coNP/ poly.
Proof. As with Proposition 2, it is sufficient to show that the decision version of the matching cut problem does not admit a polynomial kernel when parameterized by the modular width of the input graph unless NP ⊆ coNP/ poly. Let G 1 , . . . , G t be disjoint graphs of modular width at most k ≥ 2. Let G be the disjoint union of G 1 , . . . , G t . By the definition of the modular width, mw(G) ≤ k. Clearly, G has a nonempty matching cut if and only if there is i ∈ {1, . . . , t} such that G i has a matching cut. Since deciding whether a graph has a nonempty matching cut is NP-complete [7], the results of Bodlaender et al. [4] imply that the decision problem does not admit a polynomial kernel unless NP ⊆ coNP/ poly.

Proposition 6 indicates that it is unlikely that Enum MC and Enum Maximal MC have polynomial-delay enumeration kernels of polynomial size under the modular width parameterization. Notice, however, that Proposition 6 by itself does not imply a kernelization lower bound.
Enumeration Kernels for the Parameterization by the Feedback Edge Number
A set of edges S of a graph G is said to be a feedback edge set if G − S has no cycle, that is, G − S is a forest. The minimum size of a feedback edge set is called the feedback edge number or the cyclomatic number. We use fn(G) to denote the feedback edge number of a graph G. It is well-known (see, e.g., [13]) that if G is a graph with n vertices, m edges and r connected components, then fn(G) = m − n + r and a feedback edge set of minimum size can be found in linear time. Throughout this section, we assume that the input graph in an instance of Enum Minimal MC or Enum MC is given together with a feedback edge set. Equivalently, we may assume that the kernelization and solution-lifting algorithms are supplied with the same algorithm computing a minimum feedback edge set. Then this algorithm computes exactly the same set for the given input graph. Note that tw(G) ≤ fn(G) + 1, because a forest can be obtained from G by deleting an arbitrary end-vertex of each edge of a feedback edge set.
Our algorithms use the following folklore observation that we prove for completeness.
Observation 10. Let F be a forest. Let also n ≤1 be the number of vertices of degree at most one, n 2 be the number of vertices of degree two, and n ≥3 be the number of vertices of degree at least three. Then n ≥3 ≤ n ≤1 − 2.
Proof. Denote by n 0 the number of isolated vertices, and let n 1 be the number of vertices of degree one. Observe that |V (F )| = n 0 + n 1 + n 2 + n ≥3 and |E(F )| ≤ |V (F )| − 1 − n 0 . Since 2|E(F )| is the sum of the vertex degrees, we obtain that 2(n 1 + n 2 + n ≥3 − 1) ≥ n 1 + 2n 2 + 3n ≥3 , that is, n ≥3 ≤ n 1 − 2 ≤ n ≤1 − 2.

In contrast to the vertex cover number and the neighborhood diversity, Enum Minimal MC does not admit a fully-polynomial enumeration kernel in the case of the feedback edge number: let ℓ and k be positive integers and consider the graph H k,ℓ that is constructed as follows.
• For every i ∈ {1, . . . , k}, construct two vertices u i and v i and a (u i , v i )-path of length ℓ.
• Add the edges u i u i+1 and v i v i+1 for i ∈ {1, . . . , k − 1}, so that u 1 , . . . , u k and v 1 , . . . , v k compose paths of length k − 1.
Observe that H k,ℓ has at least ℓ^k minimal matching cuts composed by taking one edge from every (u i , v i )-path. Since H k,ℓ has n = k(ℓ + 1) vertices and fn(H k,ℓ ) = k − 1, the number of minimal matching cuts is at least (n/k − 1)^k. This immediately implies that the minimal matching cuts cannot be enumerated in FPT time. In particular, Enum Minimal MC cannot have a fully-polynomial enumeration kernel by Theorem 2. However, this problem and Enum MC admit polynomial-delay enumeration kernels. The kernels for the problems are similar but the kernel for Enum MC requires some technical details that do not appear in the kernel for Enum Minimal MC. Therefore, we consider Enum Minimal MC separately even if some parts of the proof of the following theorem will be repeated later.
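The lower-bound construction can be verified by brute force for small k and ℓ. The sketch below is illustrative only (representation and function names are ours): it builds H_{k,ℓ}, checks fn = m − n + r = k − 1, and checks that every selection of one edge per (u_i, v_i)-path is indeed a matching cut; minimality is argued in the text.

```python
from itertools import product

def build_H(k, ell):
    """H_{k,ell}: for each i, a (u_i, v_i)-path with ell edges; the u_i's
    and the v_i's each compose a path on k vertices."""
    edges, paths = [], []
    for i in range(k):
        pv = [('u', i)] + [('p', i, j) for j in range(1, ell)] + [('v', i)]
        paths.append(list(zip(pv, pv[1:])))
        edges += paths[-1]
    edges += [(('u', i), ('u', i + 1)) for i in range(k - 1)]
    edges += [(('v', i), ('v', i + 1)) for i in range(k - 1)]
    return edges, paths

def is_matching_cut(vertices, edges, M):
    """Brute force: M = E(A, B) for some partition and M is a matching."""
    M = set(M)
    for mask in range(1, 2 ** len(vertices) - 1):
        A = {v for b, v in enumerate(vertices) if mask >> b & 1}
        cut = {e for e in edges if (e[0] in A) != (e[1] in A)}
        covered = [v for e in cut for v in e]
        if cut == M and len(covered) == len(set(covered)):
            return True
    return False

k, ell = 2, 3
edges, paths = build_H(k, ell)
vertices = sorted({v for e in edges for v in e})
assert len(vertices) == k * (ell + 1)
assert len(edges) - len(vertices) + 1 == k - 1          # fn = m - n + r
# one edge per (u_i, v_i)-path: all ell**k selections are matching cuts
assert all(is_matching_cut(vertices, edges, M) for M in product(*paths))
```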
Theorem 11. Enum Minimal MC admits a polynomial-delay enumeration kernel with O(k) vertices when parameterized by the feedback edge number k of the input graph.
Proof. Let G be a graph with fn(G) = k and a feedback edge set S of size k. If G is disconnected, then the empty set is the unique minimal matching cut. Accordingly, the kernelization algorithm returns 2K 1 and the solution-lifting algorithm outputs the empty set for the empty matching cut of 2K 1 . If G = K 1 , then the kernelization algorithm simply returns G. If G is a tree with at least one edge, then the kernelization algorithm returns K 2 . Then for the unique matching cut of K 2 , the solution-lifting algorithm outputs n − 1 minimal matching cuts of G composed by single edges. Clearly, this can be done in O(n) time. We assume from now that G is a connected graph distinct from a tree, that is, S = ∅.
If G has a vertex of degree one, then pick an arbitrary such vertex u* . Let e* be the edge incident to u* . We iteratively delete vertices of degree at most one distinct from u* . Denote by G' the obtained graph. Notice that G' has at most one vertex of degree one and if such a vertex exists, then this vertex is u* . Observe also that S is a minimum feedback edge set of G' . Let T = G' − S. Clearly, T is a tree. Notice that T has at most 2|S| + 1 ≤ 2k + 1 leaves. By Observation 10, T has at most 2k − 1 vertices of degree at least three. Denote by X the set of vertices of T that either are end-vertices of the edges of S, or have degree one, or have degree at least three. Because every vertex of T of degree one is either u* or an end-vertex of some edge of S, we have that |X| ≤ 4k. By the construction, every vertex v of G' of degree two is an inner vertex of an (x, y)-path P such that x, y ∈ X and the inner vertices of P are outside X. Moreover, for every two distinct x, y ∈ X, G' has at most one (x, y)-path P xy with all its inner vertices outside X. We denote by P the set of all such paths. We say that an edge of P xy is the x-edge if it is incident to x and is the y-edge if it is incident to y; the other edges are said to be middle edges of P xy . We say that P xy is long if P xy has length at least four. Then we apply the following reduction rule exhaustively.
Reduction Rule 11.1. If there is a long path P xy ∈ P for some x, y ∈ X, then contract an arbitrary middle edge of P xy .
Denote by H the obtained graph. Denote by P' the set of paths obtained from the paths of P; we use P' xy to denote the path obtained from P xy ∈ P.
Our kernelization algorithm returns H together with S.
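The preprocessing that produces G' and X can be sketched as follows. This is an illustrative sketch under our own representation assumptions (adjacency dictionaries, S as an edge list); the exhaustive application of Reduction Rule 11.1 then amounts to shortening every long (x, y)-path to length three.

```python
def trim(adj, keep=()):
    """Iteratively delete vertices of degree at most one, except those
    in `keep`.  adj maps vertices to sets of neighbors; a copy is used."""
    adj = {v: set(nb) for v, nb in adj.items()}
    queue = [v for v in adj if len(adj[v]) <= 1 and v not in keep]
    while queue:
        v = queue.pop()
        if v not in adj or len(adj[v]) > 1 or v in keep:
            continue
        for u in adj[v]:
            adj[u].discard(v)
            if len(adj[u]) <= 1 and u not in keep:
                queue.append(u)
        del adj[v]
    return adj

def branch_vertices(adj, S):
    """X: end-vertices of the edges of S together with the vertices of
    degree one or at least three (i.e., degree different from two)."""
    X = {v for e in S for v in e}
    X |= {v for v, nb in adj.items() if len(nb) != 2}
    return X

# Triangle 0-1-2 with a pendant path 2-3-4: trimming removes 3 and 4.
tri = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
Gp = trim(tri)
```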
To see that H has the required size, notice that H − E(G[X]) is a forest. Moreover, by the construction, the vertices of degree at most one of this forest are in X. This implies that |P'| ≤ |X| − 1. Because |X| ≤ 4k, |P'| ≤ 4k − 1. Since P' does not contain long paths, every path of P' contains at most two inner vertices. Therefore, |V (H)| ≤ |X| + 2|P'| ≤ 4k + 2(4k − 1) ≤ 12k. This means that H has the required size.
To construct the solution-lifting algorithm, we need some properties of minimal matching cuts of G. Observe that the set M of all minimal matching cuts of G can be partitioned into three (possibly empty) subsets M 1 , M 2 , and M 3 as follows.
• Every edge of E(G) \ E(G') is a bridge of G and, therefore, forms a minimal matching cut of G. We define M 1 to be the set of these matching cuts, that is, M 1 = {{e} | e ∈ E(G) \ E(G')}.
• For every P xy ∈ P, every minimal matching cut contains at most two edges of P xy . Moreover, every two edges of P xy with distinct end-vertices form a minimal matching cut of G, unless the edges of P xy are bridges of G. We define M 2 = {{e 1 , e 2 } | {e 1 , e 2 } is a minimal matching cut of G such that e 1 , e 2 ∈ E(P xy ) for some P xy ∈ P}.
• The remaining minimal matching cuts compose M 3 . Notice that for every matching cut M ∈ M 3 , (i) M ⊆ E(G') and M is a minimal matching cut of G', and (ii) for every P xy ∈ P, |M ∩ E(P xy )| ≤ 1.
We use this partition of M in our solution-lifting algorithm. For this, we define M' to be the set of minimal matching cuts of H. We also consider the partition of M' into M' 2 and M' 3 , where M' 2 is the set of all minimal matching cuts of H formed by two edges of some P' xy ∈ P' and M' 3 = M' \ M' 2 . Similarly to M 3 , we have that for every M ∈ M' 3 , M is a minimal matching cut of H and for every P' xy ∈ P', |M ∩ E(P' xy )| ≤ 1. Observe that, in contrast to M, the set M' is partitioned into two sets only, as M' does not contain matching cuts corresponding to the cuts of M 1 . Notice that by our assumption the input graph is given together with S and S ⊆ E(H). This allows us to find u* and e* (if they exist), as u* is the unique vertex of degree one in G'. Then we can recompute X and the sets of paths P and P' of G' and H, respectively, in polynomial time. Hence, we can assume that the solution-lifting algorithm has access to these sets.
First, we deal with M 1 . Notice that if M 1 ≠ ∅, then G has at least one vertex of degree one. This means that H contains u* and e* . Recall that u* is a vertex of degree one and e* is the edge incident to u* . Observe that {e* } is a minimal matching cut in both G and H, and {e* } ∈ M 3 and {e* } ∈ M' 3 . Given the minimal matching cut {e* } of H, the solution-lifting algorithm outputs {e* } and then the minimal matching cuts of M 1 . Clearly, |M 1 | ≤ n and the elements of M 1 can be listed with constant delay.
Next, we consider M 2 . If M 2 ≠ ∅, then there is P xy ∈ P of length at least three such that {e 1 , e 2 } is a minimal matching cut of G for some e 1 , e 2 ∈ E(P xy ). Notice that the corresponding path P' xy ∈ P' in H has length three and the x- and y-edges of P' xy form a minimal matching cut of H. Moreover, this is the unique minimal matching cut of H formed by two edges of P' xy . Given a minimal matching cut {e' 1 , e' 2 } ∈ M' 2 of H such that e' 1 and e' 2 are the x- and y-edges of some path P' xy ∈ P', the solution-lifting algorithm outputs the matchings {e 1 , e 2 }, where e 1 , e 2 ∈ E(P xy ) have distinct end-vertices. Notice that we have at most n^2 such matchings and they can be enumerated with polynomial delay. It is straightforward to verify that for every minimal matching cut of M' 2 , the solution-lifting algorithm outputs a nonempty set of minimal matching cuts of G, the matching cuts listed for distinct elements of M' 2 are distinct, and the union of all produced minimal matching cuts over all elements of M' 2 gives M 2 .
Finally, we consider M' 3 . Recall that for every M ∈ M 3 , M is a minimal matching cut of G' and |M ∩ E(P xy )| ≤ 1 for every P xy ∈ P. Recall that if G has a vertex of degree one, then the vertex u* and the edge e* are in G' and H, and it holds that {e* } is a minimal matching cut of both G and H that belongs to M 3 and M' 3 . Note that {e* } is the unique matching cut in M 3 that is equivalent to {e* }. Recall that we already explained the output of the solution-lifting algorithm for {e* }. In particular, the algorithm outputs {e* } ∈ M 3 .
Assume that M ∈ M' 3 is distinct from {e* } (or e* does not exist). The solution-lifting algorithm lists all minimal matching cuts M' of M 3 that are equivalent to M . For this, we use the recursive branching algorithm Enum Equivalent that takes as an input a matching L of G' and a path set R ⊆ P and outputs the matching cuts M' of G' such that (i) L ⊆ M' , (ii) M' is equivalent to M , and (iii) the constructed matchings M' differ only by some edges of the paths P xy ∈ R. In other words, the algorithm extends the partial matching cut by adding edges from the paths of R. To initiate the computations, we construct the initial matching L' of G' and the initial set of paths R' ⊆ P as follows. First, we set L' := M ∩ E(G[X]) and R' := ∅. Then for each P' xy ∈ P' we do the following:
• if the x- or y-edge e of P' xy is in M , then set L' := L' ∪ {e},
• if the middle edge of P' xy is in M , then set R' := R' ∪ {P xy }.
Observe that by the definition of equivalent matching cuts, a minimal matching cut M' is equivalent to M if and only if M' can be constructed from L' by adding one middle edge of every path P xy ∈ R'. Then calling Enum Equivalent(L', R') solves the enumeration problem. Like Algorithm 1 in the proof of Theorem 5, this algorithm is a standard backtracking enumeration algorithm. The depth of the recursion is upper-bounded by n. This implies that Enum Equivalent(L', R') enumerates all minimal matching cuts M' ∈ M 3 that are equivalent to M with polynomial delay.
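Since every enumerated cut is the initial matching plus exactly one middle edge per path of the set R', the backtracking scheme of Enum Equivalent can be sketched as a simple generator (an illustrative sketch; the representation of paths by their middle-edge lists is our assumption).

```python
def enum_equivalent(L, R, middle_edges):
    """Backtracking enumeration in the style of Enum Equivalent: every
    output is the matching L plus exactly one middle edge of each path
    in R.  middle_edges[p] lists the middle edges of path p; the
    recursion depth is |R|, so consecutive outputs are produced with
    polynomial delay."""
    if not R:
        yield set(L)
        return
    p, rest = R[0], R[1:]
    for e in middle_edges[p]:           # branch on the choice for path p
        yield from enum_equivalent(L + [e], rest, middle_edges)

# Two paths with 2 and 1 middle edges: 2 * 1 equivalent cuts.
cuts = list(enum_equivalent(['x'], ['P1', 'P2'], {'P1': ['a', 'b'], 'P2': ['c']}))
```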
Summarizing the considered cases, we obtain that if the edge e* exists, the solution-lifting algorithm enumerates all minimal matching cuts of M 1 and {e* }, and if e* does not exist, then M 1 is empty. Then for every minimal matching cut M ∈ M' 2 , the solution-lifting algorithm enumerates the corresponding minimal matching cuts of M 2 ; the minimal matching cuts of G generated for distinct minimal matching cuts of H are distinct, and every minimal matching cut in M 2 is generated for some minimal matching cut from M' 2 . Finally, for every minimal matching cut M ∈ M' 3 distinct from {e* }, we enumerate equivalent minimal matching cuts of G. In this case, we also have that the minimal matching cuts of G generated for distinct minimal matching cuts of H are distinct, and every minimal matching cut in M 3 is generated for some minimal matching cut from M' 3 . We conclude that the solution-lifting algorithm satisfies condition (ii*) of the definition of a polynomial-delay enumeration kernel.
For Enum MC, we need the following observation that follows from the results of Courcelle [8] similarly to Proposition 1. For this, we note that the results of [8] are shown for the extension of MSOL called counting monadic second-order logic (CMSOL). For all integer constants p ≥ 1 and q ≥ 0, CMSOL includes a predicate Card p,q (X) for a set variable X which tests whether |X| ≡ q (mod p). Also, we can apply the results for labeled graphs whose vertices and/or edges have labels from a fixed finite set.

Observation 12. Let φ be a CMSOL formula with a free edge set variable. Then for every labeled graph G of bounded treewidth, the sets of edges satisfying φ can be enumerated with polynomial delay.

Theorem 13. Enum MC admits a polynomial-delay enumeration kernel with O(k) vertices when parameterized by the feedback edge number k of the input graph.

Proof. Let G be a graph with fn(G) = k and a feedback edge set S of size k. It is convenient to consider the case when G is a forest separately. If G = K 1 , then the kernelization algorithm returns K 1 and the solution-lifting algorithm is trivial. If G has at least two vertices, then the kernelization algorithm returns 2K 1 that has the unique empty matching cut. Then, given this empty matching cut, the solution-lifting algorithm lists all matching cuts of G with polynomial delay using Observation 12 (or Proposition 1). We assume from now on that G is not a forest. In particular, S ≠ ∅.
If G has one or more connected components that are trees, we select an arbitrary vertex v* of these components. If G has a connected component that contains a vertex of degree one and is not a tree, then arbitrarily select such a vertex u* of degree one and denote by e* the edge incident to u* . Then we iteratively delete vertices of degree at most one distinct from u* and v* . Denote by G' the obtained graph. Notice that G' has at most one isolated vertex (the vertex v* ) and at most one vertex of degree one (the vertex u* ). Observe also that S is a minimum feedback edge set of G' . Let T = G' − S. Clearly, T is a forest. Notice that T has at most 2|S| + 2 ≤ 2k + 2 vertices of degree at most one. By Observation 10, T has at most 2k vertices of degree at least three. Denote by X the set of vertices of T that either are end-vertices of the edges of S, or have degree one, or have degree at least three. Because every vertex of T of degree at most one is either u* , v* , or an end-vertex of some edge of S, we have that |X| ≤ 4k + 2.
In the same way as in the proof of Theorem 11, we have that every vertex v of G' of degree two is an inner vertex of an (x, y)-path P such that x, y ∈ X and the inner vertices of P are outside X. Moreover, for every two distinct x, y ∈ X, G' has at most one (x, y)-path P xy with all its inner vertices outside X. We denote by P the set of all such paths. We say that an edge of P xy is the x-edge if it is incident to x and is the y-edge if it is incident to y. We say that an edge e of P xy is a second x-edge (a second y-edge, respectively) if e has a common end-vertex with the x-edge (with the y-edge, respectively). The edges that are distinct from the x-edge, the second x-edge, the y-edge, and the second y-edge are called middle. We say that P xy is long if P xy has length at least six; otherwise, P xy is short.
We exhaustively apply the following reduction rule.
Reduction Rule 13.1. If there is a long path P xy ∈ P for some x, y ∈ X, then contract an arbitrary middle edge of P xy .
Let H be the graph obtained from G' by the exhaustive application of Reduction Rule 13.1. We also denote by P' the set of paths obtained from the paths of P; we use P' xy to denote the path obtained from P xy ∈ P.
Our kernelization algorithm returns H together with S.
To upper-bound the size of H, notice that H − E(G[X]) is a forest such that its vertices of degree at most one are in X. This implies that |P'| ≤ |X| − 1 ≤ 4k + 1. Because H has no long paths, each path P' xy ∈ P' has at most four inner vertices, and the total number of inner vertices of all the paths of P' is at most 4|P'| ≤ 16k + 4. Then |V (H)| ≤ |X| + 4|P'| ≤ 20k + 6, implying that H has the required size.
For the construction of the solution-lifting algorithm, recall that by our assumption the input graph is given together with S and S ⊆ E(H). Then we can identify v* , u* , and e* in G and H, and then we can recompute the set X. Next, we can compute the sets of paths P and P' of G' and H, respectively, in polynomial time. This allows us to assume that the solution-lifting algorithm has access to these sets.
To describe the solution-lifting algorithm, we partition the set of matching cuts of G into two sets: M 1 , the set of matching cuts M with M ∩ E(G') = ∅, and M 2 , the set of matching cuts M with M ∩ E(G') ≠ ∅; note that the cuts of M 1 are matchings of F = G − E(G'). Suppose that H has the empty matching cut. Then the solution-lifting algorithm, given this matching cut of H, outputs the matching cuts of M 1 . Notice that M 1 ≠ ∅, because M 1 contains the empty matching cut. The solution-lifting algorithm outputs the empty matching cut and all nonempty matchings of F using Observation 12.
Assume now that H is connected. Then G is connected as well and M 1 ≠ ∅ if and only if F ≠ ∅. By the construction of G', if F is not empty, then G has a vertex of degree one. In particular, the kernelization algorithm selects u* and e* in this case. Notice that e* is a bridge of G, and it holds that {e* } is a matching cut of both G and H. Observe also that {e* } ∈ M 2 . This matching cut is generated by the solution-lifting algorithm for the cut {e* } of H: when the algorithm finishes listing the matching cuts of M 2 for {e* }, it switches to the listing of all nonempty matchings of F . This can be done with polynomial delay by Observation 12.
Next, we analyze the matching cuts of M 2 . By definition, a matching cut M of G is in M 2 if M ∩ E(G') ≠ ∅. This means that M ∩ E(G') is a matching cut of G', and for a nonempty matching M of G, M ∈ M 2 if and only if M ∩ E(G') is a nonempty matching cut of G'. We exploit this property: the solution-lifting algorithm lists nonempty matching cuts of G' and then, for each matching cut of G', it outputs all its possible extensions by matchings of F . For this, we define the following relation between matching cuts of H and matchings of G'. Let M be a nonempty matching cut of H and let M' be a nonempty matching of G' (note that we do not require M' to be a matching cut). We say that M' is equivalent to M if the following holds: (i) M ∩ E(G[X]) = M' ∩ E(G[X]). (ii) For every P xy ∈ P such that P xy is short, M ∩ E(P xy ) = M' ∩ E(P xy ) (note that P' xy = P xy in this case).
(iii) For every long P xy ∈ P, the sets M ∩ E(P' xy ) and M' ∩ E(P xy ) satisfy conditions (a)-(d) (note that e x , e y are the second x-edge and the second y-edge of P' xy , because P' xy is constructed by contracting some middle edges of P xy ).
Denote by Y the set of vertices of G' that are not inner vertices of the long paths of P. We use the properties of the relation summarized in the following claim.

Claim 13.1. The following statements hold:
(i) for every nonempty matching cut M of H, there is a nonempty matching M' of G' that is equivalent to M ;
(ii) every nonempty matching M' of G' that is equivalent to a nonempty matching cut M of H is a matching cut of G';
(iii) for every nonempty matching cut M' of G', there is at most one nonempty matching cut M of H such that M' is equivalent to M ;
(iv) for every nonempty matching cut M' of G', there is a nonempty matching cut M of H that is equivalent to M'.

To show (i), let M be a nonempty matching cut of H. First, we include in M' the edges of M ∩ E(G[Y ]). Then for every long path P xy ∈ P we do the following.
• If M contains the x-edge or the second x-edge (the y-edge or the second y-edge, respectively) e of P' xy , then include e in M' .
• If M contains the middle edge of P' xy , then include in M' an arbitrary middle edge of P xy .
It is straightforward to verify that M' is a matching of G' and M' is equivalent to M . This shows (i).

To show (ii), let M' be a nonempty matching of G' that is equivalent to a nonempty matching cut M of H, and let M = E H (A, B) for a partition {A, B} of V (H). Recall that the vertices of V (G') \ Y are inner vertices of long paths P xy ∈ P; we construct a partition {A', B'} of V (G') with M' = E G' (A', B'), starting from A' = A ∩ Y and B' = B ∩ Y . Consider a long path P xy ∈ P. We use the property that the numbers of edges of M in P' xy and of M' in P xy have the same parity. Suppose that x ∈ A and y ∈ B. Let Q 1 , . . . , Q r be the connected components of P' xy − M listed with respect to the path order in which they occur in P' xy starting from x. Then r is even and V (Q i ) ⊆ A if i is odd and V (Q i ) ⊆ B if i is even. Therefore, |M ∩ E(P' xy )| is odd. Then |M' ∩ E(P xy )| is odd, and for the connected components R 1 , . . . , R s of P xy − M' , s is even. Assume that the connected components are listed with respect to the path order induced by P xy . We modify A' and B' by setting A' := A' ∪ V (R i ) for odd i ∈ {1, . . . , s} and B' := B' ∪ V (R i ) for even i ∈ {1, . . . , s}. By this construction, E G' (V (P xy ) ∩ A' , V (P xy ) ∩ B' ) = M' ∩ E(P xy ). The case when x and y are in the same set A or B is analyzed by the same parity arguments. We conclude that by going over all long paths P xy ∈ P, we construct A' and B' such that M' = E G' (A' , B' ). This completes the proof of (ii).
From now on, we can assume that each nonempty matching of G' that is equivalent to a nonempty matching cut of H is a matching cut of G'.
We show (iii) by contradiction. Assume that there are two distinct nonempty matching cuts M 1 and M 2 of H and a nonempty matching cut M' of G' such that M' is equivalent to both M 1 and M 2 . The definition of equivalency implies that there is a long path P xy ∈ P such that M 1 ∩ E(P' xy ) ≠ M 2 ∩ E(P' xy ). Notice that both sets are nonempty and the sizes of both sets have the same parity. We consider two cases depending on the parity. Let e x denote the x-edge, e' x the second x-edge, e y the y-edge, e' y the second y-edge, and e the unique middle edge of P' xy .
Suppose that |M 1 ∩ E(P' xy )| is odd. If |M 1 ∩ E(P' xy )| = 3, then M 1 ∩ E(P' xy ) = {e x , e, e y }, because {e x , e, e y } is the unique matching of P' xy of size three. Since M 1 ∩ E(P' xy ) ≠ M 2 ∩ E(P' xy ), we obtain that |M 2 ∩ E(P' xy )| = 1. In particular, either e x ∉ M 2 or e y ∉ M 2 . Assume by symmetry that e x ∉ M 2 . However, by condition (iii)(a) of the definition of equivalency, the x-edge of P xy is in M' if and only if the x-edge of P' xy is in M 1 (in M 2 , respectively); a contradiction. Hence, we can assume that |M 1 ∩ E(P' xy )| = |M 2 ∩ E(P' xy )| = 1. If either M 1 ∩ E(P' xy ) or M 2 ∩ E(P' xy ) consists of the x-edge or the y-edge of P' xy , we use the same arguments as above. This means that both M 1 and M 2 contain a unique edge of P' xy that belongs to {e' x , e, e' y }. Suppose that e' x ∈ M 1 but e' x ∉ M 2 . However, this contradicts (iii)(d). By symmetry, we obtain that e' x ∉ M 1 , M 2 and e' y ∉ M 1 , M 2 . This means that M 1 ∩ E(P' xy ) = M 2 ∩ E(P' xy ) = {e}, contradicting that the sets are distinct.
Assume that |M 1 ∩ E(P' xy )| is even. Since M 1 ∩ E(P' xy ) and M 2 ∩ E(P' xy ) are nonempty, |M 1 ∩ E(P' xy )| = |M 2 ∩ E(P' xy )| = 2. As we already observed in the previous case, e x ∈ M 1 (e y ∈ M 1 , respectively) if and only if e x ∈ M 2 (e y ∈ M 2 , respectively). In particular, if M 1 ∩ E(P' xy ) = {e x , e y }, then M 2 ∩ E(P' xy ) = {e x , e y }; a contradiction. Notice that if e x , e y ∉ M 1 , then M 1 ∩ E(P' xy ) = {e' x , e' y }. Then we obtain that M 2 ∩ E(P' xy ) = {e' x , e' y }; a contradiction. This implies that either e x ∈ M 1 , M 2 and e y ∉ M 1 , M 2 or, symmetrically, e x ∉ M 1 , M 2 and e y ∈ M 1 , M 2 . In both cases, |M 1 ∩ {e' x , e, e' y }| = |M 2 ∩ {e' x , e, e' y }| = 1 and we obtain a contradiction using (iii)(d) in exactly the same way as in the case when |M 1 ∩ E(P' xy )| is odd. This completes the proof of (iii).
Finally, we show (iv). Let M' be a nonempty matching cut of G'. We show that H has a nonempty matching cut M that is equivalent to M'. We construct M as follows. First, we include in M the edges of M' ∩ E(G[Y ]). Then for every long path P xy ∈ P, we do the following. Denote by e x the x-edge, by e' x the second x-edge, by e y the y-edge, by e' y the second y-edge, and by e the unique middle edge of P xy .
• If e x ∈ M' (e y ∈ M', respectively), then include e x (e y , respectively) in M .
• If e x , e y ∈ M' and |M' ∩ E(P xy )| is odd, then include e in M . If e x , e y ∈ M' and |M' ∩ E(P xy )| is even, then no other edge of P xy is included in M , that is, M ∩ E(P xy ) = {e x , e y }.
• If exactly one of e x and e y belongs to M' (say, e x ∈ M' and e y ∉ M'), then the edges of P xy included in M are chosen as prescribed by condition (iii)(d) of the definition of equivalency.

It is straightforward to verify that M is a matching of H satisfying conditions (i)-(iii) of the definition of equivalency. Then, using exactly the same arguments as in the proof of (ii), we observe that M is a matching cut of H. This concludes the proof of (iv).
Claim 13.1 allows us to construct the solution-lifting algorithm for nonempty matching cuts of H that outputs nonempty matching cuts from M 2 . For each nonempty matching cut M of H, the algorithm lists the matching cuts M' of G' such that M' is equivalent to M . Then, for each M', we extend M' to matching cuts of G by adding matchings of F = G − E(G'). For this, we consider the algorithm EnumPath(P xy , A, B, C, h) that, given a path P xy ∈ P, disjoint sets A, B, C ⊆ E(P xy ), and an integer h ∈ {0, 1}, enumerates with polynomial delay all nonempty matchings M of P xy such that A ⊆ M , B ∩ M = ∅, either C ⊆ M or C ∩ M = ∅, and |M | mod 2 = h. Such an algorithm exists by Observation 12. We also use the algorithm EnumMatchF(M') that, given a matching cut M' of G', lists all matching cuts of G of the form M' ∪ M'', where M'' is a matching of F . EnumMatchF(M') is constructed as follows. Let A be the set of edges of F incident to the end-vertices of F (recall that each connected component of F contains at most one vertex of V (G')). Then we enumerate the matchings M'' of F such that M'' ∩ A = ∅. This can be done with polynomial delay by Observation 12.
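A brute-force sketch of the specification of EnumPath may clarify the constraints; the actual algorithm guaranteed by Observation 12 achieves polynomial delay, which this illustration does not attempt:

```python
from itertools import combinations

def enum_path_matchings(path_edges, A, B, C, h):
    """Specification of EnumPath: yield every nonempty matching M of the path
    with A ⊆ M, B ∩ M = ∅, either C ⊆ M or C ∩ M = ∅, and |M| mod 2 = h.
    Brute force over edge subsets, for illustration only."""
    def is_matching(es):
        ends = [v for edge in es for v in edge]
        return len(ends) == len(set(ends))
    A, B, C = set(A), set(B), set(C)
    for k in range(1, len(path_edges) + 1):
        for sub in combinations(path_edges, k):
            M = set(sub)
            if (is_matching(sub) and A <= M and not (B & M)
                    and (C <= M or not (C & M)) and len(M) % 2 == h):
                yield M
```

For example, on a five-edge path with the first edge forced in, the last edge forbidden, and odd parity requested, the only output is the singleton containing the first edge.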
We use EnumPath and EnumMatchF as subroutines of the recursive branching algorithm EnumEquivalent (see Algorithm 3) that, given a matching M of H, takes as input a matching L of G' and a set R ⊆ P and outputs the matching cuts M' of G' such that (i) L ⊆ M', (ii) M' is equivalent to M , and (iii) the constructed matchings M' differ only in some edges of the paths P xy ∈ R. To initiate the computation, we construct the initial matching L' of G' and the initial set of paths R' ⊆ P as follows. We define R' ⊆ P to be the set of long paths P xy ∈ P such that E(P xy ) ∩ M ≠ ∅. Then L' ⊆ M is the set of edges of M that are not in the paths of R'. Recall that, as an intermediate step, we enumerate the nonempty matching cuts of G' that are equivalent to M . To do this, we have to enumerate all possible extensions of M satisfying condition (iii) of the definition of equivalency. Therefore, we call EnumEquivalent(L', R') to solve the enumeration problem. Let us remark that when we call EnumMatchF(M') in line (2), we immediately return each matching cut produced by the algorithm. Similarly, when we call EnumPath(P xy , A, B, C, h) in line (13), we immediately execute the loop in lines (14)-(16) for each generated matching Z.
It can be seen from the description that EnumEquivalent is a backtracking enumeration algorithm. It picks P xy ∈ R and produces nonempty matchings of P xy . Notice that the sets of edges A, B, and C, as well as the parity of the number of edges in a matching, are assigned in lines (7)-(12) exactly as prescribed by condition (iii) of the definition of equivalency. Then the algorithm branches on all possible selections of the matchings of P xy . The depth of the recursion is upper-bounded by n. This implies that EnumEquivalent(L', R') enumerates with polynomial delay all nonempty matching cuts of G in M 2 whose intersection with E(G') is a nonempty matching cut of G' that is equivalent to M .
To summarize, recall that if H is connected and has a vertex of degree one, we used the matching cut {e * } to list the matching cuts formed by the edges of F = G − E(G'). Clearly, {e * } is generated by EnumEquivalent(L', R') for the L' and R' constructed for M = {e * }. Therefore, we conclude that the solution-lifting algorithm satisfies condition (ii * ) of the definition of a polynomial-delay enumeration kernel. This finishes the proof of the theorem.
Corollary 3. The (minimal) matching cuts of an n-vertex graph G can be enumerated with 2 O(fn(G)) · n O(1) delay.
Enumeration Kernels for the Parameterization by the Clique Partition Number
A partition {Q 1 , . . . , Q k } of the vertex set of a graph G is said to be a clique partition of G if Q 1 , . . . , Q k are cliques. The clique partition number of G, denoted by θ(G), is the minimum k such that G has a clique partition with k cliques. Notice that the clique partition number of G is exactly the chromatic number of its complement. In particular, this means that deciding whether θ(G) ≤ k is NP-complete for any fixed k ≥ 3 [22]. Therefore, throughout this section, we assume that the input graphs are given together with their clique partitions. With this assumption, we show that the matching cut enumeration problems admit bijective kernels when parameterized by the clique partition number. Our result uses the following straightforward observation.

Proof. We show a bijective enumeration kernel for Enum MC. Let G be a graph and let Q be a clique partition of G of size at most k. We apply a series of reduction rules. First, we get rid of cliques of size two in Q to be able to use Observation 14. We exhaustively apply the following rule.
Reduction Rule 15.1. If Q contains a clique Q = {x, y} of size two, then replace Q by Q 1 = {x} and Q 2 = {y}.
The rule does not affect G and, therefore, it does not influence matching cuts of G. To simplify notation, we use Q for the obtained clique partition of G. Note that |Q| ≤ 2k.
By the next rule, we unify cliques that cannot be separated by a matching cut.
Reduction Rule 15.2. If Q contains distinct cliques Q 1 and Q 2 such that E(Q 1 , Q 2 ) is nonempty and is not a matching, then make each vertex of Q 1 adjacent to every vertex of Q 2 and replace Q 1 , Q 2 by the clique Q = Q 1 ∪ Q 2 in Q.
To see that the rule is safe, notice that if there are two distinct cliques Q 1 , Q 2 ∈ Q such that E(Q 1 , Q 2 ) is nonempty and is not a matching, then for every partition {A, B} of V (G) such that E(A, B) is a matching cut, either Q 1 , Q 2 ⊆ A or Q 1 , Q 2 ⊆ B, because by Observation 14, each clique of Q is either completely in A or completely in B. This means that if G' is obtained from G by the application of Rule 15.2, then M is a matching cut of G' if and only if M is a matching cut of G. Therefore, enumerating the matching cuts of G is equivalent to enumerating the matching cuts of G'.
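The role of Observation 14 in this argument can be illustrated by exhaustively listing the matching cuts of a small graph; a hypothetical brute-force checker (not part of the paper):

```python
from itertools import combinations

def matching_cuts(vertices, edges):
    """Brute force: for every bipartition {A, B}, keep E(A, B) when it is a
    nonempty matching (no vertex meets two cut edges)."""
    cuts = set()
    vs = list(vertices)
    for r in range(1, len(vs)):
        for A in map(set, combinations(vs, r)):
            cut = [(u, v) for u, v in edges if (u in A) != (v in A)]
            ends = [x for edge in cut for x in edge]
            if cut and len(ends) == len(set(ends)):
                cuts.add(frozenset(cut))
    return cuts

# A triangle {0, 1, 2} with a pendant vertex 3: no bipartition splitting the
# triangle yields a matching, so the pendant edge is the only matching cut.
example = matching_cuts(range(4), [(0, 1), (1, 2), (0, 2), (0, 3)])
```

The triangle always ends up entirely on one side, as Observation 14 predicts for cliques of size at least three.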
We apply Rule 15.2 exhaustively. Denote by Ĝ the obtained graph and by Q̂ the obtained clique partition. Notice that the rule is never applied to a pair of cliques of size one and, therefore, Q̂ does not contain cliques of size two.
In the next step, we use the following marking procedure to label some vertices of Ĝ.
(i) For each pair {Q 1 , Q 2 } of distinct cliques of Q̂ such that E(Q 1 , Q 2 ) ≠ ∅, select arbitrarily an edge uv ∈ E(Ĝ) such that u ∈ Q 1 and v ∈ Q 2 , and mark u and v.
(ii) For each triple Q, Q 1 , Q 2 of distinct cliques of Q̂ such that some vertex of Q is adjacent to a vertex of Q 1 and to a vertex of Q 2 , select arbitrarily u ∈ Q, x ∈ Q 1 , and y ∈ Q 2 such that ux, uy ∈ E(Ĝ), and then mark u, x, and y.
Notice that a vertex may be marked several times. Our final rule reduces the size of the graph by deleting some unmarked vertices.
Reduction Rule 15.3. For every clique Q ∈ Q̂ of size at least three, consider the set X of unmarked vertices of Q and delete arbitrary min{|X|, |Q| − 3} vertices of X.
Denote by G̃ the obtained graph and by Q̃ the corresponding clique partition of G̃, where every clique of Q̃ is obtained by the deletion of unmarked vertices from a clique of Q̂. Our kernelization algorithm returns G̃ together with Q̃ (recall that, by our convention, each instance should be supplied with a clique partition of the input graph).
To upper bound the size of the obtained kernel, we show the following claim.
This completes the description of the kernelization algorithm. Now we show a bijection between the matching cuts of G and G̃, and construct our solution-lifting algorithm. Note that we already established that M is a matching cut of Ĝ if and only if M is a matching cut of G. Therefore, it is sufficient to construct a bijective mapping of the matching cuts of G̃ to the matching cuts of Ĝ. Let Q̂ = {Q̂ 1 , . . . , Q̂ r } and Q̃ = {Q̃ 1 , . . . , Q̃ r }, where Q̃ i ⊆ Q̂ i for i ∈ {1, . . . , r}. Notice that Q̂ and Q̃ have no cliques of size two. Hence, we can use Observation 14. We show the following claim.

Proof of Claim 15.2. If M̂ = E(∪ i∈I Q̂ i , ∪ j∈J Q̂ j ) is a matching cut of Ĝ, then M̃ = E(∪ i∈I Q̃ i , ∪ j∈J Q̃ j ) is a matching cut of G̃, because Q̃ i ⊆ Q̂ i for i ∈ {1, . . . , r}.
For the opposite direction, assume that M̃ = E(∪ i∈I Q̃ i , ∪ j∈J Q̃ j ) is a matching cut of G̃. For the sake of contradiction, suppose that M̂ = E(∪ i∈I Q̂ i , ∪ j∈J Q̂ j ) is not a matching cut of Ĝ. This means that there is a vertex that is incident to at least two edges of M̂ . By symmetry, we can assume without loss of generality that there are h ∈ I and u ∈ Q̂ h such that u is adjacent to distinct vertices x and y, where x ∈ Q̂ i and y ∈ Q̂ j for some i, j ∈ J.
Suppose that i = j, that is, x and y are in the same clique Q̂ i . Then, however, E(Q̂ h , Q̂ i ) is not a matching and we would be able to apply Rule 15.2. Since Rule 15.2 was applied exhaustively to obtain Ĝ and Q̂, this cannot happen. We conclude that i ≠ j.
By Step (ii) of the marking procedure, there are u' ∈ Q̂ h , x' ∈ Q̂ i , and y' ∈ Q̂ j such that u'x', u'y' ∈ E(Ĝ) and the vertices u', x', y' are marked. This means that u' ∈ Q̃ h , x' ∈ Q̃ i , and y' ∈ Q̃ j . Then u'x', u'y' ∈ M̃ and M̃ is not a matching; a contradiction. The obtained contradiction concludes the proof.
Finally, we show that M̂ ≠ ∅ if and only if M̃ ≠ ∅. Clearly, if M̃ ≠ ∅, then M̂ ≠ ∅. Suppose that M̂ ≠ ∅. Then there are i ∈ I and j ∈ J such that uv ∈ M̂ for some u ∈ Q̂ i and v ∈ Q̂ j . By Step (i) of the marking procedure, there are u' ∈ Q̃ i and v' ∈ Q̃ j such that u'v' ∈ E(Ĝ) and u', v' are marked. Then u'v' ∈ E(G̃) and u'v' ∈ M̃ by the definition of M̃ . Hence, M̃ ≠ ∅.
Using Claim 15.2, we are able to describe our solution-lifting algorithm. Let M̃ be a matching cut of G̃, and let {A, B} be a partition of V (G̃) such that M̃ = E(A, B). By Observation 14, there is a partition {I, J} of {1, . . . , r} such that A = ∪ i∈I Q̃ i and B = ∪ j∈J Q̃ j . The solution-lifting algorithm outputs M̂ = E(∪ i∈I Q̂ i , ∪ j∈J Q̂ j ).
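The lifting step itself is a simple computation; a sketch under the assumption that the clique partition of the larger graph and the index partition {I, J} are given explicitly (names hypothetical):

```python
def lift_cut(cliques_hat, I, J, edges_hat):
    """Map a matching cut of the kernel, given by the index partition {I, J}
    of its cliques, to the cut E(∪_{i∈I} Q̂_i, ∪_{j∈J} Q̂_j) of the larger graph."""
    A = set().union(*(cliques_hat[i] for i in I))
    B = set().union(*(cliques_hat[j] for j in J))
    return {(u, v) for u, v in edges_hat
            if (u in A and v in B) or (u in B and v in A)}
```

By Claim 15.2, the returned edge set is again a matching cut, and distinct index partitions yield distinct cuts, which is what makes the kernel bijective.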
To see correctness, note that M̂ = E(∪ i∈I Q̂ i , ∪ j∈J Q̂ j ) is a matching cut of Ĝ by Claim 15.2. Moreover, for distinct matching cuts M̃ 1 and M̃ 2 of G̃, the constructed matching cuts M̂ 1 and M̂ 2 , respectively, of Ĝ are distinct, that is, the matching cuts of G̃ are mapped to the matching cuts of Ĝ injectively. Finally, to show that the mapping is bijective, consider a matching cut M̂ of Ĝ. Let {A, B} be a partition of V (Ĝ) with M̂ = E(A, B). By Observation 14, there is a partition {I, J} of {1, . . . , r} such that A = ∪ i∈I Q̂ i and B = ∪ j∈J Q̂ j . Then M̃ = E(∪ i∈I Q̃ i , ∪ j∈J Q̃ j ) is a matching cut of G̃ by Claim 15.2, and it remains to observe that M̂ is constructed by the solution-lifting algorithm from M̃ .
It is straightforward to see that both the kernelization and the solution-lifting algorithms run in polynomial time. This concludes the construction of our bijective enumeration kernel for Enum MC.
For Enum Minimal MC and Enum Maximal MC, the kernels are exactly the same. To see this, note that our bijective mapping of the matching cuts of G̃ to the matching cuts of Ĝ respects the inclusion relation. Namely, if M̃ 1 and M̃ 2 are matching cuts of G̃ with M̃ 1 ⊆ M̃ 2 , then M̃ 1 and M̃ 2 are mapped to matching cuts M̂ 1 and M̂ 2 , respectively, of Ĝ such that M̂ 1 ⊆ M̂ 2 . This implies that the solution-lifting algorithm outputs a minimal (maximal, respectively) matching cut of Ĝ for every minimal (maximal, respectively) matching cut of G̃. Moreover, every minimal (maximal, respectively) matching cut of Ĝ can be obtained from a minimal (maximal, respectively) matching cut of G̃.
Conclusion
We initiated the systematic study of enumeration kernelization for several variants of the matching cut problem. We obtained fully-polynomial (polynomial-delay) enumeration kernels for the parameterizations by the vertex cover number, twin-cover number, neighborhood diversity, modular width, and feedback edge number. Since the solution-lifting algorithms are simple branching algorithms, these kernels give a condensed view of the solution sets, which may be interesting in applications where one may want to inspect all solutions manually. Restricting to polynomial-time and polynomial-delay solution-lifting algorithms seems helpful in the sense that they will usually be easier to understand.
There are many topics for further research in enumeration kernelization. For Matching Cut, it would be interesting to investigate other structural parameters, like the feedback vertex number (see [10] for the definition). More generally, the area of enumeration kernelization seems still somewhat unexplored. It would be interesting to see applications of the various kernel types to other enumeration problems. For this, it seems to be important to develop general tools for enumeration kernelizations. For example, is it possible to establish a framework for enumeration kernelization lower bounds similar to the techniques used for classical kernels [4,5] (see also [10,16])?
Concerning the counting and enumeration of matching cuts, we also proved the upper bound F (n + 1) − 1 for the maximum number of matching cuts of an n-vertex graph and showed that the bound is tight. What can be said about the maximum number of minimal and maximal matching cuts? It is not clear whether our lower bounds given in Propositions 3 and 4 are tight. Finally, it seems promising to study enumeration kernels for d-Cut [23], a generalization of Matching Cut that has recently received some attention.
"year": 2021,
"sha1": "985884b2e8963e99ca789c26465fc0daeedaaf13",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "985884b2e8963e99ca789c26465fc0daeedaaf13",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Mifepristone as a Potential Therapy to Reduce Angiogenesis and P-Glycoprotein Associated With Glioblastoma Resistance to Temozolomide
Glioblastoma, the most common primary central nervous system tumor, is characterized by extensive vascular neoformation and an area of necrosis generated by rapid proliferation. The standard treatment for this type of tumor is surgery followed by chemotherapy based on temozolomide and radiotherapy, resulting in poor patient survival. Glioblastoma is known for strong resistance to treatment, frequent recurrence and rapid progression. The aim of this study was to evaluate whether mifepristone, an antihormonal agent, can enhance the effect of temozolomide on C6 glioma cells orthotopically implanted in Wistar rats. The levels of the vascular endothelial growth factor (VEGF) and P-glycoprotein (P-gp) were examined, the former a promoter of angiogenesis that facilitates proliferation, and the latter an efflux pump transporter linked to drug resistance. After a 3-week treatment, the mifepristone/temozolomide regimen had decreased the levels of VEGF and P-gp and significantly reduced tumor proliferation (detected by PET/CT imaging based on 18F-fluorothymidine uptake). Additionally, mifepristone proved to increase the intracerebral concentration of temozolomide. The lower level of O6-methylguanine-DNA-methyltransferase (MGMT) (related to DNA repair in tumors) previously reported for this combined treatment was herein confirmed. After the mifepristone/temozolomide treatment ended, however, the values of VEGF, P-gp, and MGMT increased and reached control levels by 14 weeks post-treatment. There was also tumor recurrence, as occurred when administering temozolomide alone. On the other hand, temozolomide alone led to 100% mortality within 26 days after beginning the drug treatment, while mifepristone/temozolomide enabled 70% of the animals to survive 60–70 days and 30% to survive over 100 days, suggesting that mifepristone could possibly act as a chemo-sensitizing agent for temozolomide.
INTRODUCTION
Glioblastoma is the most frequent primary neoplasm of the central nervous system and the most aggressive brain tumor, with a life expectancy of 14-15 months post-diagnosis (1)(2)(3). It is characterized by uncontrolled cell proliferation, highly diffuse infiltration, resistance to apoptosis, robust angiogenesis, and DNA repair mechanisms contributing to drug resistance. The standard treatment for glioblastoma is surgery followed by chemotherapy based on temozolomide and radiotherapy, which leads to poor patient survival.
The growth of glioblastoma is associated with its capacity to maintain a balanced expression of proteins that control the cell cycle and allow for proliferation, motility and vascular neoformation. Furthermore, it is able to avoid recognition by the immune system. Reports in the Cancer Genome Atlas (TCGA) identify three main pathways participating in the pathogenesis of glioma: (RTK)/RAS/(PI3K), p53, and retinoblastoma (4).
A major factor in the strong resistance of tumors to temozolomide treatment is the overexpression of enzyme O6-methylguanine-DNA-methyltransferase (MGMT), which participates in the repair of temozolomide-induced DNA damage. Our group previously demonstrated that mifepristone enhances the temozolomide-induced decrease in orthotopic glioblastoma tumors by increasing apoptosis and reducing levels of MGMT (thus impeding repair of DNA damage) (5).
Among other pathways of glioma resistance to treatment described in the literature are those that contribute to angiogenesis, the formation of new blood vessels from a pre-existing vascular network. Several studies have correlated increased tumor vascularization with a lower rate of patient survival. Indeed, in the absence of angiogenesis, tumors cannot grow beyond a size of 1-2 mm 3 (6). One of the main promoters of angiogenesis is hypoxia, which stimulates the synthesis of the most important mediator in angiogenesis, the vascular endothelial growth factor (VEGF). The receptors of VEGF are reported to be overexpressed in glioblastoma (7,8). Among the strategies for inhibiting the expression of VEGF is the use of bevacizumab, a humanized monoclonal antibody. Two phase III studies on this drug have shown that the addition of bevacizumab to standard treatment (radiotherapy-temozolomide) for patients with newly diagnosed glioblastoma was associated with a 4-month increase in progression-free survival without a significant effect on overall survival. Moreover, there was an increase in adverse events associated with bevacizumab (9,10), emphasizing the need to seek new pharmacological strategies.
Another pathway involved in glioblastoma is related to the blood-brain barrier (BBB). Many promising chemotherapeutic agents have had great difficulty in overcoming the mechanisms of the BBB. On one hand, it is a physical barrier comprised of tight junctions between endothelial cells and a lack of fenestrae. In addition, it is an active efflux system that transports a wide range of antineoplastic drugs (e.g., temozolomide) out of the brain. The best known of these transporters is P-glycoprotein (P-gp), a membrane protein belonging to the superfamily of ATP-binding cassette (ABC) transporters. The blocking of these transport proteins might be useful in the treatment of glioblastoma (11)(12)(13)(14).
To date, the search for new treatments against glioblastoma has not improved the survival of patients. An attractive strategy is the repositioning of approved drugs for use in combination with standard therapy. One attractive candidate for repositioning is mifepristone, a synthetic steroid that serves as an abortifacient drug based on anti-progestational and anti-glucocorticoid action. Mifepristone reportedly has antiproliferative effects in breast (15,16), cervix (17), endometrium (18), ovary (19), and prostate cancer (20), can cross the BBB, and provides palliative effects on brain tumors such as meningiomas (21) and glioblastoma (22). Additionally, it is considered safe (with few adverse effects) and has a low cost. Besides reducing levels of MGMT (5), mifepristone is reported to diminish the activity of P-gp in human leukemia cancer cells (23) and a gastric cancer cell line (24). However, whether or not mifepristone is an inhibitor of P-gp on glioma cells or in the efflux transport system mediated by P-gp in the BBB has not yet been established. Likewise, there are no reports, to our knowledge, on its effect on temozolomide treatment.
Mifepristone may serve as a chemo-sensitizing drug, considering the descriptions in the literature of its inhibition of multiple targets in cancer cells. The aim of the present study was to evaluate the capacity of a mifepristone/temozolomide treatment in an orthotopic rat model of glioblastoma to modulate angiogenesis, reduce P-gp levels in the glioma tumors and increase the intracerebral concentration of temozolomide. Since tumors initially sensitive to chemotherapy often develop resistance, tumor recurrence was monitored after the combined treatment ended. Finally, the MGMT level was quantified as a parameter of DNA repair in tumor cells.
Drugs and Reagents
Mifepristone and temozolomide were provided by Sigma Chemical Co. (St. Louis, MO, United States). Dulbecco's modified Eagle's medium (DMEM), FBS (fetal bovine serum), and EDTA (ethylenediaminetetraacetic acid) were purchased from Gibco-BRL (Grand Island, NY, United States). LC-MS/MS grade methanol was acquired from J.T. Baker. Acetic acid was of analytical grade. High-quality water for the solutions was processed with a Milli-Q Reagent Water System (Continental Water Systems, El Paso, TX, United States). A stock solution of temozolomide was prepared in DMSO at a final concentration of 4% and mifepristone was reconstituted in polyethylene glycol/saline solution. All standard solutions were stored at −20 °C until use.
Animals
Male Wistar rats (230-250 g) were obtained from the Faculty of Medicine of the UNAM, Mexico City, Mexico. The animals were kept in pathogen-free conditions on a 12-12 h light/dark cycle, with adequate temperature and humidity. All procedures for the care and handling of the animals were reviewed and approved by the Ethics Committee of the "Instituto Nacional
Tumor Cell Implantation
The rat glioma C6 cell line was supplied by the American Type Culture Collection (ATCC, Rockville, United States). These cells were maintained under sterile conditions in DMEM medium (Gibco, Grand Island, NY, United States) supplemented with 5% fetal bovine serum and incubated at 37 °C in a 5% CO2 atmosphere.
The effect of Mif/Tz on tumor growth was evaluated on C6 glioma cells orthotopically implanted in Wistar rats. Each animal was anesthetized with a combination of tiletamine hydrochloride (10 mg/kg) and acepromazine maleate (0.4 mg/kg) administered subcutaneously (sc), then placed in a stereotactic device for surgery. The tumor cell implantation was performed according to Llaguno et al. (5). Briefly, after fastening the head in the frame, a midline incision was made and bregma was identified. The skull was then drilled at the coordinates of 2.0 mm right from bregma and 6 mm deep (hippocampus). C6 cells were harvested, washed and diluted in DMEM to a concentration of 7.5 × 10 5 in a volume of 3 µL. Employing an infusion pump, these cells were slowly implanted at a depth of 6 mm from the dura mater. The sham group was surgically opened and instead of implanting cancer cells, culture medium was injected.
Treatments
At 2 weeks post-surgery, the rats were randomly divided into six groups: (A) negative control (without surgery and without treatment), (B) sham surgery (in the absence of glioma cells and drug treatments), and four groups with surgical implantation of cancer cells: (C) without drug treatment (vehicle control), (D) temozolomide alone (Tz), (E) mifepristone alone (Mif), and (F) mifepristone/temozolomide (Mif/Tz). Tz was administered at a dose of 5 mg/kg ip and Mif at a dose of 10 mg/kg sc. The drugs were given for five consecutive days (Monday-Friday) during 3 weeks.
Determination of Tumor Growth
Brain tumor proliferation was measured by capturing images with a microPET/CT scanner (Albira ARS, Oncovision, Spain) at 2, 5, 7, 9, and 14 weeks post-surgery. For this purpose, 300 µCi of 18F-fluorothymidine (18F-FLT) were administered into the caudal vein. Another method of tracking tumor growth was by monitoring animal weight. Rats were weighed three times/week throughout the experiment, recording the global survival of each group.
Histological Analysis
The rats were euthanized and perfused with saline solution followed by 4% paraformaldehyde. Brains were removed and immersed in 4% paraformaldehyde for 2 weeks. The brain tissue was embedded in paraffin and sliced into sections (2 mm thick) on the coronal plane for subsequent staining with hematoxylin and eosin (H&E), and microvessel density was evaluated immunohistochemically with the CD31 marker (#77699, Cell Signaling Technology).
Molecular Analysis
At the end of the study, the rats were sacrificed and the tumor was removed. The brain tissue was homogenized with a lysis buffer containing protease inhibitors (Cat. 78440; Thermo Scientific). The samples were centrifuged at 10,000 × g at 4 °C and the supernatant was recovered. The proteins were quantified with the BCA (bicinchoninic acid) assay and separated by electrophoresis on a 4-20% gradient gel (Mini-Protean TGX 456-1094, Bio-Rad Laboratories, Inc, United States). Colored markers (Bio-Rad, CA, United States) were included to establish size. For each sample, 40 µg of protein were used. Following the transfer of the proteins onto PVDF membranes (Amersham, United Kingdom), the latter were blocked for 2 h at room temperature with 5% non-fat dry milk. The antibodies employed were anti-MGMT (sc-166528, 1:1000, Santa Cruz Biotechnology, TX, United States), anti-P-gp (12683, 1:500, Cell Signaling Technology), and anti-β-actin (sc-69879, 1:1000; Santa Cruz Biotechnology, TX, United States). After washing, the membranes were incubated with IRDye 800CW goat anti-mouse or IRDye 680RD goat anti-rabbit secondary antibodies (1:15000; LI-COR, Inc.) for 1 h. The membranes were scanned on an Odyssey Imaging System and their fluorescence intensity was measured with Image Studio software. In each figure, representative blot images were selected from the same gel. For the evaluation of angiogenesis, the relative concentration of VEGF was assessed with an ELISA kit according to the manufacturer's instructions (human VEGF, ENZ-KIT156-0001, Enzo Life Sciences, Inc).
Determination of Temozolomide in Rat Brain Tissue
Male Wistar rats (200-230 g) were divided into groups for two drug treatments (n = 6 each): (1) Tz (30 mg/kg, ip) and (2) Mif/Tz (60 mg/kg, sc, and 30 mg/kg, ip, respectively). For the second group, mifepristone was administered 2 h before temozolomide. In both groups, rats were euthanized 45 min after Tz was given. The tissues were weighed and kept at −70 °C until use.
The concentration of temozolomide was determined by chromatography on an LC-MS system (Agilent Technologies, Infinity 1260) with an autosampler temperature of 4 °C. The separation was carried out at 25 °C on an Agilent Zorbax SB-C18 column (1.8 µm, 2.1 mm × 50 mm) using linear elution with (A) water (containing 0.5% acetic acid and 10 mM ammonium acetate) and (B) methanol as the mobile phase (10/90). The flow rate was set at 0.3 ml/min with an injection volume of 5 µl.
Mass spectrometry was performed on an Agilent QQQ detector (Agilent Technologies, Infinity 1260) in the positive ESI mode with nitrogen as the desolvation gas. The capillary voltage was 3.0 kV and the desolvation temperature 350 °C. Quantification was achieved by multiple reaction monitoring of the transitions m/z 195.10 → 137.95 for temozolomide and m/z 181.10 → 124.0 for theophylline as the internal standard.
Individual stock solutions of temozolomide (1 mg/ml) and theophylline (1 mg/ml) were prepared in separate volumetric flasks and dissolved in acidified methanol (0.5% acetic acid and methanol, v/v, 20/80) for temozolomide and in pure methanol for theophylline. Intermediate and final working solutions of temozolomide were prepared in acidified methanol, and theophylline solutions were prepared in water. Calibration standards were prepared at the following concentrations: 50, 100, 500, 1000, 2000, and 5000 ng/ml.
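Back-calculation of tissue concentrations from such a calibration series is typically done with a linear fit of the analyte/internal-standard peak-area ratio against concentration; a sketch with made-up illustrative response values (not measured data from this study):

```python
# Hypothetical calibration: peak-area ratios for the six standards.
concs = [50, 100, 500, 1000, 2000, 5000]          # ng/ml
ratios = [0.05, 0.10, 0.50, 1.00, 2.00, 5.00]     # analyte/IS peak-area ratio

# Unweighted least-squares line: ratio = slope * conc + intercept.
n = len(concs)
mean_x = sum(concs) / n
mean_y = sum(ratios) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(concs, ratios)) / \
        sum((x - mean_x) ** 2 for x in concs)
intercept = mean_y - slope * mean_x

def back_calc(ratio):
    """Concentration (ng/ml) corresponding to an observed peak-area ratio."""
    return (ratio - intercept) / slope
```

With these idealized responses the fit is exactly linear, so an observed ratio of 2.00 back-calculates to 2000 ng/ml; real runs would also report accuracy and precision per standard.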
The internal standard solution (1000 ng/ml in water) was added to small slices of brain tissue (400 mg), along with 50 µl of 1 M HCl and, for the calibration standards, the temozolomide working solutions. The slices were individually homogenized before adding ethyl acetate and mixing for 5 min. The samples were centrifuged at 14,000 rpm for 15 min at 4 °C. The supernatant was transferred to an Eppendorf tube, ethyl acetate was added, and centrifugation was repeated at 14,000 rpm. The supernatant was transferred to an Eppendorf tube and evaporated to dryness under a stream of nitrogen at 24 °C. Afterward, 200 µl of acidified methanol was added to the dry residue and the solution was injected into the chromatographic system.
Statistical Analysis
Data are expressed as the mean ± SD. Statistical significance was determined with one-way analysis of variance (ANOVA) on SPSS Base 20.0 software (SPSS Inc, Chicago, IL, United States). When necessary, the comparison of means was Bonferroni-adjusted. In all cases, significance was considered at p < 0.05.
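For reference, the one-way ANOVA F statistic and the Bonferroni-adjusted significance threshold can be computed directly; a sketch with illustrative numbers (the study itself used SPSS):

```python
def one_way_anova(groups):
    """F statistic for k independent groups (assumes nonzero within-group variance)."""
    k = len(groups)
    N = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / N
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, N - k
    return (ss_between / df_b) / (ss_within / df_w)

def bonferroni_threshold(alpha, m):
    """Per-comparison significance level after Bonferroni adjustment for m tests."""
    return alpha / m
```

For example, with α = 0.05 and five pairwise comparisons, each comparison would be tested at 0.01.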
Animal Body Weight
During the first 2 weeks post-implantation of C6 cells, all animals continued to gain weight. Subsequently, the negative control and sham groups kept gaining weight, while the untreated, Tz, and Mif groups rapidly lost weight, similar to data previously reported by our group (5). The rats in the Mif/Tz group maintained their weight throughout the experiment (Figure 1).
Histological and Immunohistochemical Analysis
In the histological examination with H&E staining, we observed typical characteristics of glioblastoma in the untreated group, such as hypercellularity, infiltration of tumor cells, and mitosis. The tissue of the animals treated with Tz or Mif showed less hypercellularity and mitotic activity; however, the effect was more evident at 5 weeks post-surgery (at the end of the 3-week drug treatment period), with a considerable decrease in the infiltration of tumor and inflammatory cells, as well as the absence of pseudopalisading necrosis (Figure 2). These results are consistent with those previously reported.
Expression of VEGF
At the end of the 3-week drug treatment, the rats were sacrificed to evaluate the CD31 marker and VEGF expression. Vascular density was determined with the CD31 marker. We observed that Mif and Tz decreased the vascular density compared to the untreated group; however, this decrease was greater in the Mif/Tz group. These results were corroborated by the quantification of VEGF (Figure 3A). VEGF expression is closely related to angiogenesis. Compared to the sham group, the untreated animals with implanted cancer cells displayed a significantly higher level of VEGF. Compared to the latter group, the level of VEGF declined (but not significantly) in animals receiving either Tz or Mif alone, and was significantly lower in the Mif/Tz group (Figure 3B).
Expression of P-gp
Western blot data and band intensity analysis revealed that the protein expression of P-gp (Figure 4) was downregulated at the end of the Mif/Tz treatment.

FIGURE 1 | Tumor growth in the orthotopic rat model of glioma was evaluated by comparing animal weight between groups: negative control and sham surgery, as well as four groups with implanted glioma cancer cells: one without drug treatment and the others given temozolomide only (Tz), mifepristone only (Mif), or mifepristone/temozolomide (Mif/Tz). Each point of the graph represents the mean ± SEM of six animals. *Significant difference (p < 0.05) between Mif/Tz and sham.
Accumulation of Temozolomide in Brain Tissue
The accumulation of temozolomide in brain tissue was determined by LC-MS analysis after treatment with Mif/Tz or Tz (Figure 5). Typical chromatograms obtained after the extraction of temozolomide from brain tissue of the Tz and Mif/Tz groups are shown in Figure 5A. A significant two-fold greater intracerebral level of temozolomide was found in the brain tissue of the Mif/Tz versus the Tz group (14,820 ± 3,852 vs. 7,136 ± 981 ng/g brain tissue; Figure 5B; p < 0.05).
Therapeutic Effect of Mifepristone/Temozolomide on Tumor Size
PET/CT scans were performed at 5, 7, 9, and 14 weeks post-implantation of tumor cells (the Mif/Tz treatment was given during weeks 2-5). In the images, the presence of red reflects 18F-FLT uptake and thus the relative size of the tumor. The 18F-FLT uptake was high at 5 weeks (the end of the 3-week drug treatment). By 7 weeks post-surgery (2 weeks after the end of drug treatment), the 18F-FLT uptake had dropped drastically. At 9 weeks, however, 18F-FLT uptake appeared again and remained at a similar level at 14 weeks (Figure 6A). This suggests renewed tumor cell growth at 9 weeks, indicating a possible tumor recurrence that remained stable until 14 weeks post-surgery. The 18F-FLT uptake was also measured as total lesion proliferation (TLP). At 7 weeks post-surgery, a significant decrease of TLP was observed. At 9 and 14 weeks post-surgery (4 and 9 weeks after the end of drug treatment), the TLP increased again (Figure 6B). The average survival time was similar in the untreated, Tz, and Mif groups, being 25-35 days. In contrast, 70% of the Mif/Tz animals survived 60-70 days and approximately 30% survived over 100 days (Figure 6C).
Histological Examination During Tumor Recurrence
Among the pathological characteristics of glioblastoma are increased necrosis, mitosis, and pleomorphism, as well as vascular proliferation. As shown in Figure 7, these characteristics decreased with the Mif/Tz treatment (5 weeks). In the 7-week group (2 weeks after the end of treatment), we observed some hyperchromatic cells and a decrease in hypercellularity; however, at 9 and 14 weeks, pseudopalisading necrosis, mitotic activity, and vascular proliferation increased. A close correlation was observed with the molecular images of the same groups.
Effect of Mifepristone/Temozolomide on VEGF During Tumor Recurrence
The brain tissue was processed for immunohistochemical assays with the CD31 marker. At 5 weeks, the Mif/Tz group showed a decrease in vessel density compared to the untreated group; the number of positive cells increased again at 9 and 14 weeks post-surgery (4 and 9 weeks after the end of drug treatment), although, interestingly, the density of positive cells remained lower than in the untreated group (Figure 8A). The VEGF level at the end of the 3-week Mif/Tz treatment (at 5 weeks post-surgery) was significantly lower than that found in the untreated group and the same as that of the sham animals. However, this reduced level in the Mif/Tz group was reversed after drug treatment ended: during tumor recurrence at 9 and 14 weeks post-surgery (4 and 9 weeks after the end of drug treatment), this parameter increased in the Mif/Tz group, becoming similar to the value of the untreated group (Figure 8B).
Effect of Mifepristone/Temozolomide on P-gp Levels During Tumor Recurrence
Evaluation of the expression of P-gp by Western blot at the end of the 3-week drug treatment period (at 5 weeks post-surgery) showed a significantly lower level for Mif/Tz-treated versus untreated rats (Figure 9). This reduced level in the Mif/Tz group was reversed after drug treatment ended, gradually rising until reaching the level of the untreated group at 14 weeks.
Effect of Mifepristone/Temozolomide on the Level of MGMT During Tumor Recurrence
At 5 weeks post-surgery, the expression of MGMT was lower in healthy sham rats compared to the untreated animals with implanted cancer cells. This point in time corresponds to the end of the drug treatments, at which time the combination regimen of mifepristone/temozolomide produced a significant decrease in the level of MGMT, in agreement with our previous report (5). This effect was reversed at weeks 9 and 14, corresponding to the time of tumor recurrence (Figure 10).
DISCUSSION
Although there have been advances in the treatment of some cancers, the molecules recently developed for glioblastoma therapy have shown little success in improving patient prognosis and survival. Glioblastoma is currently treated with surgery followed by chemotherapy with temozolomide and radiotherapy. It has been reported that the antitumor activity of temozolomide is schedule-dependent, with multiple administrations being more effective than a single treatment. In clinical use, the recommended dose of temozolomide is 75 mg/m², daily, up to a maximum of 49 doses, with a maintenance dose of 200 mg/m² given for five consecutive days of every 28-day cycle (5/28 days) (9, 10).
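As a rough orientation, the cumulative dose implied by this clinical schedule can be tallied directly. The sketch below is an illustration only: the 1.7 m² body-surface area and the six maintenance cycles are assumed example values, not figures taken from the text.

```python
# Tally of the cumulative temozolomide dose implied by the schedule
# described above: 75 mg/m^2 daily (up to 49 doses), then 200 mg/m^2
# on 5 consecutive days of each 28-day cycle.
# The body-surface area (BSA) and cycle count are assumed examples.

def cumulative_dose_mg(bsa_m2, initial_doses=49, maintenance_cycles=6):
    """Total temozolomide (mg) over the initial phase plus a given
    number of 5/28-day maintenance cycles."""
    initial = 75 * bsa_m2 * initial_doses                 # daily phase
    maintenance = 200 * bsa_m2 * 5 * maintenance_cycles   # 5 days/cycle
    return initial + maintenance

total = cumulative_dose_mg(bsa_m2=1.7)  # 1.7 m^2: assumed adult BSA
print(round(total))  # → 16448 mg over 49 daily doses + 6 cycles
```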
The scheme of drug treatments used in the present study is similar to that used in patients. Temozolomide was administered for only three weeks because this is the average survival time of the rats given the individual treatments.
The dose of temozolomide was calculated based on several reports in the literature and on our previous work. It is comparable to the metronomic dose of 2 mg/kg every day for 16 days reported by Kim et al. (25), who observed a significant effect on tumor volume and microvessel density, with no signs of toxicity from drug administration, such as body weight loss. Another study showed similar results using temozolomide at a dose of 5 mg/kg/day (26), with a significant decrease in tumor growth. These results correlate with our previous findings: using temozolomide at 5 mg/kg/day for 21 days, we found a significant decrease in tumor growth, measured as the proliferative activity in tumors (5).
In the case of mifepristone, we used a total dose of 150 mg/kg (10 mg/kg × 5 days/week × 3 weeks) in rats, in accordance with our previous report (5). Moreover, several reports support that a synergistic effect is more likely to be found when drugs are combined at low doses. This is important in cancer therapy because many studies seek a synergistic rather than merely additive effect, given the side effects of chemotherapy.
The antihormonal agent mifepristone has been investigated in regard to different types of cancer, both hormone- and non-hormone-dependent (27). Mifepristone acts as an antagonist of progestins, glucocorticoids, and androgens through the respective receptors. It reportedly inhibits cell growth in non-hormone-dependent cancer cells, such as MDA-MB-231 (breast cancer) (28) and LNCaP (prostate cancer) (29), which are negative for progesterone, estrogen, and androgen receptors.
Previous studies in our laboratory demonstrated the chemo-sensitizing effect of mifepristone in combination with temozolomide in a xenograft and an orthotopic glioma model (5,30). The current study evaluated two possible molecular mechanisms in this chemo-sensitizing effect: the inhibition of VEGF and CD31 marker to reduce angiogenesis and of P-gp to facilitate the capacity of temozolomide to cross the BBB (Figure 11).
A significant difference in weight was observed between the animals administered mifepristone/temozolomide and those given temozolomide only, mifepristone only, or no treatment. This result could be due to the decrease in tumor growth, as observed in previous reports (5). Typical features of glioblastoma were seen in the H&E images: in the untreated group there was an increase in hypercellularity and vascular proliferation, which was diminished by the Mif/Tz treatment. A mechanism that has been little explored in cancer-induced weight loss is the modification of the metabolic changes involved in cachexia. Cachexia is a complex metabolic disorder that affects about 80% of patients with advanced cancers (31). Griffith et al. (32) reported body weight loss in glioma patients, but studies on cachexia symptoms induced by glioblastoma have rarely been reported. Recently, Cui et al. (33) demonstrated cachexia manifestations in an orthotopic glioma murine model; however, a metabolic pathway analysis during glioma cachexia is still needed. It has been reported that mifepristone impacts cancer cachexia by blocking the interaction of cortisol and the induction of zinc-alpha2-glycoprotein (ZAG) expression in adipose tissue (34). On the other hand, cachexia is characterized by systemic inflammation, and mifepristone has been reported to reduce the expression of nuclear transcription factors, including NF-kB (35), a central mediator of proinflammatory gene induction. Given these antecedents, it would be interesting to investigate, in the future, the possible modulation of cachexia by the mifepristone/temozolomide treatment.
The tumor microenvironment is known to play a key role in resistance to treatment. In particular, a hypoxic microenvironment is closely related to chemo- and radioresistance by modulating different mechanisms, including angiogenesis (36). Glioma tumors are known to have elevated levels of VEGF and its corresponding receptor, whose activation is related to angiogenesis. Without angiogenesis, tumor growth would be severely limited.
Due to the importance of VEGF in the physiopathology of glioblastoma, one of the strategies to improve patient survival is to diminish its expression. Unfortunately, this strategy has not yet been fruitful. In the current effort, we observed an additive effect of temozolomide and mifepristone in the inhibition of VEGF levels: the Mif/Tz rats exhibited a lower expression of VEGF compared to the other animals with implanted cancer cells, including the untreated, Tz, and Mif groups. These results correlated with the immunohistochemical studies with the CD31 marker; vessel density was decreased in the Tz and Mif groups, but an even lower vessel density was observed in the Mif/Tz group. Hence, the combined treatment may contribute to an effective strategy for overcoming the resistance of glioblastoma tumors. It is known that the endothelial cells in the vascular bed of a tumor are more susceptible to chemotherapeutic agents than resting endothelium, because they have significantly higher proliferation rates than the normal endothelium in the rest of the body. In addition, metronomic chemotherapy, the continuous administration of a chemotherapeutic agent at a low dose, exposes endothelial cells in tumor beds to the drug, inhibiting angiogenesis and inducing apoptosis in endothelial cells before tumor cells (25). Therefore, it is possible that an additive apoptotic effect of Mif/Tz on vascular endothelial cells contributes to the antitumor efficacy of the combined drugs.
On the other hand, it has recently been described that temozolomide is able to decrease VEGF expression at therapeutic or higher doses in U87 glioblastoma cells (37). The authors demonstrated that temozolomide added at doses below its therapeutic dose is not able to induce apoptosis in cells, but is capable of doing so when introduced at the therapeutic dose or above. In our work, the consecutive doses of Mif/Tz administered to the animals could lead to a cumulative dose reaching therapeutic levels, which may contribute to an additive effect in the reduction of VEGF levels.

FIGURE 11 | Schematic portrayal of the possible mechanisms of the combination mifepristone/temozolomide treatment that improved the effect found with temozolomide alone. The mechanisms studied were: (1) the inhibition of angiogenesis, measured as reduced levels of VEGF; (2) the attenuation of DNA repair, evaluated as a decrease in MGMT; and (3) the increased capacity of temozolomide to pass through the BBB, assessed as a lower P-gp level and a higher concentration of temozolomide in brain cells. As described in a previous report by our group (5), mifepristone diminishes the level of the anti-apoptotic protein Bcl-2 and impedes endothelial cell survival in tumors. This may be the mechanism by which mifepristone/temozolomide herein lowered the level of VEGF. The treatment with mifepristone or temozolomide alone decreased the levels of VEGF to a lesser extent, perhaps by the blockade of autocrine VEGF signaling through specific down-regulation of NRP-1. Additionally, a decline in the expression of P-gp was found when administering mifepristone/temozolomide. Thus, this combination treatment may allow for an enhanced intratumoral concentration of temozolomide and contribute to greater tumor cell death. The latter was evidenced by lower tumor proliferation during the drug treatment period. As can be appreciated, mifepristone appears to sensitize glioblastoma cells to the effects of temozolomide.
Hernandez-Hernandez et al. described a progesterone-induced increase in the expression of VEGF in the astrocytoma U373 cell line, and a mifepristone-induced reversal of this increase through recruitment of the steroid receptor coactivator (SRC-1) (38). Another possible mechanism leading to a lower level of VEGF is the regulation of Bcl-2, a member of a protein family composed of cell death regulators. It has been implicated in the differentiation of several cell types, including neuronal, epithelial, and hematopoietic cells, as well as in the survival of endothelial cells (39). Karl et al. described pro-angiogenic activity of Bcl-2 based on its ability to activate the NF-κB signaling pathway and elicit expression of the pro-angiogenic CXCL8 and CXCL1 chemokines in endothelial cells (40). According to a previous report by our group, mifepristone reduces Bcl-2 expression in glioma cells (5). Therefore, the diminished VEGF level observed herein could possibly be related to a decrease in Bcl-2 induced by mifepristone.
The BBB, on the other hand, has been the greatest obstacle for many promising drugs developed to treat glioblastoma. The brain microvascular endothelium is peculiar, characterized by a lack of fenestrations and adherens junctions and by the presence of drug efflux transporters, such as P-glycoprotein (P-gp, Abcb1), the multidrug resistance proteins (MRPs, Abcc1), and breast cancer resistance protein (BCRP, Abcg2) (41). Several studies have focused on the role of inhibiting drug efflux transporters to improve the chemotherapy response. P-glycoprotein is the best-characterized molecule of the class of efflux pump transporters, forming part of the BBB by removing drugs from the brain. This protein is expressed by endothelial cells in both healthy brain tissue and gliomas, and a key role has been attributed to it in the chemoresistance of several types of tumors (e.g., gliomas) (42). Consequently, it probably contributes to a low concentration of temozolomide in glioma tumor cells.
The present study found a significant drop in the level of P-gp in the Mif/Tz group. A decrease in the levels of P-gp in patients should enhance the intracellular distribution of temozolomide in brain tissue and trigger greater tumor cell death. Various transcription factors (in addition to transcriptional/translational regulation) are involved in the regulation of efflux pump transporters (43). P-gp is known to be regulated by a nuclear receptor, the pregnane X receptor (PXR) (44-46), which mediates the activation of several genes by xenobiotics, including several ABC transporters. Although the PXR promoter has not yet been characterized, dexamethasone is reported to boost PXR mRNA levels in primary cultures of human hepatocytes and rat hepatoma H4IIE cells, an effect blocked by mifepristone, suggesting that the GR pathway is involved in the regulation of these transporters (47, 48).
On the other hand, it has been reported that glioblastoma is characterized by aberrant activation of inflammatory responses. Von Wedel-Parlow et al. reported that the proinflammatory cytokines interleukin-1β (IL-1β) and tumor necrosis factor-α (TNF-α) affect the expression of cerebral ABC transporters in primary endothelial cells: the anti-inflammatory glucocorticoid hydrocortisone leads to an induction of Abcg2 (BCRP) and Abcc1 (MRP) mRNA in microvascular endothelial cells, whereas Abcb1 (P-gp) gene expression is downregulated (49). It has been reported that mifepristone decreased the levels of TNF-α in rats exposed to paraquat (50), and reduced the secretion of IL-6 and TNF-α in endometrial epithelial and stromal cells (51). However, more research is necessary to better understand the regulation and the role of mifepristone in efflux pump transporters.
Another strategy to improve the treatment response is to block the drug efflux transporters. Gooijer et al. reported approximately 1.5-fold greater accumulation of temozolomide in the brain with P-gp and BCRP inhibitors (52). These drug efflux transporters might be possible targets of mifepristone to improve the efficacy of temozolomide against glioblastoma.
In the current contribution, the participation of mifepristone in the inhibition of drug efflux transporters was explored by evaluating the level of P-gp and the intracerebral concentration of temozolomide, representing a direct and an indirect approach, respectively. The Mif/Tz rats exhibited a significantly lower level of P-gp and an increased intracerebral concentration of temozolomide compared to the Tz group. These results are consistent with the findings published by various authors: mifepristone inhibits the activity of P-gp in the gastric cell line SGC7901/VCR (37) and in KG1a leukemia cells (23), enhances doxorubicin accumulation in resistant human K562 leukemia cells (53), and increases the concentration of cisplatin in the tumors of mice given a combined cisplatin/mifepristone treatment (54). Hence, the blocking of drug efflux transporters by mifepristone could possibly increase the intracellular bioavailability of temozolomide in the brain and tumor cells of patients, which should improve the therapeutic response.
Another drug efflux transporter that plays an important role in treatment resistance is MRP, and blocking it could be an important strategy; mifepristone has been reported to exhibit selective MRP1 inhibition (55). Hence, the blocking of drug efflux transporters by mifepristone could possibly increase the concentration of temozolomide in the brain and, consequently, the exposure of tumor cells to the drug.
In the second part of the present investigation, tumor growth after the mifepristone/temozolomide treatment was monitored with a microPET/CT scanner measuring 18F-FLT uptake (Figure 6). There was a remarkable decrease at 7 weeks post-implantation, with molecular imaging showing no proliferative activity. Afterward, new proliferation was observed at 9 weeks post-surgery, indicating tumor relapse. Nevertheless, the animals maintained a constant body weight, and the proliferative activity did not rise further by the next measurement at 14 weeks. In the H&E images of the group at 7 weeks, there was a decrease in hypercellularity and vascular proliferation. However, after the end of the drug treatment, an infiltration of neoplastic cells with hyperchromatic nuclei was observed again, in addition to an increase in the mitotic index and pseudopalisading. Despite the reappearance of these typical features of glioblastoma, which are associated with a poor prognosis, the animals survived longer. These results were corroborated by the molecular images, in which tumor recurrence was observed at week 9. Moreover, 70% of rats given mifepristone/temozolomide survived 60-70 days and approximately 30% survived over 100 days. In glioblastoma patients, a relapsed tumor inevitably causes 100% mortality.
Another molecular mechanism explored presently was the effect of the Mif/Tz treatment on MGMT, which is related to DNA repair in tumor cells. Glioblastoma stem cells are reported to express high levels of MGMT (56) and P-gp, in both cases generating more resistance to temozolomide, and therefore a greater probability of tumor relapse (57). Several studies have suggested that stem cells may be responsible for resistance and recurrence in glioblastoma. In such a case, a challenge in the treatment of glioblastoma would be the removal not only of the tumor cells, but also the glioblastoma stem cells.
O6-methylguanine-DNA methyltransferase was found to decrease significantly by the end of the 3-week Mif/Tz treatment, confirming a previous finding by our group. Indeed, MGMT followed the same pattern as VEGF and P-gp: all three parameters decreased during the Mif/Tz treatment and then increased afterward, reaching levels similar to the control group within 14 weeks. In our study, drug treatments were given for only 3 weeks, and we did not observe adverse effects associated with the administration of mifepristone; the decrease in weight gain in the animals was due to the implantation of tumor cells. According to the literature, several clinical studies of mifepristone in patients with breast cancer (58), meningioma (59), and non-small cell lung cancer (60) have demonstrated that mifepristone has tolerable side effects, including nausea, lethargy, anorexia, fatigue, and hot flashes. Even when mifepristone has been taken daily for long periods of time, it has had only mild adverse effects; therefore, long-term administration of mifepristone may be feasible and well tolerated. We propose, in the near future, to test this possibility and to evaluate whether mifepristone offers greater benefits during tumor recurrence. According to the current results, mifepristone could possibly contribute to the modulation of tumor relapse in glioblastoma by decreasing the levels of VEGF, MGMT, and P-gp. Further research is needed to explore other mechanisms of drug resistance of glioblastoma tumors.
CONCLUSION
Mifepristone herein improved the effect of temozolomide. The mifepristone/temozolomide combination produced a sharply lower expression of VEGF, CD31, P-gp, and MGMT compared to the other groups with implanted cancer cells, including the untreated animals and those given mifepristone or temozolomide alone. Moreover, the combination treatment increased the intracerebral concentration of temozolomide and diminished tumor proliferation. The present results strongly suggest that mifepristone could serve as part of a strategy to overcome the resistance of glioblastoma tumors to temozolomide. Future research is required to determine whether the mifepristone/temozolomide regimen can regulate glioma stem cells and inhibit the mechanisms related to tumor relapse.
DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author.
ETHICS STATEMENT
The animal study was reviewed and approved by the Ethics Committee of the "Instituto Nacional de Cancerología".
AUTHOR CONTRIBUTIONS
ML-M participated in the experimental procedures for tumor cell implantation, helped with data processing, and performed the analysis of the results. SL-Z and MR-G designed the histological experiments. IV-L contributed to the LC/MS experiments for quantification of temozolomide in brain tissue. LM carried out the evaluation of the tumor growth by molecular imaging. PG-L planned and supervised the entire study. All authors read and approved the final version of the manuscript.
FUNDING
This work was partially financed by CONACYT (Mexico) through grant CB-258823.
Hydrological frequency analysis of large-ensemble climate simulation data using control density as a statistical control
Uncertainty in hydrological statistics estimated with finite observations, such as design rainfall, can be quantified as a confidence interval using statistical theory. Ensemble climate data also enable derivation of a confidence interval. Recently, the database for policy decision making for future climate change (d4PDF) was developed in Japan, which contains dozens of simulated extreme rainfall events for the past and 60 years into the future, allowing the uncertainty of design rainfall to be quantified as a confidence interval. This study applies an order statistics distribution to evaluate uncertainty in the order statistics of extreme rainfall from the perspective of mathematical theory, while a confidence interval is used for uncertainty evaluation in the probability distribution itself. An advantage of the introduction of an order statistics distribution is that it can be used to quantify the goodness-of-fit between observation and ensemble climate data under the condition that the extreme value distribution estimated from observations is a true distribution. The order statistics distribution is called the control density distribution, which is derived from the characteristic that order statistics from the standard uniform distribution follow a beta distribution. The overlap ratio of the control density distribution and frequency distributions derived from ensemble climate data is utilized for evaluation of the degree of goodness-of-fit for both data.
INTRODUCTION
The fifth report of the Intergovernmental Panel on Climate Change (IPCC) stated that there is no doubt that the climate system is warming and predicted that extreme rainfall in mid-latitude land areas would increase by the end of this century (IPCC, 2014). In light of this situation, research related to adaptation to climate change is increasing (e.g., Koninklijk Nederlands Meteorologisch Instituut (KNMI), 2020; Headquarters, U.S. Army Corps of Engineers (USACE), 2011). In Japan, design rainfall used for flood-control management has been estimated from rainfall data obtained from observations over several decades (Ministry of Land, Infrastructure, Transport and Tourism (MLIT), 2004). To develop adaptation measures for torrential rainfall associated with climate change, a flood risk evaluation introducing a risk-based approach has been proposed (e.g., Berkhout et al., 2014; Yamada et al., 2018; Yamada, 2019). The risk-based approach suggested by Yamada et al. (2018) and Yamada (2019), which introduces mutual uncertainty evaluation based on huge ensemble climate data and mathematical statistics, allows the evaluation of extreme phenomena that are hard to predict from observations. Ensemble climate databases developed in Japan (Mizuta et al., 2017; Yamada et al., 2018; Sasaki et al., 2008, 2011) are also utilized to set change ratios of rainfall at the design level in each region, with flood-control management on the basin scale following (MLIT, 2021).
In this study, we utilized the Database for Policy Decision Making for Future Climate Change (d4PDF) (Mizuta et al., 2017), which consists of simulations from an atmospheric general circulation model (AGCM), the Meteorological Research Institute AGCM version 3.2 (MRI-AGCM3.2) (Mizuta et al., 2012), with a horizontal resolution of 60 km (d4PDF-60 km), and dynamical downscaling (DDS) of the d4PDF-60 km using the Non-Hydrostatic Regional Climate Model (NHRCM) (Sasaki et al., 2008) to a horizontal resolution of 20 km over Japan (d4PDF-20 km). The series of analyses presented in this study were performed using rainfall data developed by Yamada et al. (2018) and Yamada (2019), which were downscaled from d4PDF-20 km to a horizontal resolution of 5 km (d4PDF-5 km) over the Hokkaido region, northernmost Japan, and its surrounding area.
In the process of design rainfall determination, goodness-of-fit testing is generally applied to evaluate the validity of an assumed distribution of extreme rainfall. The Kolmogorov-Smirnov test is one such test (Kolmogorov, 1933; Smirnov, 1939). When this test is applied to frequency analysis of extreme hydrological events (e.g., World Meteorological Organization (WMO), 1989; USACE, 1994; Japan Meteorological Agency (JMA), 2021), the results can be used to determine the range of uncertainty in extreme rainfall under an assumed distribution. This test can also be applied to evaluate the goodness-of-fit between ensemble climate data and observations (e.g., Tanaka et al., 2019; Shimizu et al., 2020). However, the power of the Kolmogorov-Smirnov test becomes weak at the tail of an assumed distribution. The acceptance region of this test is quite wide at the tail of the assumed probability distribution, which means that arbitrarily large extreme values cannot be rejected by the Kolmogorov-Smirnov test. In light of this weak power, a probability limit method theory, which has strong power at the tail of an assumed probability distribution, was proposed by Moriguti (1995a). Shimizu et al. (2018, 2020) introduced the probability limit method into the construction of confidence intervals. The confidence interval based on the probability limit method test quantifies, with high accuracy, the range of the probability distribution estimated from finite observation data. Also, the validity of the confidence interval derived from resampling of d4PDF-5 km was supported by its consistency with the confidence interval based on the probability limit method (Shimizu et al., 2018, 2020). Uncertainty in estimated probability distributions is quantified by the above confidence interval.
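The weak tail power noted here can be illustrated numerically. The sketch below uses the standard asymptotic approximation for the two-sided Kolmogorov-Smirnov critical value; the sample size n = 50 and significance level α = 0.05 are arbitrary example values.

```python
import math

# Illustration of the weak tail power of the Kolmogorov-Smirnov test.
# The KS acceptance band around an empirical CDF F_n has half-width
# D ~ sqrt(-ln(alpha/2) / (2n)) (asymptotic approximation), so near the
# tail (F close to 1) the band [F_n - D, F_n + D] still reaches 1 and
# arbitrarily large sample maxima cannot be rejected.

def ks_halfwidth(n, alpha=0.05):
    # Asymptotic two-sided critical value D = sqrt(-ln(alpha/2) / (2n))
    return math.sqrt(-math.log(alpha / 2) / (2 * n))

n = 50
d = ks_halfwidth(n)
print(f"n={n}, KS band half-width D = {d:.3f}")
# The empirical CDF at the largest observation is 1.0, so every value x
# with F(x) > 1 - D -- i.e. the entire upper tail -- lies in the band.
print(f"upper tail accepted wherever F(x) > {1 - d:.3f}")
```

Since the half-width shrinks only like 1/sqrt(n), a substantial slice of the upper tail stays inside the acceptance band even for large samples, which motivates the probability limit method discussed above.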
Thus, to evaluate uncertainty in the order statistics of extreme rainfall, we adopted the concept of a control band, first described by Gumbel (1958). Here, the control band expresses the range of the i-th order statistic following an assumed distribution. Specifically, we applied a distribution of the i-th order statistic of extreme rainfall, which is constructed using a characteristic of the beta distribution described later; we call this function the control density distribution. The control density distribution can be applied to evaluate the goodness-of-fit between ensemble climate data and observed data under the condition that the probability distribution used for construction of the control density distribution is true. The confidence interval proposed by Shimizu et al. (2020) is based on a hypothesis test, the probability limit method, and provides a diagnostic for rejection of an assumed distribution. The control density distribution, on the other hand, is not based on hypothesis-testing theory, and no test statistic exists; thus, it cannot be used to diagnose rejection of an assumed distribution. We describe our method, analyze the properties of the control density distribution and its relation to sample size, verify it using the Monte Carlo method, and apply the control density distribution method to d4PDF-5 km. The novelty of this study is the introduction of a control density distribution for the goodness-of-fit evaluation between ensemble climate data and observations.
CONCEPT OF THE CONTROL DENSITY DISTRIBUTION
In Statistics of Extremes, Gumbel (1958) discussed the theoretical distribution of the i-th order statistic of the Gumbel distribution. The control band can be derived from this theoretical distribution. The lower boundary x_i^lower and upper boundary x_i^upper of the control band of the i-th order statistic are derived from the following equations:

∫_{-∞}^{x_i^lower} f_{x_(i)}(x) dx = α/2,   ∫_{x_i^upper}^{+∞} f_{x_(i)}(x) dx = α/2,

where x_(i) is the i-th order statistic, f_{x_(i)}(x) is its theoretical probability density function (PDF), and α is the significance level. Connecting x_i^lower and x_i^upper for different i gives the control curve of the assumed distribution (Gumbel, 1958).
The control band and control curve are useful for statistical control, but f_{x_(i)}(x) is not always known for distributions other than the Gumbel distribution, even though the PDF provides more information than the control band and control curve alone. Thus, we suggest the following method to determine the control bands for an arbitrary distribution; from those control bands, the PDF f_{x_(i)}(x) can then be derived. The derived distribution is referred to as the control density distribution in this study.
Theoretical method of finding the control band for an arbitrary distribution
A control band of order statistics is constructed using the following procedure.
(1) Probability-representing function
Generally, a random variable's distribution is represented by a PDF or a cumulative distribution function (CDF). Moriguti (1995b) proposed the probability-representing function (PRF)

G(y) = F^{-1}(y),

where F(x) is a cumulative distribution function and its inverse function G(y) is defined as the PRF. Because the domain of G(y) is the range of F(x), it is [0, 1]. Using the PRF, the uniform distribution on [0, 1] can be transformed into any type of distribution: if a random variable Y follows the uniform distribution on [0, 1], the cumulative distribution function of the random variable X = G(Y) is F(x). In this way, the characteristics of the uniform distribution on [0, 1] can be generalized to any distribution, as discussed below.
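As an illustrative check (not code from the paper), the PRF of the standard Gumbel distribution can be evaluated with SciPy's percent-point function; pushing uniform draws through it reproduces the Gumbel law:

```python
import numpy as np
from scipy.stats import gumbel_r, kstest

# The PRF is the inverse CDF, G(y) = F^{-1}(y): pushing Uniform(0, 1)
# draws through G yields samples whose CDF is F (here, standard Gumbel).
rng = np.random.default_rng(0)
y = rng.uniform(size=5000)
x = gumbel_r.ppf(y)                  # X = G(Y)
stat, _ = kstest(x, gumbel_r.cdf)    # small KS distance: transform recovers F
```

The same two lines work for any distribution exposed by `scipy.stats`, which is what lets the uniform-based construction below carry over to an arbitrary assumed distribution.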
(2) Characteristics of the i-th order statistic of the uniform distribution on [0, 1]
Because the properties of the uniform distribution on [0, 1] can be extended to any type of distribution using the PRF, the characteristics of the order statistics of the uniform distribution on [0, 1] should be examined. The i-th order statistic x_(i) of the uniform distribution on [0, 1] follows a beta distribution, whose cumulative distribution function is

F_{x_(i)}(x) = B(x; α, β) / B(α, β),

where B(x; α, β) is the incomplete beta function, B(α, β) is the complete beta function, α and β are the parameters of the beta distribution, and for the i-th order statistic in a sample of size n, α = i and β = n − i + 1.
(3) Finding the distribution of the i-th order statistic for an arbitrary distribution
For the results shown in Figure 1, we assumed a sample size of 60 and plotted the PDFs of the 1st, 10th, 20th, 30th, 40th, 50th, and 60th order statistics derived from the uniform distribution. The functional form of an order statistic derived from the uniform distribution is the beta distribution. By projecting the control band, x_i^lower-uniform and x_i^upper-uniform, of the i-th order statistic derived from the uniform distribution through the PRF of the Gumbel distribution (or any other distribution), we obtain the control band of the i-th order statistic of the Gumbel distribution:

x_i^lower = G(x_i^lower-uniform),  x_i^upper = G(x_i^upper-uniform).

Deriving the control density distribution from the control band
Using the control band, the PDF of the i-th order statistic can be derived in reverse. The process is shown in Figure 2. Gumbel probability paper is used in this figure so that the CDF of the Gumbel distribution follows a straight line. The projection points of the i-th order statistic from the uniform distribution to the Gumbel distribution are plotted in Figure 2(a). The probability between two consecutive projection points is fixed at 0.5%, so connecting the projection points gives the control curve of the Gumbel distribution. The control curves representing the 99%, 80%, 60%, 40%, and 20% control bands are plotted in Figure 2(b). Figure 2(c) represents our concept of the control density distribution. For each order statistic, the density is calculated by dividing the fixed probability of 0.5% by the distance between two consecutive projection points. For example, the probability density between the lower boundary of the 99% control band and the lower boundary of the 98% control band is 0.005 divided by the distance between those two boundaries. Next, we checked the reliability of the control density distribution using the Monte Carlo method.
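A minimal sketch of the projection procedure, assuming a standard Gumbel target and the 0.5% probability step mentioned in the text (illustrative code, not the authors' implementation; `i = 30` and `n = 60` are example values):

```python
import numpy as np
from scipy.stats import beta, gumbel_r

i, n, dp = 30, 60, 0.005             # example order, sample size, 0.5% step
b = beta(i, n - i + 1)               # law of the i-th uniform order statistic

# 99% control band: project the beta quantiles through the Gumbel PRF
lower = gumbel_r.ppf(b.ppf(0.005))
upper = gumbel_r.ppf(b.ppf(0.995))

# Control density: the fixed probability step divided by the spacing of
# consecutive projection points (a piecewise-constant PDF estimate)
probs = np.arange(dp, 1.0, dp)
pts = gumbel_r.ppf(b.ppf(probs))
density = dp / np.diff(pts)
```

Replacing `gumbel_r` with any other `scipy.stats` distribution gives the control band and control density for that distribution, which is the point of routing everything through the uniform order statistics.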
The test procedure can be summarized as follows. The true distribution is the Gumbel distribution with location parameter μ = 0 and scale parameter β = 1, and the sample size is 60. Then, 60 × 10,000 samples were generated to obtain the probability densities of each order, which were compared to the control density distribution. The results are shown in Figure 3. The frequency distributions of the order statistics generated from the Gumbel distribution accord with the control density distribution quite well. Figure 3(c) shows that the histogram of the order statistics is almost identical to the control density distribution, indicating that the reliability of the control density distribution is high.
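The Monte Carlo check described above can be sketched as follows (an illustrative version: each sampled order statistic, mapped back through the Gumbel CDF, should follow Beta(i, n − i + 1), which is the law underlying the control density):

```python
import numpy as np
from scipy.stats import beta, gumbel_r, kstest

# Paper's setting: Gumbel with mu = 0, scale = 1, sample size n = 60,
# repeated m = 10,000 times; here we test one example order, i = 30.
rng = np.random.default_rng(0)
n, m, i = 60, 10_000, 30
samples = np.sort(rng.gumbel(loc=0.0, scale=1.0, size=(m, n)), axis=1)

# Map the i-th order statistic back to [0, 1]; it should be Beta(i, n-i+1)
u = gumbel_r.cdf(samples[:, i - 1])
stat, _ = kstest(u, beta(i, n - i + 1).cdf)
```

A small Kolmogorov-Smirnov distance here is the numerical counterpart of the visual agreement reported in Figure 3.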
Characteristics of the control density distribution
In frequency analysis of hydrological events, the sample size represents the period (in years) that the hydrological data cover. For Japan, observations over about 60 years are considered for flood-control management. It is very difficult to estimate rainfall with a return period significantly longer than the period of observation. On the other hand, the control density distribution estimates the ranges of order statistics following an assumed probability distribution. Figure 4 shows the 99% control band of the Gumbel distribution for sample sizes of 10, 50, 100, 200, and 500. Several conclusions can be drawn. First, for a sample of size n, the return period that the control band can cover ranges up to n; therefore, extrapolation is necessary for estimating extreme rainfall with a return period longer than the sample size. Second, connecting the upper and lower limits of the control bands of the different sample sizes gives straight lines that are parallel to the true Gumbel distribution (Figure 4(a)). The same behavior of the control band is obtained for different significance levels, which estimate the maximum values of the different sample sizes following a Gumbel distribution with the same parameters as the true Gumbel distribution. Hence, it is reasonable to extrapolate the control band with a straight line parallel to the true distribution. Third, for a specific return period, a larger sample size results in a narrower control band.
Figure 1. Practical method for finding the probability density function (PDF) of the i-th order statistic for an arbitrary distribution by projecting the beta distribution to the PDF of the targeted distribution.
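The third conclusion can be checked numerically. In this sketch, the order statistic matched to a given return period T is chosen with the plotting position i ≈ (1 − 1/T)(n + 1); that matching rule is an assumption made for illustration, not necessarily the paper's exact convention:

```python
from scipy.stats import beta, gumbel_r

def band_width_at_T(n, T=10, alpha=0.01):
    # Width of the (1 - alpha) control band for the order statistic whose
    # plotting position (1 - 1/T)(n + 1) matches return period T.
    i = min(n, max(1, round((1 - 1 / T) * (n + 1))))
    b = beta(i, n - i + 1)
    lo = gumbel_r.ppf(b.ppf(alpha / 2))
    hi = gumbel_r.ppf(b.ppf(1 - alpha / 2))
    return hi - lo

# 99% band width at the 10-year return period for growing sample sizes
widths = {n: band_width_at_T(n) for n in (10, 50, 100, 500)}
```

The widths shrink monotonically with n, matching the statement that, at a fixed return period, a larger sample yields a narrower control band.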
APPLYING CONTROL DENSITY DISTRIBUTION TO ENSEMBLE CLIMATE DATA
Figure 3. (c) For a specific return period, the Monte Carlo test agrees well with the control density distribution.
In this section, the control band and control density distribution are applied to a goodness-of-fit evaluation using d4PDF-5 km and observations. The details of the d4PDF data are as follows. The d4PDF-20 km comprises ensemble climate data obtained from an NHRCM with a horizontal resolution of 20 km, using the d4PDF-60 km as boundary conditions. This regional downscaling experiment was constructed from a past experiment targeting the 60 years from 1951 to 2010. The 4K experiment assumes an increase in the global average temperature of 4 K above pre-industrial levels and targets the 60 years from 2051 to 2110. The past experiment includes 50 ensemble members that perturb sea-ice conditions, sea-surface temperatures, and initial conditions, for a total of 60 years × 50 members = 3000 years. The 4K experiment includes 6 sea-surface temperature patterns and 15 ensemble members that perturb those patterns, for a total of 60 years × 6 sea-surface patterns × 15 members = 5400 years. The reproducibility of d4PDF-5 km, developed by Yamada et al. (2018) and Yamada (2019), was verified through comparison with observations (Hoshino et al., 2020).
The target area is the Obihiro reference point on the Tokachi River, located in Hokkaido. We use observations of the basin-average 3-day annual maximum rainfall covering 100 years, and the basin-average 3-day annual maximum rainfall of the d4PDF-5 km past experiment totaling 3,000 years.
In conventional frequency analysis, the following steps are conducted (e.g., Stedinger et al., 1993; Takara, 1998). First, a distribution type is selected for the observations. Second, the best-fitting parameters are determined from the observations. Third, hypothesis testing is used as the stochastic control to determine whether the distribution type should be changed. Fourth, if the best-fitting distribution passes the hypothesis test, the distribution is considered the assumed distribution and applied to estimate rainfall for a return period larger than the sample size.
Figure 4. Characteristics of the control density: (a) 99% control bands of the same distribution for different sample sizes; (b) for a specific return period, a larger sample size means a narrower control band.
In extreme value theory (Fisher and Tippett, 1928; Gnedenko, 1943), even when the sample follows the assumed distribution and the best-fitting curve is used, uncertainty is present. In this study, the uncertainty in one sample can be quantified using the control band and control density distribution.
To use the ensemble climate dataset to inform consideration of the assumed distribution, we propose two ideas: (1) Continue to use the best-fitting distribution of the observations as the assumed distribution, because there is no model bias in the observations and the results can easily be compared to those of traditional frequency analysis.
(2) Use the average of the best-fitting distributions of the ensemble climate dataset as the assumed distribution. With this method, the estimated distribution is affected by the climate model; however, it includes many ensemble members and can thus complement idea (1). Figure 5 shows the analysis of idea (1). We considered the annual maximum 3-day rainfall. The Kolmogorov-Smirnov test and the probability limit method were both applied. As shown in the figure, at the same 5% significance level, the probability limit method had a smaller acceptance region at the tail ends of the estimated distribution than the Kolmogorov-Smirnov test. This means the probability limit method had stronger testing power in these regions, in line with Shimizu et al. (2020). This result is important for frequency analyses of extreme hydrological events, which often focus on long return periods that fall into the tail ends of the distribution. Using the hypothesis tests for stochastic control, all of the observations were in the acceptance region for both tests; hence, no observations should be considered outliers, and the estimated distribution should not be rejected at the 5% significance level.
Consider the best-fitting distribution of the observation data as the assumed distribution
The 99% control band in the figure is for the 100-year sample, the same period as the observation. For comparisons with d4PDF-5 km, we also needed to construct a control density distribution with the same sample size as d4PDF-5 km, which is 60 years. The results are shown in Figure 5(b). We can assess the uncertainty in the rainfall for a certain return period using the control bands. For example, Figure 5(c) shows the risks of exceeding the observations for return periods of 200, 100, 50, and 20 years are 7%, 7%, 21%, and 39%, respectively. For conventional frequency analysis, the largest observation value has a return period that is much longer than 100 years; however, with the present method, it has the same return period as the length of the observation, i.e. 100 years, and we know that the risk of exceeding that value is 7%.
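A hedged sketch of this exceedance-risk calculation: the mapping from return period T to rank i via the plotting position (1 − 1/T)(n + 1) is an illustrative assumption, and `loc`/`scale` stand in for the fitted Gumbel parameters of the assumed distribution:

```python
from scipy.stats import beta, gumbel_r

def exceedance_risk(obs, T, n, loc=0.0, scale=1.0):
    # Rank i matched to return period T via the plotting position
    # (1 - 1/T)(n + 1), clipped to [1, n] -- an illustrative choice.
    i = min(n, max(1, round((1 - 1 / T) * (n + 1))))
    # Map the observed value to [0, 1] under the assumed Gumbel law,
    # then take the survival probability of the Beta(i, n-i+1) order law.
    u = gumbel_r.cdf(obs, loc=loc, scale=scale)
    return float(beta(i, n - i + 1).sf(u))
```

Larger observed values give smaller exceedance risks, which is the behavior behind the 7%/7%/21%/39% figures quoted above (those exact percentages depend on the fitted parameters and observed values, which are not reproduced here).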
For a certain return period, such as 100 years, we now have not only the expected values of extreme rainfall events but also their whole distribution for the 100-year return period. Hence, we can compare them to the d4PDF-5 km ( Figure 5(c)). We can see that the shape of the control density distribution agrees well with the shape of the histogram; however, the histogram appears to be shifted to the left compared to the control density distribution. The reason for this shift may be that the assumed distribution was determined according to the best-fitting Gumbel distribution of the observation. Also, the overlap ratio of the control density distribution and frequency distributions of each order, which are derived from ensemble climate data, can be applied as a goodness-of-fit evaluation index for ensemble climate data against observations. When the overlap ratio is higher, the goodness-of-fit is considered high under the condition that the assumed distribution is regarded as the true distribution.
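The overlap ratio itself can be computed, for example, as the histogram-based overlapping coefficient of the two distributions; the binning below is an illustrative choice, not specified in the text:

```python
import numpy as np

def overlap_ratio(sample_a, sample_b, bins=50):
    # Histogram-based overlapping coefficient: 1 means identical
    # empirical densities, 0 means fully disjoint supports.
    lo = min(sample_a.min(), sample_b.min())
    hi = max(sample_a.max(), sample_b.max())
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(sample_a, bins=edges, density=True)
    q, _ = np.histogram(sample_b, bins=edges, density=True)
    return float(np.sum(np.minimum(p, q)) * (edges[1] - edges[0]))
```

One sample would be draws from the control density distribution and the other the order-statistic values pooled from the ensemble members; a ratio near 1 then indicates high goodness-of-fit under the assumed distribution.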
Consider the average of the best-fitting distribution of the ensemble climate dataset to be the assumed distribution
To evaluate the validity of the control density distribution, we next use the average of the best-fitting distributions of the ensemble climate data as the assumed distribution. The results are shown in Figure 6. First, we can compare the best-fitting Gumbel distribution of the observations to the average of the best-fitting distributions of the d4PDF-5 km. The plot shifts to the right, which agrees with the previous results. Furthermore, comparing Figures 5(c) and 6(c) shows that, in the current case, not only the shape but also the position of the control density distribution agrees better with the histogram than in the previous case. In the current case, the risks of exceeding the observations for return periods of 200, 100, 50, and 20 years become 14%, 14%, 34%, and 57%, respectively. Here, in Figures 5(c) and 6(c), the black line corresponding to the 200-year return period represents the value derived in the following manner. The observed maximum value corresponding to the 100-year return period is extrapolated to a 200-year return period with the same slope as the Gumbel distribution estimated from the observed data and d4PDF-5 km. In Figure 5, the Gumbel distribution estimated from the observations (solid red line) and, in Figure 6, that obtained from d4PDF-5 km (solid green line), are used to estimate observations with a 200-year return period; an extrapolated value corresponding to a 200-year return period can then be obtained. To verify the validity of the proposed method, frequency distributions with arbitrary return periods derived from each member of d4PDF-5 km are compared in Figure 6 to the control density distribution, which is the Gumbel distribution fitted to a sample constructed from the average values of d4PDF-5 km. That figure confirms that the two distributions accord with each other well, supporting the validity of the proposed control density distribution. A comparison with the values of the previous case reveals some implications.
SUMMARY
Observational data, which we use for various decision making such as flood-control management, are essentially limited compared to the enormous degrees of freedom in the climate system. Therefore, it is necessary to incorporate confidence intervals into the discussion in order to understand the statistics of extremes. To quantify uncertainty in statistics of extremes, we adopted the probability limit method, which has high testing power at the tail of an assumed distribution, for the derivation of confidence intervals. In the probability limit method, samples of order statistics from the uniform distribution on [0, 1] are generated by the Monte Carlo method, and the occurrence probability of the order statistics located nearest the tail end of the beta distribution in each sample is extracted. The distribution of the occurrence probability of these order statistics is defined as the distribution of the test statistic. This enables an analytic derivation of the distribution of possible thresholds for each order at a given significance level and achieves high test power at the tail of the distribution. Therefore, confidence intervals based on the probability limit method, which were proposed in our previous study (Shimizu et al., 2020), quantify the uncertainty in the estimated probability distribution with high accuracy, while the control density distribution quantifies the probability distribution of order statistics with arbitrary return periods.
Figure 5. Method considering the best-fitting distribution of the observations as the true distribution: (a) the relationship between observations and the control density; (b) the general relationship between ensemble climate simulation data and the control band; (c) the relationship between ensemble climate simulation data and the control density for a specific return period.
In addition, we constructed a method for evaluating the goodness-of-fit between observations and ensemble climate data under the assumption that the population distribution is the probability distribution of extreme rainfall estimated from the observations. Moreover, the degree of goodness-of-fit between observations and the ensemble climate data can be quantified through calculation of the overlap ratio between the frequency distribution calculated from the ensemble climate data and the control density distribution for arbitrary return periods. The goodness-of-fit evaluation based on the control density distribution is not applied to decide rejection of an assumed distribution, because the derivation of the control density distribution does not include a theoretical test-statistic distribution. Therefore, for threshold estimation of extreme rainfall and for deciding on the possible rejection of an assumed distribution, a confidence interval based on the probability limit method should be used. On the other hand, when quantification of the consistency of the order statistics of ensemble climate data against observations is required, the control density distribution might be useful.
Figure 6. Method considering the average of the best-fitting distributions of the ensemble climate dataset as the true distribution: (a) the relationship between observations and the control density; (b) the general relationship between ensemble climate simulation data and the control band; (c) the relationship between the ensemble climate simulation data and the control density for a specific return period.
"year": 2021,
"sha1": "cda0eb3868c685ff7037ce05501d5e424d9739cf",
"oa_license": "CCBY",
"oa_url": "https://www.jstage.jst.go.jp/article/hrl/15/4/15_84/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "3ad5afe78375983e36a64456c1f96b5c9991d94c",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
2-Deoxy-D-Glucose Treatment of Endothelial Cells Induces Autophagy by Reactive Oxygen Species-Mediated Activation of the AMP-Activated Protein Kinase
Autophagy is a cellular self-digestion process activated in response to stresses such as energy deprivation and oxidative stress. However, the mechanisms by which energy deprivation and oxidative stress trigger autophagy remain undefined. Here, we report that activation of AMP-activated protein kinase (AMPK) by mitochondria-derived reactive oxygen species (ROS) is required for autophagy in cultured endothelial cells. AMPK activity, ROS levels, and the markers of autophagy were monitored in confluent bovine aortic endothelial cells (BAEC) treated with the glycolysis blocker 2-deoxy-D-glucose (2-DG). Treatment of BAEC with 2-DG (5 mM) for 24 hours or with low concentrations of H2O2 (100 µM) induced autophagy, including increased conversion of microtubule-associated protein light chain 3 (LC3)-I to LC3-II, accumulation of GFP-tagged LC3 positive intracellular vacuoles, and increased fusion of autophagosomes with lysosomes. 2-DG treatment also induced AMPK phosphorylation, which was blocked by either co-administration of two potent anti-oxidants (Tempol and N-Acetyl-L-cysteine) or overexpression of superoxide dismutase 1 or catalase in BAEC. Further, 2-DG-induced autophagy in BAEC was blocked by overexpressing catalase or siRNA-mediated knockdown of AMPK. Finally, pretreatment of BAEC with 2-DG increased endothelial cell viability after exposure to hypoxic stress. Thus, AMPK is required for ROS-triggered autophagy in endothelial cells, which increases endothelial cell survival in response to cell stress.
Introduction
Autophagy is a tightly regulated catabolic process involving the degradation of cellular components using lysosomal machinery. This process plays an important role in cell growth, development, and homeostasis by maintaining a balance between the synthesis, degradation, and subsequent recycling of cellular products. Autophagy is a major mechanism by which a starving or stressed cell reallocates nutrients from ancillary processes to more essential ones [1,2]. For example, autophagy can be induced by hypoxia [3], energy deprivation [4], starvation [5] and ischemia [6]. Mechanistically, autophagy is initiated when the autophagosome, a double-membrane structure, is formed to surround certain targeted cytoplasmic proteins and organelles. This process and the double-membrane structures are associated with the conversion of the microtubule-associated protein light chain 3B-I (LC3-I) to LC3B-II. The protein/organelle containing autophagosome fuses with a lysosome to degrade its inner contents [1]. Lysosomes can be disrupted by chloroquine or bafilomycin A to block autophagosome degradation and provoke autophagosome accumulation, which is marked by an increase in LC3-II [7].
Increasing evidence suggests that autophagy plays an important role in the cardiovascular system under physiological and pathological conditions including ischemia-reperfusion injury in the heart and other organs [8], cardiomyopathy [9], myocardial injury, atherosclerosis [10,11], and vascular pathology in Alzheimer's disease [12].
Reactive oxygen species (ROS) and reactive nitrogen species (RNS) are reported to be important in mediating autophagy [13,14]. ROS have also been reported to stabilize autophagosomes during periods of nutrient deprivation, hypoxia, ischemia-reperfusion injury, and general cell stress [15]. For example, during cellular starvation or nutrient deprivation, increased generation of mitochondria-derived hydrogen peroxide (H2O2) induces oxidation and consequent inhibition of Atg4, the cysteine proteases (autophagins) that play crucial roles in autophagy by proteolytic activation of Atg8 paralogs for targeting to autophagic vesicles by lipid conjugation, as well as in subsequent deconjugation reactions [16]. Despite growing evidence that redox regulation of the cysteine protease Atg4 by ROS correlates with the occurrence of autophagy, the mechanistic details of how ROS/RNS initiate autophagy remain to be elucidated.
AMPK is a serine/threonine kinase that operates as a metabolic switch engaged when cellular ATP becomes depleted. Upon activation, AMPK induces formation of the tuberous sclerosis complex to inhibit phosphorylation of the mammalian target of rapamycin (mTOR), which triggers autophagy through two downstream signaling partners, ribosomal protein S6 kinase and 4E-binding protein 1 (4E-BP1) [17]. Some recent reports have implicated AMPK in the regulation of autophagy. For example, aminoimidazole carboxamide ribonucleotide (AICAR) treatment and glucose deprivation of human mammary cancer-derived cells (MCF-7) inhibit autophagy [18]. Matsui and colleagues also reported that in cardiac myocytes, autophagy is induced by inhibition of mTOR, a phenomenon that protects against cell death [19].
Published studies from our laboratory and others have established an intricate balance between AMPK signaling and the redox state of vascular endothelial cells. ROS and RNS mediate AMPK activation induced by a wide range of stimuli, including hyperglycemia [20], hypoxia [21], treatment with metformin [22], nicotine [23], and therapy with statin drugs [24]. Conversely, AMPK activation inhibits the formation of ROS by NADPH oxidase and stimulates nitric oxide (NO) production by endothelial NO synthase (eNOS) [25]. Further, AMPK has also been implicated in c-Jun N-terminal kinase (JNK) activation, nuclear factor (NF)-kB-mediated transcription, and E-selectin and vascular cell adhesion molecule-1 (VCAM-1) expression in endothelial cells [26]. However, whether or not AMPK is involved in ROS-triggered autophagy is unknown.
2-Deoxy-D-glucose (2-DG) is a relatively specific blocker of glycolysis because, once phosphorylated by hexokinase, it cannot be further metabolized. Therefore, 2-DG triggers glucose deprivation without altering other nutrients or metabolic pathways [27]. In addition, 2-DG is reported to activate AMPK, increase ROS in cancer cells [28], and trigger autophagy [29]. Thus, 2-DG appears to be an ideal tool to dissect the interactions between autophagy, ROS, and AMPK. However, while 2-DG induces autophagy in cancer cells and mouse embryonic fibroblasts [30,31], its ability to trigger autophagy in endothelial cells has not been demonstrated. In this study, we provide the first demonstration that AMPK is required for ROS-triggered autophagy in endothelial cells exposed to 2-DG.
siRNA Transfection
Control non-targeted siRNA or siRNA targeting AMPKα or UCP-2 (10 mM) was added to OPTI-MEM reduced-serum medium (GIBCO, Invitrogen) with Lipofectamine 2000. Proliferating HUVEC at 60% confluence in a 6-well plate were incubated in 1.0 mL EBM supplemented with siRNA for 6 hrs. Then 1.0 mL EBM containing 2X fetal bovine serum and antibiotics was added, and the cells were cultured for 24 hrs.
Determination of ROS
BAEC were incubated in EBM without phenol red containing CM-H2DCFDA (5 mM) as a membrane-permeable probe to detect intracellular ROS, under various stimulation conditions. After 30 min of incubation, fluorescence was detected using a Synergy HT microplate reader (BIO-TEK) with excitation set at 490 nm and emission detected at 520 nm.
Measurement of AMP, ADP, and ATP
AMP, ADP, and ATP in BAEC were assessed by high performance liquid chromatography (HPLC) as described previously [32].
AMPK Activity Assay
AMPK activity was determined using the SAMS peptide as an AMPK substrate, as previously described [22].
Western Blotting and Quantification
Western blotting was performed as described previously [22]. Band intensity (area × density) on Western blots was analyzed by densitometry (model GS-700 Imaging Densitometer; Bio-Rad, Hercules, CA). Background was subtracted from the quantification area.
Light Microscopy and Immunofluorescence
Confluent BAEC on glass slides were transduced with LC3B-GFP-encoding BacMam virus (Invitrogen) for 24 hrs. BAEC were then incubated for 0.5 hr with 1 mM LysoTracker Red (Invitrogen) and rinsed several times with PBS. The cells on the slides were fixed with 4% formaldehyde for 10 min and permeabilized with 0.2% Triton X-100 for 20 min. The slides were mounted and observed with an OLYMPUS IX71 microscope (Olympus, Tokyo, Japan).
Statistical Analysis
Data are presented as mean ± standard error of the mean (SEM). Differences were analyzed using a one-way analysis of variance (ANOVA) followed by a Tukey post-hoc analysis. p values less than 0.05 were considered statistically significant.
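As an illustration only, with hypothetical densitometry values (the `control`, `dg`, and `dg_cat` numbers are invented for this sketch and are not the study's data), the stated analysis might look like:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
# Hypothetical densitometry readings (arbitrary units), n = 6 per group
control = rng.normal(1.0, 0.1, 6)   # untreated
dg      = rng.normal(2.0, 0.1, 6)   # 2-DG
dg_cat  = rng.normal(1.4, 0.1, 6)   # 2-DG + catalase overexpression

# One-way ANOVA across the three groups, as described in the text
f, p = f_oneway(control, dg, dg_cat)

# Mean +/- SEM reporting for one group
sem = control.std(ddof=1) / np.sqrt(len(control))
```

A significant ANOVA p value would then be followed by a Tukey post-hoc comparison (e.g., via `statsmodels`) to identify which group pairs differ.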
2-DG induces autophagy in endothelial cells
To establish whether 2-DG exposure triggered autophagy in endothelial cells, BAEC were exposed for 2 to 24 hrs to 5 mM 2-DG, a dose comparable to the physiologic concentration of D-Glucose, and the conversion of LC3-I to LC3-II was then determined. Exposure of BAEC to 2-DG for 24 hrs markedly increased the conversion of the cytoplasmic LC3-I to the autophagosomal membrane-bound LC3-II, indicating 2-DG might induce autophagy (Fig. 1A). However, rather than inducing autophagy, an increase in LC3-II could instead be due to 2-DG repressing autophagosome fusion with lysosomes and degradation of LC3-II. To determine the mechanism of 2-DG action, we examined the effect of disrupting lysosomal function using chloroquine and bafilomycin A. These compounds increase lysosomal pH and interfere with the function of lysosomal enzymes [7], thereby increasing autophagosome accumulation in the cell. BAECs treated with 2-DG and chloroquine or bafilomycin A had higher levels of LC3-II than cells treated with 2-DG alone (Fig. 1B), which indicates 2-DG acts by inducing autophagosome formation and does not disrupt their downstream maturation into autophagolysosomes.
To confirm the effect of 2-DG on autophagosome formation, immunofluorescence analysis of BAEC expressing GFP-LC3 and stained with LysoTracker (an organelle-selective fluorescent probe that labels and tracks acidic organelles such as lysosomes) was performed. 2-DG increased the formation of autophagosomes, indicated by an increase in the number and distribution of GFP-tagged LC3 spots (Fig. 1C). Treatment of cells with both 2-DG and chloroquine further increased the number and distribution of LC3-GFP spots in these cells. Further, 2-DG increased fusion of GFP-LC3-tagged vacuoles with lysosomes, indicated by co-localization of GFP-LC3 with LysoTracker-stained vesicles (Fig. 1C, merged image). This demonstrates that 2-DG treatment increased fusion of autophagosomes with lysosomes, a definitive event in the induction of cellular autophagy. Together, these data show that 2-DG induces conversion of LC3-I to LC3-II and triggers autophagy in endothelial cells.
H2O2 mediates 2-DG-induced autophagy
Current literature indicates that H2O2 induces autophagy in HeLa cells and HEK293 cells [14,33]. These findings are intriguing because nutrient starvation (such as 2-DG treatment) is known to generate ROS in a variety of cell types. We therefore tested whether H2O2 could also trigger autophagy in endothelial cells. Treatment of BAEC for 30 min with H2O2 (100 µM) increased the LC3-II-to-actin ratio 1.6-fold (Fig. 1D). We also reasoned that blocking the effects of ROS should mitigate the phenotype of 2-DG-induced autophagy. We tested this hypothesis by assessing the effect of 2-DG on the conversion of LC3-I to LC3-II in BAEC overexpressing either SOD1 or catalase from an adenovirus vector. SOD1 is an enzyme that buffers superoxide by converting it to H2O2, and catalase then scavenges H2O2 (Figure S1 in Text S1). Both SOD1 and catalase overexpression decreased 2-DG-enhanced LC3-II conversion: 2-DG treatment of cells expressing the GFP negative control produced a 2.0-fold increase in LC3-II conversion, which was higher than the 1.5-fold increase observed in SOD1-overexpressing cells and the 1.4-fold increase in catalase-overexpressing cells (Fig. 1E). To further elucidate the effect of antioxidant enzyme overexpression on 2-DG-induced autophagosome formation, immunofluorescence analysis of BAEC expressing GFP-LC3 was performed. 2-DG treatment (lower three panels) increased the formation of autophagosomes (green fluorescent spots) compared to untreated cells (Fig. 1F). However, SOD1 and catalase overexpression attenuated the 2-DG-mediated formation of autophagosomes. Overall, these results indicate that ROS are critically involved in 2-DG-induced autophagy and that H2O2 treatment of endothelial cells induces autophagy.
2-DG induces AMPK activation in a dose- and time-dependent manner
To further establish whether ROS are responsible for 2-DG-induced AMPK activation, we performed a time-course study to determine whether ROS production precedes AMPK activation. Confluent BAEC were stimulated with 5 mM 2-DG for 5 to 120 min, and AMPK activation was measured by monitoring phosphorylation of AMPKα-Thr172. As depicted in Fig. 2A, 2-DG treatment increased the phosphorylation of AMPKα-Thr172 within 5 min, and this reached a maximum 2.8-fold greater than that in untreated cells by 10 min after treatment, without affecting total AMPK levels. After 10 min, phosphorylated AMPKα-Thr172 levels decreased and returned to basal levels by 60 min after treatment. ACC is a well-established downstream target of AMPK, which phosphorylates it at Ser79. We measured ACC-Ser79 phosphorylation and observed a time course for ACC similar to that for AMPK (Figure S1 in Text S1). Further, 2-DG increased the phosphorylation of AMPK-Thr172 (Fig. 2B) and ACC-Ser79 (Figure S1 in Text S1) in a dose-dependent manner. 2-DG also increased AMPK activity 4-fold after 5 min compared to basal activity, reaching a peak 5.5-fold higher than basal activity after 10 min (Fig. 2C). These data establish that 2-DG induces AMPK activation in both a time- and dose-dependent manner.
AMPK is required for 2-DG-induced autophagy
AMPK is reported to be a key regulator of autophagy, via a mechanism that involves inactivation of mTOR, in cancer cells [30] and cardiac myocytes [31]. We therefore explored whether AMPK was also involved in 2-DG-induced autophagy in endothelial cells. BAEC were transduced with an adenovirus vector encoding a dominant-negative AMPK (AMPK-DN) to block cellular AMPK functions. 2-DG treatment for 24 hrs of cells expressing the GFP negative control induced a 2-fold increase in LC3-II levels, but AMPK-DN overexpression blocked the 2-DG-induced increase in LC3-II levels (Fig. 3A). We confirmed these results by siRNA-mediated silencing of the AMPKα gene, which encodes the critical catalytic subunit of AMPK, in HUVEC (Fig. 3B). As observed in BAEC, 2-DG treatment increased LC3-II levels 2-fold by 24 hrs post treatment in cells transfected with non-targeted control siRNA, but transfection with AMPKα1/2-targeted siRNA dramatically inhibited AMPK expression and reduced the 2-DG-induced increase in LC3-II levels to 1.3-fold (Fig. 3B). Taken together, these data show that AMPK is required for 2-DG-induced autophagy in endothelial cells.
AMPK induces autophagy via downstream mTOR signaling
mTOR integrates input from several upstream pathways by sensing the nutrient levels, bioenergetic status, and redox state of the cell [34]. We hypothesized that mTOR signaling was involved in the 2-DG-induced, AMPK-mediated induction of autophagy. We therefore examined the phosphorylation status of AMPK and mTOR after AICAR, 2-DG, and rapamycin treatment. The 70-kDa ribosomal protein S6 kinase (p70S6K) and eukaryotic initiation factor 4E binding protein 1 (4E-BP1) are the two best-characterized targets of mTORC. When we probed BAEC by Western blot analysis, we observed that both AICAR and 2-DG led to the phosphorylation of AMPK as well as its downstream target ACC (Fig. 2C). As predicted, both AICAR and 2-DG also attenuated the phosphorylation of mTOR and its two targets, p70S6K and 4E-BP1. Similarly, the canonical mTOR inhibitor rapamycin increased LC3-II levels without affecting AMPK phosphorylation (Fig. 2C and Figure S2 in Text S1). Thus, we concluded that 2-DG induces autophagy via activation of AMPK, which functions through downstream inhibition of the mTOR signaling pathway.
ROS are involved in 2-DG-induced AMPK activation, which can be attenuated by antioxidant treatment
Recent studies using cancer cell lines [28] and C. elegans [35] showed that glucose deprivation activates AMPK through the production of ROS. However, it is not clear whether ROS also regulate 2-DG-induced AMPK activation in endothelial cells. To evaluate this, we first examined the effects of antioxidants on 2-DG-induced AMPK activation in BAEC. Pre-treatment of BAEC with Tempol (an O2− scavenger; 10 mM) or NAc (a thiol antioxidant; 2 mM) significantly reduced 2-DG-enhanced phosphorylation of AMPK. In addition, NAc inhibited ACC phosphorylation induced by 2-DG (Fig. 4A and B, Figure S2 in Text S1). We further assayed AMPK activity in BAEC by quantifying phosphorylation of the SAMS peptide, an AMPK substrate, using a [32P]-ATP assay. After treatment with 2-DG (5 mM), AMPK activity was 3-fold greater than baseline within 10 min, and NAc treatment caused AMPK activity to drop from 3-fold to 2-fold over baseline levels (Fig. 4C). Finally, we overexpressed the antioxidant proteins SOD1 or catalase to further examine the involvement of ROS (Figure S3 in Text S1). Overexpression of either SOD1 or catalase, but not GFP, attenuated 2-DG-enhanced phosphorylation of AMPK and ACC (Fig. 4D). In the SAMS peptide phosphorylation assay, 2-DG treatment increased AMPK activity 7-fold, and this increase was attenuated to 2-fold by overexpression of SOD1 and to 2.5-fold by overexpression of catalase (Fig. 4E). Overall, these results indicate that ROS are involved in 2-DG-induced AMPK activation in endothelial cells, a phenomenon that is reversible when ROS levels are suppressed with antioxidants.
2-DG increases intracellular H2O2 levels in endothelial cells
We next determined the effect of 2-DG on ROS production in endothelial cells using CM-H2DCFDA as an intracellular H2O2 probe. It is known that AMPK activation by AICAR reduces the generation of ROS [20]. To circumvent the confounding potential of ROS attenuation by AMPK activation, we detected intracellular H2O2 in 2-DG-stimulated HUVEC transfected with AMPK-targeted siRNA. As expected, the intracellular H2O2 levels in cells in which AMPKα1/2 was knocked down were significantly higher than in cells transfected with the non-targeted control siRNA following a 10-min, 5-mM 2-DG treatment (Fig. 5A). Pretreatment with the catalase inhibitor 3-amino-1,2,4-triazole (ATZ) and the thioredoxin reductase inhibitor 1-chloro-2,4-dinitrobenzene
Suppression of Atg4 attenuates 2-DG-induced autophagy
Scherz-Shouval et al. [16] recently reported that ROS are essential for autophagy and specifically regulate the activity of Atg4. Although these studies were performed in Chinese hamster ovary and HeLa cell lines, it is likely that the results can be extrapolated to endothelial cells as well. Based on these findings, we examined Atg4 expression in HUVEC after 2-DG treatment or after transfection of Atg4-targeted siRNA. As shown in Fig. 5D, silencing of Atg4 efficiently blocked Atg4 expression in HUVEC. Further, 2-DG treatment of Atg4-silenced cells led to an attenuated increase in LC3-II levels. These findings are consistent with the observation that H2O2 inhibits Atg4, which in turn promotes lipidation of Atg8, leading to increased autophagy.
ROS-dependent AMPK activation is independent of changes in the AMP-to-ATP ratio
Scott et al. found that there are regulatory AMP- and ATP-binding sites in the subunits of AMPK [36]. As a consequence, high concentrations of ATP antagonize the AMPK activation that results from AMP binding to the heterotrimer. 2-DG appears to induce AMPK activation primarily through the decrease in intracellular ATP levels that results from blocking glycolysis. We speculated that ROS-mediated AMPK activation via glucose deprivation might not depend on a change in the AMP-to-ATP ratio. We therefore quantified the ADP:ATP and AMP:ATP ratios in BAEC exposed to 2-DG at different time points after treatment. Both AMP:ATP and ADP:ATP ratios significantly increased by 5 min after 2-DG addition (Fig. 6A). Neither pre-treatment of BAEC with Tempol or NAc (Fig. 6B) nor overexpression of SOD1 or catalase (Fig. 6C) inhibited the observed increase in these ratios. This is the first study to demonstrate that 2-DG-induced AMPK activation is mediated by ROS and is likely independent of the AMP:ATP nucleotide ratio.
Mitochondria are the main source of ROS in 2-DG-treated endothelial cells
Mitochondria, NAD(P)H oxidase, and xanthine oxidase [37] can generate ROS under both physiological and pathological conditions. We sought to identify the major source of ROS synthesis in 2-DG-treated endothelial cells. Mito-Tempol is a synthetic Tempol derivative that preferentially scavenges O2− from mitochondria. Pre-treatment of cells with 10 mM mito-Tempol for 30 min dramatically decreased 2-DG-induced phosphorylation of AMPK (Fig. 7A). Further, overexpression of MnSOD, a SOD isoform located in the mitochondrial matrix (Figure S3 in Text S1), attenuated the 2-DG-induced phosphorylation of AMPK and ACC (Fig. 7B).
Mitochondrial uncoupling protein-2 (UCP-2) is a mitochondrial anion carrier protein that mediates mitochondrial proton leakage and uncouples ATP synthesis from oxidative phosphorylation. UCP-2 overexpression (Figure S3 in Text S1) dramatically decreased the 2-DG-induced phosphorylation of AMPK and ACC (Fig. 7C). Consistently, transfection of UCP-2-targeted siRNA, but not control siRNA, enhanced both basal phospho-AMPK levels and 2-DG-induced AMPK phosphorylation in HUVEC (Fig. 7D and Figure S3 in Text S1). To further identify the sources of ROS in these cells, we infected cells with adenovirus vectors carrying dominant-negative subunits of NADPH oxidase to induce NAD(P)H oxidase dysfunction. p47phox and p67phox are subunits of the NADPH oxidase that activate NAD(P)H oxidase activity by binding with gp91phox on the plasma membrane [38], and expression of dominant-negative p47phox (p47-DN) and p67phox (p67-DN) leads to NAD(P)H oxidase dysfunction. p47-DN or p67-DN overexpression did not affect 2-DG-induced AMPK and ACC phosphorylation (Fig. 7E and Figure S3 in Text S1). Moreover, incubation of BAEC with the xanthine oxidase inhibitors allopurinol and oxypurinol did not block 2-DG-induced AMPK and ACC phosphorylation (Fig. 7F and G). These findings strongly suggest that mitochondria are the main source of the ROS induced by 2-DG in BAEC.
AMPK-regulated autophagy contributes to endothelial cell survival under hypoxic conditions
We next sought to demonstrate the physiological relevance of our findings. For this purpose, we exposed BAEC to hypoxic conditions and treated them with 2-DG to determine the role that autophagy plays under hypoxia. As shown in Figure 8A, we observed increased cell death in BAEC after 12 hrs of hypoxia. Interestingly, 2-DG pre-treatment prevented this hypoxia-induced cell death. We quantified the release of lactate dehydrogenase (LDH) from cells as a marker of plasma membrane damage, reflecting the degree of cell death. LDH release, and therefore cell death, in BAEC pre-treated with 2-DG (5 mM) was significantly lower than that in untreated cells (Fig. 8A). In these experiments, we were also able to demonstrate that whereas 2-DG protected cells from hypoxia-induced death, pre-treatment with 3-methyladenine (3-MA), an autophagy inhibitor, partially blocked the 2-DG-induced protection (Fig. 8B). Similarly, SOD1 or catalase overexpression blocked the 2-DG-induced protection and cell survival under hypoxic conditions (Fig. 8C). Finally, AMPK-DN overexpression prevented 2-DG-induced cell survival (Fig. 8D). Together, these results show that 2-DG induces autophagy through AMPK activation in a ROS-dependent manner, thereby protecting cells from hypoxia-induced cell death.
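LDH-release readouts are usually converted to percent cytotoxicity against spontaneous-release and maximum-release (full-lysis) controls. The paper does not state its exact formula, so the helper below sketches the conventional calculation with hypothetical absorbance values.

```python
def ldh_cytotoxicity_percent(sample: float, spontaneous: float,
                             maximum: float) -> float:
    """Conventional LDH cytotoxicity: release as a percentage of the
    full-lysis (maximum) control, after subtracting spontaneous release."""
    return 100.0 * (sample - spontaneous) / (maximum - spontaneous)

# Hypothetical absorbances: hypoxia alone vs hypoxia with 2-DG pre-treatment.
hypoxia_only = ldh_cytotoxicity_percent(0.9, 0.2, 1.2)  # roughly 70 percent
hypoxia_2dg = ldh_cytotoxicity_percent(0.5, 0.2, 1.2)   # roughly 30 percent
```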
Discussion
The present study has, for the first time, demonstrated that AMPK activation by mitochondrial ROS is required for the induction of autophagy in endothelial cells. Mechanistically, we found that ROS, via AMPK activation, increase autophagy through inhibition of mTOR and Atg4 activity, and that ROS-triggered autophagy increases endothelial cell survival under hypoxic conditions. Thus, our results reveal a novel signaling pathway in which mitochondrial ROS activate AMPK, which in turn increases autophagy; the induction of autophagy then serves to increase endothelial cell survival (Fig. 8E). Glucose deprivation induces a depletion of intracellular ATP [39], which consequently elevates the AMP:ATP ratio and activates AMPK [29]. A recent study demonstrated that glucose deprivation increases O2− and H2O2 generation in human colon and breast cancer cells [40]. Our results demonstrate that 2-DG induces autophagy in endothelial cells, a phenomenon not previously reported in this cell type. Specifically, we have shown that whether AMPK is activated via nutritional stress (as in exposure to 2-DG) or by induction of ROS synthesis, endothelial cells activate an autophagy signaling pathway rather than a cell death pathway. This result is consistent with other published data. For example, Matsuda et al. reported that 2-DG induces autophagy in A/J mouse peritoneal macrophages [41]. This process is also likely to be involved in the ability of 2-DG to prevent brain injury induced by ischemia-reperfusion [42]. 2-DG is also reported to decrease phosphorylation of mTOR and its downstream targets, p70S6K and 4E-BP1, in human breast cancer cells, leading to a reduced carcinogenic response and attenuated tumorigenicity [43]. AMPK activation also represses mTOR activity in cells with reduced energy stores or after AICAR treatment [44].
Consistent with these findings, our results demonstrate that 2-DG induces autophagy through an AMPK-regulated pathway, and that siRNA-mediated AMPK knockdown prevents conversion of LC3-I to LC3-II. Most compellingly, in validating the data showing the involvement of H2O2 in 2-DG-enhanced phosphorylation of AMPK, we were also able to demonstrate that ROS are involved in the coordinated effect of 2-DG and AMPK on autophagy. Overall, these data indicate a novel autophagy signaling pathway important for endothelial cell survival.
Our results further demonstrate that ROS may represent a general mechanism for AMPK activation under certain physiological conditions, including glucose deprivation. Consistent with these findings, Cai et al. demonstrated that glucose deprivation induces ROS production to activate AMPK in pancreatic β cells [45]. Several groups, including our own, have previously shown that diverse stimuli can induce ROS generation, and in general, this results in AMPK activation. Exposure to different oxygen concentrations, ranging from frank hypoxia [46] to hyperoxia [47], triggers AMPK activation via mitochondrial ROS production. In addition, many exogenous stimuli, such as free fatty acids, including palmitic acid and arachidonic acid [48], and pharmacological agents, such as metformin [22], etoposide, and resveratrol, have all been observed to increase AMPK activation through a mitochondrial ROS mechanism. All of these compounds are potentially important drug treatments. Metformin is a common treatment for diabetes and metabolic syndrome. Etoposide is a cancer therapeutic agent that inhibits topoisomerase II activity, and resveratrol is a botanical phytoalexin that has shown beneficial effects on hyperglycemia in animal models of diabetes [49]. Phosphorylation of AMPK via ROS also occurs in skeletal muscle during both stretch [50] and contraction [51]. These data support our concept that AMPK is an important redox sensor that operates effectively in a variety of physiological and pathological conditions.
Another important finding of this study is that 2-DG-induced, ROS-activated AMPK in turn limits mitochondrial ROS generation through a feedback system. Our data suggest an intricate balance between AMPK signaling and the redox state of vascular endothelial cells. AMPK activation maintains redox homeostasis by inhibiting intracellular ROS production from mitochondria and NAD(P)H oxidase, or by increasing antioxidant gene expression. Our group previously reported that AMPK activation up-regulates UCP-2 expression to prevent oxidant stress [52]. Additional indirect evidence of these redox-regulating properties has been described. For example, AICAR induces AMPK activation and attenuates the ROS production seen in hyperglycemia [20]. Rosiglitazone, a thiazolidinedione drug, also induces AMPK activation and prevents hyperglycemia-induced ROS production [53]. AMPK activation can also inhibit ROS production from other sources, such as NAD(P)H oxidase, as observed after adiponectin exposure [54]. Moreover, AMPK has also been implicated in the regulation of antioxidant enzyme expression in endothelial cells, directly regulating the forkhead box O (FoxO) 1 and 3 transcription factors [55] to induce expression of thioredoxin (Trx), an important ROS scavenger. Indeed, investigators have identified the AMPK-FoxO-Trx axis as part of the cellular antioxidant defense mounted against disparate ROS/cell-stress triggers, such as shear stress [56] and free fatty acid stimulation [48]. AMPK activation was also reported to increase the expression of MnSOD [57], and AMPKα1 deletion decreases the expression of SOD, catalase, γ-glutamylcysteine synthase, and Trx [58]. Therefore, AMPK activation appears to be an important target for maintaining cellular redox homeostasis.
Endothelial dysfunction is one of the earliest events in a number of cardiovascular diseases, including atherosclerosis, hypertension, and diabetes, and is characterized by accelerated endothelial cell death [59]. AMPK expression levels and activation are reduced in the early stages of these cardiovascular diseases [26,60]. Thus, defects in the ROS-AMPK-autophagy pathway described in these studies may contribute to the initiation and progression of endothelial dysfunction. Therefore, greater insight into the complex autophagic and apoptotic regulatory controls in the endothelium and vascular smooth muscle system is likely to lead to improved clinical treatments for related diseases.
Preparation and Characterization of Transparent Glass Ceramics Containing Na3Gd(PO4)2
Transparent glass ceramics with the mole percent composition 21.3Na2CO3-2.2Gd2O3-38SiO2-36.5H3BO3-2.0P2O5 were prepared by the melt crystallization method. The heat treatment schedule, crystal phase, micromorphology and transmittance of the samples were characterized by differential scanning calorimetry (DSC), X-ray diffraction (XRD), scanning electron microscopy (SEM) and UV-Vis-NIR spectrophotometry, respectively. The optimum heat treatment condition is 730 °C for 2 h. The Na3Gd(PO4)2 crystal phase precipitated homogeneously throughout the glass matrix. FTIR spectra confirmed the presence of PO4 groups in the glass ceramic samples. The refractive indices of the glass ceramic and glass samples were measured. In the visible region, the transmittance of the glass ceramics reaches 80%.
Introduction
In recent years, research on glass-ceramics has attracted wide attention. Glass-ceramics are a type of composite material comprising crystalline and glass phases, obtained by heat treatment of a precursor glass with proper control of the crystallization time. The crystals in glass-ceramics are formed by nucleation and grain growth during heat treatment [1-8]. Glass-ceramics therefore combine the advantages of both glass and crystal: they are easy to prepare, have good transmittance, and possess stable physical and chemical properties. They have been applied in thermology, chemistry and biology. Among them, phosphate glass-ceramics, with their moderate phonon energy, low scattering, high refractive index and efficient energy transfer between ions, have become a research hotspot in recent years [9-15]. For example, Tomasz K. Pietrzak et al. studied the preparation of nano-sized Li3Me2(PO4)2F3 glass-ceramics, using different transition metals (Me = V, Fe, Ti) to dope LiF-Me2O3-P2O5 glasses, and examined their effects on the formation and crystallization of the glass matrix [16]. S. Suresh et al. studied the effect of nickel ion coordination on the spectral properties of multicomponent CaF2-Bi2O3-P2O5-B2O3 glass-ceramics [17]. Swati Soman et al. studied the effect of isothermal heat treatment on the phase transition, microstructure and ionic conductivity of Li2O-Al2O3-TiO2-P2O5 glass-ceramics [18].
In this work, transparent glass-ceramics containing the Na3Gd(PO4)2 crystal phase were prepared by the melt crystallization method. The structure of the precursor glass and glass-ceramics was studied, and the effect of heat treatment on the morphology of the samples was analyzed. Infrared spectroscopy was used to study the microstructural changes of the precursor glass and glass ceramics before and after heat treatment.
Experimental
Glass samples with a mass percentage ratio of 21.3Na2CO3-2.2Gd2O3-38SiO2-36.5H3BO3-2.0P2O5 were prepared by the melting method. The purity of Gd2O3 was 99.99%, and the other reagents were analytically pure. Batches of about 20 g of raw materials were well mixed in a mortar and placed in covered corundum crucibles under an air atmosphere. The mixed materials were heated at 1200 °C for 1 h in a resistance furnace at a heating rate of 2 °C/min, and the system was then heated to 1400 °C for 2 h. Subsequently, the melt was poured into a steel mold and annealed at 450 °C for 2 h; the glass was then cooled to room temperature in the annealing furnace, and the resulting transparent, stress-relieved glass sample was labeled PG. After heat treatment, the samples were transformed into glass ceramics. The glass ceramic samples were cut into small pieces of 10 mm × 10 mm × 2 mm and polished for the other tests.
The differential scanning calorimetry (DSC) analysis of the glass powder was carried out on an SDT 2960 thermal analyzer (TA Instruments) over the temperature range of 200-900 °C at a heating rate of 10 °C/min; Al2O3 was used as the reference. Glass ceramic samples were ground into fine powder in an agate mortar, and the crystalline phases of the glass ceramics obtained under different heat treatment conditions were determined with a Rigaku 2500V X-ray diffractometer (Japan) using Cu Kα radiation over the angular range of 10-90° at a scan rate of 4°/min. The transmittance of the samples was measured with an ultraviolet-visible-near-infrared spectrophotometer (Shimadzu UVmini-1240) in the range up to 1100 nm. The microstructure of the glass ceramics was characterized by scanning electron microscopy (SEM, JEOL JSM-7610F) operated at 10 kV. The refractive index of the samples was measured with an Abbe refractometer (2WAJ). Fourier transform infrared spectra of the samples were recorded on an FTIR-8400S spectrometer in the wavenumber range of 1500-400 cm−1, using KBr pellets.

Result and discussion

Figure 1(a) shows the DSC curve of the precursor glass. As seen in Figure 1, there is a distinct exothermic peak at 730 °C. On the basis of the DSC results and experiments, the detailed heat treatment schedules of the glass samples were established; they are shown in Table 1. Figure 2 shows the X-ray diffraction patterns of glass PG and glass-ceramic samples GC1-GC3 under the different heat treatment schedules. By comparing the diffraction peaks with PDF standard cards, the crystal phase of the samples was determined to be Na3Gd(PO4)2 (PDF card 38-0059). The XRD pattern of sample PG shows no characteristic diffraction peaks, and its broad band stems from typical amorphous SiO2, indicating that it is completely amorphous.

After heat treatment, obvious diffraction peaks appeared in sample GC1, indicating that the sample contains a crystalline phase. With increasing crystallization time, the diffraction peaks of samples GC2-GC3 intensify and sharpen, indicating that the crystal content increases gradually. According to the differential thermal analysis of the sample, phosphate glass-ceramics with high crystallinity and uniform grain size can be obtained; the optimum heat treatment condition is to hold the glass-ceramics at 730 °C for 2 h. The grain size was analyzed and calculated with the Scherrer formula (1):

D = Kλ / (β cos θ)   (1)

where D is the grain size, K is the constant 0.943, λ is the X-ray wavelength, θ is the Bragg angle, and β is the full width at half maximum of the diffraction peak converted to radians, i.e. β = (FWHM/180) × 3.14. According to the calculation, the grain size of the sample is about 254 nm.

Figure 3 shows the SEM images of PG and the glass ceramics under the different heat treatment schedules. From the images it can be seen that there is almost no crystalline phase in PG. Sample GC1 has grains of uneven size and a low degree of crystallization. With increasing crystallization time, the grains of GC2 grow with uniform size and spherical morphology. The grain size of sample GC3 increases further, and the initially non-uniform size distribution varies greatly, resulting in irregular shapes and obvious agglomeration. These results show that the crystallinity increases with crystallization time, as do the grain size and number; the crystallization time is thus an important factor affecting the growth and distribution of the grains. As seen in Figure 4, the transmittance of GC1 is high in the visible-infrared region; the SEM images show that there are few grains in sample GC1, so they have little influence on the refractive index and on light reflection. The transmittance of samples GC2-GC3 decreases because of the increased number of grains and degree of crystallization in the glass-ceramics. The transmittance of sample GC3 is the lowest because, with increasing heat treatment time, the crystal grains grow and multiply and agglomeration occurs, which reduces the gaps between crystals, increases light scattering and diffraction, and increases light loss. It is noticeable that the transmittance of all samples tends to decrease at 435 nm, and this effect is more prominent with increasing degree of crystallization.
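The Scherrer estimate above can be scripted directly. The wavelength and peak values below are assumptions for illustration (Cu Kα, λ = 0.15406 nm; the paper does not report its measured FWHM or peak position), with K = 0.943 and β converted to radians as described in the text.

```python
import math

def scherrer_grain_size(fwhm_deg: float, two_theta_deg: float,
                        k: float = 0.943,
                        wavelength_nm: float = 0.15406) -> float:
    """Scherrer estimate D = K*lambda / (beta * cos(theta)),
    with beta = FWHM converted from degrees to radians."""
    beta = math.radians(fwhm_deg)            # beta = (FWHM/180) * pi
    theta = math.radians(two_theta_deg / 2)  # Bragg angle from the 2-theta position
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative only: a 0.033-degree-wide peak at 2-theta = 31 degrees
# gives a grain size on the order of the reported 254 nm.
d_nm = scherrer_grain_size(0.033, 31.0)
```

Note that broader peaks (larger β) give smaller grain sizes, so the sharpening of the GC2-GC3 peaks with longer crystallization time is consistent with grain growth.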
This phenomenon can be explained by Henry theory, in which the intensity of scattered light in a glass ceramic follows a λ−8R7 relationship, where λ is the wavelength of light and R is the average radius of the crystals in the glass ceramic [19]. Thus, as the grain size increases, the transmittance of the glass ceramic decreases with decreasing wavelength. The refractive indices of the samples are shown in Table 2. As shown in Table 2, the refractive index of PG is similar to that of GC1 and GC2, but the refractive index of GC3 is much larger. This indicates that as the heat treatment time increases, the grains of the glass-ceramics grow larger, the refractive index increases and the transmittance decreases. Figure 5 shows the infrared spectra of glass ceramics GC1-GC3 in the frequency region between 400 and 1400 cm−1. The FTIR bands of the samples and their assigned vibrational modes are listed in Table 3.
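The cited λ−8R7 dependence can be made concrete with a quick relative-intensity calculation. The reference wavelength and radius below are arbitrary illustration values (no proportionality constant is given in the text, so only ratios are meaningful).

```python
def relative_scattering(wavelength_nm: float, radius_nm: float,
                        ref_wavelength_nm: float = 870.0,
                        ref_radius_nm: float = 127.0) -> float:
    """Scattered-light intensity relative to a reference state,
    per the I proportional to lambda^-8 * R^7 relationship."""
    return ((ref_wavelength_nm / wavelength_nm) ** 8
            * (radius_nm / ref_radius_nm) ** 7)

# Halving the wavelength raises scattering 2^8 = 256-fold; doubling the
# average crystal radius raises it 2^7 = 128-fold -- hence the transmittance
# drop at short wavelengths and after longer heat treatments.
short_wavelength = relative_scattering(435.0, 127.0)
larger_grains = relative_scattering(870.0, 254.0)
```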
Conclusions
Glass ceramics with the Na3Gd(PO4)2 crystalline phase were prepared by the melt crystallization method for the first time. The optimum heat treatment condition was determined to be crystallization at 730 °C for 2 h. The transmittance of the glass-ceramics in the visible region is 80%, and the crystal grain size is 254 nm. With increasing heat treatment time, the transmittance of the glass-ceramics decreases and the refractive index increases.
Simulated microgravity inhibits L-type calcium channel currents partially by the up-regulation of miR-103 in MC3T3-E1 osteoblasts
L-type voltage-sensitive calcium channels (LTCCs), particularly Cav1.2 LTCCs, play fundamental roles in cellular responses to mechanical stimuli in osteoblasts. Numerous studies have shown that mechanical loading promotes bone formation, whereas the removal of this stimulus under microgravity conditions results in a reduction in bone mass. However, whether microgravity exerts an influence on LTCCs in osteoblasts and whether this influence is a possible mechanism underlying the observed bone loss remain unclear. In the present study, we demonstrated that simulated microgravity substantially inhibited LTCC currents and suppressed Cav1.2 at the protein level in MC3T3-E1 osteoblast-like cells. In addition, reduced Cav1.2 protein levels decreased LTCC currents in MC3T3-E1 cells. Moreover, simulated microgravity increased miR-103 expression. Cav1.2 expression and LTCC current densities both significantly increased in cells that were transfected with a miR-103 inhibitor under mechanical unloading conditions. These results suggest that simulated microgravity substantially inhibits LTCC currents in osteoblasts by suppressing Cav1.2 expression. Furthermore, the down-regulation of Cav1.2 expression and the inhibition of LTCCs caused by mechanical unloading in osteoblasts are partially due to miR-103 up-regulation. Our study provides a novel mechanism for microgravity-induced detrimental effects on osteoblasts, offering a new avenue to further investigate the bone loss induced by microgravity.
The maintenance of bone mass and the development of skeletal architecture are dependent on mechanical stimulation. Numerous studies have shown that mechanical loading promotes bone formation in the skeleton, whereas the removal of this stimulus during immobilization or in microgravity results in reduced bone mass. Microgravity, which is the condition of weightlessness that is experienced by astronauts during spaceflight, causes severe physiological alterations in the human body. One of the most prominent physiological alterations is bone loss, which leads to an increased fracture risk. Long-term exposure to a microgravity environment leads to enhanced bone resorption and reduced bone formation over the period of weightlessness 1,2 . An approximately 2% decrease in bone mineral density after only one month, which is equal to the loss experienced by a postmenopausal woman over one year, occurs in severe forms of microgravity-induced bone loss 3 . Experimental studies have shown that real or simulated microgravity can induce skeletal changes that are characterized by cancellous osteopenia in weight-bearing bones 4,5 , decreased cortical and cancellous bone formation 5-7 , altered mineralization patterns 8 , disorganized collagen and non-collagenous proteins 9,10 , and decreased bone matrix gene expression 11 . Decreased osteoblast function has been thought to play a pivotal role in the process of microgravity-induced bone loss. Both in vivo and in vitro studies have provided evidence of decreased matrix formation and maturation when osteoblasts are subjected to simulated microgravity 12,13 . The mechanism by which microgravity, which is a form of mechanical unloading, has detrimental effects on osteoblast functions remains unclear and merits further research.
Unfortunately, conducting well-controlled in vitro studies in sufficient numbers under real microgravity conditions is difficult and impractical because of the limited and expensive nature of spaceflight missions. Thus, several ground-based systems, particularly clinostats, have been developed to simulate microgravity in cultured cells in order to investigate the pathophysiology of spaceflight. A clinostat simulates microgravity by continuously reorienting the gravity vector before the cell has sufficient time to sense it; the cell therefore senses no net gravity vector, producing effects similar to those of weightlessness. This method is called gravity-vector averaging 14 .
Calcium is an important osteoblast regulator, and calcium channels are clearly associated with the regulation of osteoblast functions. Voltage-sensitive calcium channels (VSCCs), particularly LTCCs, which selectively allow Ca2+ to cross the plasma membrane, are key regulators of intracellular Ca2+ homeostasis in osteoblasts 15 . LTCCs are composed of the pore-forming α1 subunit and the auxiliary α2δ and β subunits; LTCCs in osteoblasts are devoid of the γ subunit 16 . The α1 subunit determines the fundamental properties of individual VSCCs and has four homologous domains, I-IV, each with six transmembrane segments that are linked by cytoplasmic loops, with intracellular NH2 and COOH termini 17 . Among the 10 known α1 subunits, the L-type Cav1.2 α1C subunit is the most abundant and is the primary site for Ca2+ influx into growing osteoblasts 15,18 .
LTCCs, particularly Cav1.2 LTCCs, play fundamental roles in cellular responses to external stimuli, including mechanical forces and hormonal signals, in osteoblastic lineage bone cells 17,19 . Several lines of evidence have shown that bone density increases 20 and that bone resorption decreases when these calcium channels are activated in osteoblasts 21 . The application of cyclic strain to the substratum results in the increased incorporation of calcium in ROS 17/2.8 cell cultures, and this response is diminished in the presence of verapamil, which is a blocker of LTCCs 22 . The administration of the LTCC antagonists verapamil and nifedipine can substantially suppress mechanical loading-induced increases in bone formation in rats, suggesting that LTCCs mediate mechanically induced bone adaptation in vivo 23 . In periosteal-derived osteoblasts, the levels of the extracellular matrix proteins osteopontin and osteocalcin increased within 24 h post-load when strain was applied alone or in the presence of the LTCC agonist Bay K8644; this mechanically induced increase in osteopontin and osteocalcin was inhibited by nifedipine 24 . In addition, physiological hormones such as parathyroid hormone and activated vitamin D3 also modulate bone calcium homeostasis via LTCCs 25,26 . Thus, LTCCs play important roles in regulating osteoblast function.
Recent studies have shown that many factors participate in LTCC regulation. MicroRNA (miRNA), which is a small non-coding RNA molecule, has become the subject of many studies and functions in the silencing and post-transcriptional regulation of gene expression 27,28 . miRNAs function via base-pairing with complementary sequences within mRNA molecules 29 . Thus, these mRNA molecules are silenced by one or more of the following processes: the cleavage of the mRNA strand into two pieces, the destabilization of the mRNA through the shortening of its poly (A) tail, and decreased translation efficiency of the mRNA into proteins by ribosomes 29,30 . miR-1 31,32 , miR-137 33,34 , miR-328 35 , miR-155 36 , miR-145 37 , and miR-103 38 participate in regulating Cav1.2 expression in several types of cells, whereas their functions in osteoblasts have not been confirmed.
Taken together, these data suggest that LTCCs have an important role in osteoblast function and that LTCCs are highly sensitive to mechanical stimulation 39 . In addition, LTCCs in osteoblasts may be regulated by miRNAs. However, to our knowledge, whether microgravity exerts an influence on LTCCs in osteoblasts and the possible mechanisms underlying this effect remain unclear. In the present study, we tested the hypothesis that simulated microgravity inhibits LTCCs in osteoblasts using patch-clamp analyses of whole-cell Ca2+ currents in MC3T3-E1 osteoblast-like cells under simulated microgravity and normal gravity conditions. In addition, we used quantitative real-time PCR (QPCR) and specific immunostaining approaches to examine the effects of simulated microgravity on Cav1.2 subunit expression. Moreover, we assessed the role of miR-103 in mediating the expression of the Cav1.2 subunit and the properties of LTCCs in osteoblasts.
Results
Simulated microgravity attenuates the Bay K8644-induced increase in the intracellular calcium concentration ([Ca2+]i). We performed calcium imaging to test for changes in [Ca2+]i induced by Bay K8644 to determine whether simulated microgravity can affect LTCCs in MC3T3-E1 cells. The fluorescence intensity increased substantially within one second after the application of 10 μM Bay K8644 to the culture solution (Figure 1a and 1b). However, the effect of Bay K8644 on intracellular calcium dramatically decreased when the cells were pretreated with simulated microgravity (Figure 1c and 1d). The change in the fluorescence intensity ratio (R = [(Fmax − F0)/F0] × 100%) of the control group was 2.48 ± 0.52, and the ratio of the simulated microgravity group was 1.57 ± 0.23. The difference between the ratios of the two groups is statistically significant (P < 0.05, Figure 1e). In addition, 75.3% ± 9.7% of the cells under simulated microgravity conditions and 80.7% ± 4.6% of the cells in the control group responded to Bay K8644 when the cells were screened for [Ca2+]i changes, as shown in Figure 1f. The difference in the percentage of cells responding to Bay K8644 between the two groups was not statistically significant (P > 0.05).
Simulated microgravity reduces LTCC currents in osteoblasts.
Electrophysiological recordings were performed on trypsinized cells to further confirm the influence of simulated microgravity on LTCCs in MC3T3-E1 cells. Figure 2 illustrates typical whole-cell LTCC currents recorded from osteoblasts from the control (Figure 2a) and simulated microgravity (Figure 2b) groups. The results show a reduction in LTCC currents due to simulated microgravity in the absence or presence of Bay K8644. The peak inward current was recorded at +10 mV for both control and simulated microgravity cells. The application of 10 μM Bay K8644 caused the current amplitude to increase by approximately 2-fold and to activate more steeply and at more negative potentials, whereas the application of 1 μM nifedipine suppressed the inward currents almost completely (Figure 2a and 2b). These properties suggest that the recorded inward currents were Ba2+ currents through LTCCs.
Because cell size may affect the current amplitude, the currents were normalized for membrane capacitance (Cm) as an indirect measurement of cell size and were expressed in picoamperes (pA) per picofarad (pF). The inward currents were smaller at all command potentials in simulated microgravity compared with the control group regardless of whether the LTCCs were activated by Bay K8644 (Figure 2c and 2d; the I-V relation, expressed in terms of current density, was calculated using the estimated Cm). The LTCC current densities of the MC3T3-E1 cells of the simulated microgravity group were considerably smaller compared with those of the control group (Figure 2e). The mean peak current densities at +10 mV in the simulated microgravity and control groups were −2.41 ± 0.38 and −3.52 ± 0.48 pA/pF, respectively (P < 0.05, Figure 2e). The application of 10 μM Bay K8644 caused the maximum inward current density to increase by 1.5-fold, with no change in the maximal activation voltage (Figure 2f). The mean peak current densities in cells of the simulated microgravity and control groups were −3.24 ± 0.32 and −5.43 ± 0.49 pA/pF, respectively (P < 0.05, Figure 2f), in the presence of Bay K8644, indicating an approximately 2-fold decrease in sensitivity to Bay K8644 in the simulated microgravity group compared with the control.
Simulated microgravity down-regulates Cav1.2 but up-regulates its transcript level. The alteration of LTCC current and activity involves several significant components. The L-type Cav1.2 subunit is known to play a central role in the regulation of both LTCC current and activity; however, the roles of Cav1.2 in mediating the function of LTCCs under real or simulated microgravity conditions remain unclear. Therefore, we investigated whether Cav1.2 expression was altered under simulated microgravity conditions. We performed immunostaining for the Cav1.2 subunit in MC3T3-E1 cells to study the expression and cellular localization of Cav1.2 under simulated microgravity conditions. Figure 3 shows immunostaining for the Cav1.2 subunit in MC3T3-E1 cells before and after exposure to 48 h of simulated microgravity conditions. Control cells stained for Cav1.2 showed abundant plasma membrane and intracellular localization, particularly on the cell surface (Figure 3b and 3c). In contrast, 48 h of simulated microgravity decreased immunostaining for Cav1.2 (Figure 3f and 3g). Intracellular staining persisted but was less intense than that observed in control cells, and the staining for Cav1.2 in the cell periphery markedly decreased (Figure 3f and 3g). Images were compared with cells that had been incubated with Alexa Fluor 488-conjugated secondary antibody in the absence of primary antibody to determine the specificity of staining (Figure 3d). The Cav1.2 knockdown reduces calcium currents. We examined LTCC currents by knocking down Cav1.2 expression to further clarify whether the alterations in Cav1.2 expression are involved in the reduction of LTCC currents in osteoblasts. Western blotting was used to evaluate gene knockdown efficiency following siRNA transfection.
SCIENTIFIC REPORTS | 5 : 8077 | DOI: 10.1038/srep08077
As shown in Figure 5a, siRNA treatment resulted in an approximately 60% suppression of the protein at 48 h post-transfection, with significant suppression lasting up to 72 h (P < 0.05). Therefore, the cells were subjected to patch clamp at 48 h post-transfection, which is the period at which Cav1.2 expression was maximally suppressed. LTCC current densities were significantly lower at all command potentials in cells receiving Cav1.2 siRNA compared with scrambled siRNA, regardless of whether the LTCCs were activated by Bay K8644 (Figure 5b and 5c). The difference in the mean peak current densities at +10 mV between the Cav1.2 knockdown (−1.58 ± 0.26 pA/pF) and the control cells (−2.76 ± 0.34 pA/pF) was significant (P < 0.05, Figure 5d). Moreover, in the presence of Bay K8644, the mean peak current densities in knockdown and control cells were −2.72 ± 0.34 and −4.75 ± 0.44 pA/pF, respectively, and the difference between the two groups was significant (P < 0.05, Figure 5e).

miR-103 is up-regulated under simulated microgravity conditions. All six miRNAs that have been reported to mediate Cav1.2 expression were examined by QPCR to ascertain which miRNA family is relevant to the alteration in Cav1.2 expression under simulated microgravity conditions. Figure 6 shows that miR-103 was remarkably up-regulated in the simulated microgravity group compared with controls (P < 0.05). Other than miR-103, the remaining miRNAs showed no significant differences between the two groups (P > 0.05, Figure 6). These findings indicate that miR-103 may be involved in regulating Cav1.2 expression under simulated microgravity conditions.

miR-103 inhibition partially rescues the decrease in Cav1.2 induced by simulated microgravity. To confirm the effect of miR-103 on Cav1.2 expression under simulated microgravity conditions, a miR-103 inhibitor was transfected into MC3T3-E1 cells, and western blot analyses were performed to test for Cav1.2 expression.
miR-103 expression was significantly down-regulated (P < 0.05, Figure 7a) in miR-103 inhibitor-transfected cells. Under simulated microgravity conditions, Cav1.2 expression significantly increased in miR-103 inhibitor-transfected cells compared with that of miR-103 negative control-transfected cells (P < 0.05, Figure 7b); however, Cav1.2 expression was not restored to control levels. In addition, the miR-103 inhibitor had no effects on Cav1.2 expression in cells under normal gravity conditions (P > 0.05, Figure 7b). These data suggest that miR-103 partially regulates Cav1.2 expression in MC3T3-E1 cells under simulated microgravity conditions.
A miR-103 inhibitor partially counteracts the decrease in LTCC currents induced by simulated microgravity. Next, the influence of miR-103 on LTCC currents was investigated to further assess the role of miR-103 in the expression of Cav1.2. Under normal gravity conditions, the inward currents did not differ between the negative control group (Figure 8a) and the miR-103 inhibitor group (Figure 8b). However, the inward currents were larger at all command potentials in the miR-103 inhibitor group (Figure 8d) compared with the negative control group (Figure 8c) under simulated microgravity conditions in the absence or presence of Bay K8644. The LTCC current densities in the miR-103 inhibitor-transfected cells were significantly larger compared with those of the negative control group under simulated microgravity conditions (P < 0.05, Figure 8e and 8f). The difference in the mean peak current densities at +10 mV between the miR-103 inhibitor group (−2.86 ± 0.33 pA/pF) and the negative control group (−2.02 ± 0.38 pA/pF) was significant (P < 0.05, Figure 8e). The application of 10 μM Bay K8644 caused the maximum inward current density to increase by 1.6-fold with no change in the maximal activation voltage. In the presence of Bay K8644, the mean peak current densities in osteoblasts from the two groups were −4.34 ± 0.43 and −2.93 ± 0.32 pA/pF, and the difference between the two groups was significant (P < 0.05, Figure 8f). Similar to the finding for Cav1.2 expression, miR-103 inhibitor transfection could not restore the LTCC currents back to the control levels (P < 0.05, Figure 8e and 8f). Additionally, the miR-103 inhibitor had no effects on the LTCC currents in cells under normal gravity conditions (P > 0.05, Figure 8e and 8f).
Discussion
Although the catabolic effects of microgravity on bone have been well documented, the mechanism by which mechanical unloading results in osteoblast dysfunction remains unclear. We have postulated that simulated microgravity has adverse effects on osteoblasts by inhibiting LTCCs. Our results indicate that simulated microgravity substantially inhibits LTCCs in MC3T3-E1 osteoblast-like cells by suppressing Cav1.2 expression. Furthermore, we demonstrated that the up-regulation of miR-103 induced by simulated microgravity is involved in this suppression.

Orbital spaceflight has clearly demonstrated that the absence or the reduction of gravity has significant detrimental effects on astronauts. Major health hazards in astronauts include cardiovascular deconditioning and bone loss. Skeletal deconditioning, such as reduced bone mass, altered mineralization patterns and decreased bone matrix gene expression, has been described in astronauts and in rat models of simulated microgravity. The skeletal system impairment induced by mechanical unloading, which is one of the main limitations of long-term spaceflight, has received general attention from researchers 40 . LTCCs are involved in the production and release of paracrine/autocrine factors 41,42 and in changes in gene expression 43 in response to mechanical stimulation. Li et al. reported that LTCC inhibition significantly attenuates the bone formation that is associated with mechanical loading in rats and mice 44 . These findings suggest that LTCCs play important roles in the regulation of osteoblast function and bone metabolism. In the present study, we investigated the effects of simulated microgravity on LTCC currents in cultured MC3T3-E1 cells using whole-cell patch clamp recordings. By measuring inward currents, we found that simulated microgravity significantly reduced LTCC currents.
This finding was also confirmed by calcium imaging, which showed that simulated microgravity significantly reduced Bay K8644-induced intracellular calcium increases. These observations are consistent with previous studies. Numerous bone anabolic regulatory factors, including parathyroid hormone 45,46 , vitamin D3 45 , and mechanical stimuli 47,48 , are able to activate and enhance LTCC currents. Therefore, microgravity, which is a form of mechanical unloading, may reduce LTCC currents in osteoblasts.
Many factors can regulate LTCCs. The major LTCC subunit in osteoblasts is Cav1.2 15,18 . Recent studies have shown that amyloid precursor protein (APP) inhibits LTCCs by down-regulating Cav1.2 expression in GABAergic inhibitory neurons 49 . In addition to the APP and CaMKII studies mentioned above, other reports have investigated the regulation of the Cav1.2 channel protein. For example, selenium deficiency increases oxidative stress levels in the mouse myocardium, which is positively related to the up-regulation of Cav1.2 genes and proteins 51 . Wang et al. demonstrated that Cav1.2 mRNA and protein levels increase in ROS cells following a 24-h incubation with a permeable analog of cAMP 52 . These experiments suggested that changes in Cav1.2 expression that are induced by different factors coincide with altered Cav1.2 mRNA expression. However, our findings indicated that increased Cav1.2 mRNA expression is not consistent with decreased Cav1.2 protein expression in MC3T3-E1 cells under simulated microgravity conditions. Therefore, this result suggested that a mechanism of post-transcriptional regulation might participate in regulating Cav1.2 protein expression. miRNA, which is a small non-coding RNA molecule, has roles in RNA silencing and post-transcriptionally regulating gene expression. Recently, six miRNAs have been linked to the regulation of Cav1.2 expression under different experimental conditions using a luciferase-based reporter assay. Cacna1c, which encodes the LTCC Cav1.2 subunit, is the gene target of miR-137 during the regulation of adult neurogenesis and neuron maturation 33,34 . Other studies have shown that miR-1 is associated with heart defects and atrioventricular block through mediating Cav1.2 expression 31,32 . Lu et al. reported that miR-328 contributes to the adverse atrial electric remodeling in atrial fibrillation through targeting the L-type Ca2+ channel genes Cacna1c and Cacnb1, which encode the α1C and β1 subunits, respectively 35 .
Moreover, miR-155 36 , miR-145 37 , and miR-103 38 have also been reported to play a crucial role in regulating Cav1.2 expression. We examined all six of these miRNAs by real-time PCR to determine which may be relevant to the altered Cav1.2 expression in MC3T3-E1 cells under simulated microgravity conditions. Our results showed that simulated microgravity increases miR-103 expression but has no effects on the other miRNAs. This finding indicated that miR-103 might be involved in regulating Cav1.2 expression under simulated microgravity conditions.
We studied the effects of treating MC3T3-E1 cells with a miR-103 inhibitor to further determine the role of miR-103 in regulating Cav1.2 expression under simulated microgravity conditions. Our data showed that miR-103 inhibition remarkably increased the expression of Cav1.2 subunits and LTCC currents in MC3T3-E1 cells under simulated microgravity conditions; however, this treatment could not completely counteract the decreases in Cav1.2 expression and LTCC currents that were induced by simulated microgravity. These results are consistent with the finding by Favereaux et al., who demonstrated that the knockdown or overexpression of miR-103 up- or down-regulates, respectively, the level of Cav1.2 expression in neurons 38 . miRNA functions in the post-transcriptional regulation of gene expression via base-pairing with mRNA molecules 29 . miRNA silences mRNA by one or more of the following processes: the cleavage of the mRNA strand into two pieces, the destabilization of the mRNA through the shortening of its poly (A) tail, and reduced translation efficiency of the mRNA into proteins by ribosomes 29,30 . In this study, simulated microgravity down-regulated Cav1.2 expression but up-regulated its transcript level, suggesting that miR-103 decreases Cav1.2 subunit expression by blocking the translation of the mRNA into protein. Collectively, these studies suggest that the up-regulation of miR-103 in simulated microgravity is at least partially involved in the regulation of Cav1.2 subunit expression and LTCC currents in MC3T3-E1 cells. In addition to the miRNAs mentioned, there have been other reports investigating Cav1.2 expression at the post-transcriptional level. Recent studies have shown that Cav1.2 contains an endoplasmic reticulum retention motif in the proximal C-terminal region, and the Cavβ subunit has a role in regulating proteasomal degradation of this subunit 53 . Moreover, Rougier et al.
showed that Nedd4-1 promotes the sorting of newly synthesized Cav1.2 for degradation by both the proteasome and the lysosome 54 . However, whether the ubiquitination pathway or other possible mechanisms regulate Cav1.2 expression at the post-transcriptional level in osteoblasts under microgravity conditions remains to be investigated.
In conclusion, simulated microgravity inhibits LTCCs in MC3T3-E1 cells via the suppression of Cav1.2 expression. Moreover, the down-regulation of Cav1.2 expression and the inhibition of LTCCs are partially related to the up-regulation of miR-103 induced by simulated microgravity. To our knowledge, this study is the first to demonstrate the relation between the inhibition of LTCCs and the up-regulation of miR-103 under conditions of simulated microgravity in MC3T3-E1 cells in vitro. This work may provide a novel mechanism of microgravity-induced adverse effects on osteoblasts, offering a new avenue to further investigate microgravity-induced bone loss. A more detailed analysis of the mechanisms accounting for the suppressive effect of simulated microgravity on Cav1.2 expression is under investigation.
Methods
Materials. Unless otherwise stated, all chemicals and reagents used in this study were obtained from Sigma Chemical Company.
Cell culture. Mouse osteoblast-like MC3T3-E1 cells were grown in α-minimum essential medium (α-MEM; Hyclone) containing 10% fetal calf serum (Hyclone), 100 U/ml penicillin G, and 100 μg/ml streptomycin. The cells were maintained in a humidified incubator at 37°C with 5% CO2 and were subcultured every 72 h.
Clinorotation to simulate microgravity. The clinostat is an effective, ground-based tool that is used to simulate microgravity. The clinostat consists of two groups of turntables: one vertical turntable and one horizontal turntable. The vertical chambers rotate around the horizontal axis, which designates clinorotation. Clinorotation mimics certain aspects of a microgravity environment by nullifying the integrated gravitational vector through continuous averaging. The horizontal chambers rotate around the vertical axis, which designates rotational control. The cells were exposed to clinorotation for 48 h at 24 rpm. In the present study, the cells were seeded at a density of 1 × 10^5 cells on 2.5 cm × 3.0 cm coverslips that were placed in 6-well plates. After the cells grew for 24 h and adhered to the coverslips, the coverslips were inserted into the fixture of the chambers, which were subsequently filled with α-MEM with 10% FBS and aspirated to eliminate air bubbles. The chambers were divided into two groups: horizontal rotation control and clinorotation. The clinostat was placed in an incubator at 37°C 55,56 .
Calcium imaging. After 48 h of incubation, the cells were loaded with Fluo-3-AM. For this manipulation, each chamber was washed twice with 1 ml of HEPES-buffered salt solution (HBSS). Following the wash, 5 μM Fluo-3-AM in HBSS was added, and the cells were incubated for 40 minutes in a 5% CO2 humidified incubator in the dark. Then, changes in intracellular Ca2+ levels in individual cells were measured using a digital imaging system equipped with a laser confocal scanning microscope (FluoView 1000, Olympus). The cells were excited at a wavelength of 488 nm, and the emission fluorescence was recorded at 525 nm. Images were acquired at a rate of 1 s per frame for up to 1 min. Once the cells were focused and a stable baseline cytosolic calcium level was recorded, the HBSS was exchanged for a high potassium HBSS, which had 55 mM KCl instead of 6 mM and 70 mM NaCl instead of 120 mM. This high potassium HBSS also contained 10 μM Bay K8644 57 .
Image analysis was performed using customized sequences from Bio-Rad Comos software and the confocal image analysis system. Changes in fluorescence were normalized by calculating the percent change ratio (R) from the resting level before stimulation using the equation R = [(Fmax − F0)/F0] × 100%, where F0 is the mean of several determinations of fluorescence intensity taken before the application of high potassium HBSS, and Fmax is the maximum fluorescence intensity after 10 μM Bay K8644 was added 24 .
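The normalization above is a one-line formula; the following sketch simply restates it in code (the function name and the sample intensities are illustrative, not values from the study):

```python
def fluorescence_change_ratio(f_max, f0):
    """Percent change ratio R = [(F_max - F_0) / F_0] * 100."""
    if f0 == 0:
        raise ValueError("baseline fluorescence F_0 must be non-zero")
    return (f_max - f0) / f0 * 100.0

# Illustrative intensities in arbitrary units: baseline 1.0, peak 2.5
r = fluorescence_change_ratio(2.5, 1.0)  # 150.0
```

A baseline of zero is rejected explicitly because the ratio is undefined in that case.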
Measurement of the LTCC currents. Whole-cell currents were recorded with an amplifier (CEZ-2300, Nihon Kohden) and a version interface (Axon Instruments) using patch clamp techniques. Command-voltage protocols and data acquisition were performed with pCLAMP software (version 8.0, Axon Instruments). Patch pipettes (tip resistance 2-6 MΩ when filled with a pipette solution) were fabricated on an electrode puller (Narishige) using borosilicate glass capillary tubing. Cell membrane capacitance (Cm) and access resistance (Ra) were estimated from the capacitive current transient evoked by applying a 20 mV pulse for 40 ms from a holding potential of −60 mV to −40 mV.
The cell was held at −40 mV and then stepped in 10 mV increments from −30 to +60 mV. Voltage steps were 250 ms in duration, and 2 s intervals were allowed between steps. Nonspecific membrane leakage and residual capacitive currents were subtracted using the p/4 protocol. Ba2+ replaced Ca2+ as the charge carrier to increase unitary currents, and the divalent cation concentration was elevated in the bath solution. Barium was used as a current carrier for two reasons: barium current through L-type channels is known to be larger than calcium currents; and barium inhibits potassium channel activation 58,59 . Two types of external solutions, solutions A and B, were used. Solution A was used while making a gigaohm seal between the recording pipette and cell surface. This solution contained (in mM) 120 NaCl, 30 mannitol, 3 K2HPO4, 1 MgSO4, 30 HEPES and was supplemented with 0.1% bovine serum albumin and 0.5% glucose, with the pH corrected to 7.4 with NaOH. After a seal of 2 GΩ was obtained, the perfusion fluid was changed to solution B during current recording. Solution B contained (in mM) 108 BaCl2 and 10 HEPES, with the pH corrected to 7.6 with Ba(OH)2. Cs+ was used in the pipette solution to minimize outward K+ current. The pipette solution contained (in mM) 150 CsCl, 5 EGTA, 10 HEPES, 5 Na2ATP, and 10 D-glucose, with the pH adjusted to 7.2 with CsOH 24,58-60 .
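The step protocol and the current-density normalization used throughout the Results can be sketched as follows (function names are ad-hoc; the example current and capacitance values are invented for illustration):

```python
def command_potentials(start_mv=-30, stop_mv=60, step_mv=10):
    """Command potentials for the I-V protocol: -30 to +60 mV in 10 mV
    increments, applied from a holding potential of -40 mV."""
    return list(range(start_mv, stop_mv + step_mv, step_mv))

def current_density(peak_current_pa, capacitance_pf):
    """Normalize a peak current by membrane capacitance, giving pA/pF."""
    return peak_current_pa / capacitance_pf

steps = command_potentials()            # [-30, -20, ..., 50, 60]
density = current_density(-36.0, 12.0)  # -3.0 pA/pF (illustrative values)
```

Normalizing by capacitance, as in the text, removes the dependence of raw current amplitude on cell size.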
Immunocytochemistry and fluorescence microscopy. The detection of the Cav1.2 subunit was performed using a rabbit polyclonal antibody against Cav1.2, which was obtained from Alomone Laboratories. The cells were fixed in 4% (vol/vol) paraformaldehyde and then incubated in blocking buffer containing 5% (vol/vol) normal donkey serum, 0.3% (vol/vol) Triton X-100, and PBS to permeabilize and block nonspecific binding. The primary antibody was diluted 1:100 with 1% (vol/vol) normal donkey serum and 0.1% (wt/vol) BSA in PBS. Then, the cells were incubated in the dark for 1 h at room temperature using Alexa Fluor 488-conjugated (Invitrogen) secondary antibody (1:200). The cells were counterstained for 10 min in the dark with the nuclear dye ToPro3 (Molecular Probes), which was diluted 1:4,000 in PBS. The fluorescence intensity was analyzed using an inverted microscope linked to a confocal scanning unit (FluoView 1000, Olympus) 15 .
Western blot analysis. The cells were lysed in RIPA buffer (Thermo) containing a protease inhibitor cocktail (Roche). Equal amounts of protein from each sample were added to a NuPage Bis-Tris polyacrylamide gel (Invitrogen) and run for 2 hours using MES SDS running buffer (Invitrogen). Then, the proteins were transferred to nitrocellulose membranes and blocked for 5 hours at room temperature with milk (5% w/v) in Tris-buffered saline (TBS) with Tween-20 (0.1%; TBS-T). The blots were incubated with a primary antibody (1:200) directed against the Cav1.2 subunit overnight at 4°C with oscillation. The blots were incubated with horseradish peroxidase-conjugated secondary antibody (1:10,000; Jackson). The secondary antibodies were detected and visualized using the Super Signal West substrate (Fisher Scientific). Densitometry measurements were made using Tanon imaging software 61 .
mRNA and miRNA expression assays. Total RNA from MC3T3-E1 was isolated using TRIzol reagent (Invitrogen). The concentration and purity of total RNA were determined by measuring the absorbance at 260 and 280 nm using a NanoDrop ND-1000 Spectrophotometer.
For mRNA, cDNA was synthesized using a Prime Script RT Kit (TaKaRa). The expression levels of target genes were determined quantitatively using an ABI 7500 real-time PCR system with SYBR Premix (TaKaRa). Amplification was performed for 40 cycles under the following conditions: 95°C for 45 s, followed by 40 cycles at 58°C for 45 s and 72°C for 60 s. The primer pairs were as follows: Cacna1c

For miRNA, cDNA was synthesized using a miRNA First Strand Synthesis kit (Agilent Technologies). Then, an aliquot of the RT reaction was used as a template in a standard real-time RT-PCR amplification using SYBR Premix, the universal reverse primer 5′-TGG TGT CGT GGA GTCG-3′, and the miR-103 (mimat0000546)-specific forward primer 5′-ACA CTC CAG CTG GGA GCA GCA TTG TAC-3′. Amplification was performed for 40 cycles under the following conditions: 95°C for 2 min, followed by 40 cycles at 95°C for 10 s and 60°C for 40 s 31,50 .
The quantification of gene expression was performed using the comparative threshold cycle (ΔΔCT) method. GAPDH was used as a control for Cav1.2 mRNA quantification, and small nuclear RNA U6 was used as a control for miRNA samples 35,62 .
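The comparative CT calculation can be illustrated in a few lines; the CT values below are invented for illustration and are not measurements from the study:

```python
def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the comparative threshold cycle method:
    dCT = CT(target) - CT(reference) within each sample,
    ddCT = dCT(treated) - dCT(control),
    fold change = 2 ** (-ddCT)."""
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(dct_treated - dct_control)

# Hypothetical example: target vs. a reference gene such as GAPDH
fold = ddct_fold_change(24.0, 20.0, 25.0, 20.0)  # -> 2.0 (doubled expression)
```

Here dCT(treated) = 4 and dCT(control) = 5, so ddCT = −1 and the fold change is 2^1 = 2.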
Synthesis and transfection of miRNA inhibitor. The miR-103 inhibitor was designed and synthesized by RiboBio Corporation. The sequence of miR-103 inhibitor is 3'-UCA UAG CCC UGU ACA AUG CUG CU-5'. Five nucleotides or deoxynucleotides at both ends of the antisense molecules were locked.
Osteoblasts were transfected with inhibitor or negative control using Lipofectamine 2000. The medium was replaced at 6 h after transfection. The cells were collected for protein assay or patch clamp at 48 h after transfection 35 .
Sentiment analysis of film reviews based on CNN-BLSTM-Attention
In order to accurately analyse the emotional tendency of film reviews, help investors make decisions and improve the quality of works, an optimized CNN-BLSTM-Attention sentiment analysis model was designed. The CNN model has a strong ability to capture the local correlation of spatial or temporal structures. The RNN model can process sequences of any length and capture long-range dependencies, but it is prone to the vanishing gradient problem. The CNN-BLSTM-Attention sentiment analysis model designed in this paper, which combines the advantages of CNN and RNN, is more accurate when used to analyze the sentiment characteristics of texts. The experimental results show that the accuracy of the optimized CNN-BLSTM-Attention model is better than that of the CNN and RNN models in the experiment, which proves the effectiveness of the analysis method in this paper and can provide some guidance for the optimization of related sentiment analysis models.
Introduction
With the rapid development of Internet technology and the rise of social networks, more and more people choose to express their opinions on film and television works through the Internet, which makes it easier for filmmakers to understand the public's opinions and evaluations of movies. Under this kind of network environment, a large number of movie reviews with personal emotions are generated. Analyzing these texts with personal emotions is a beneficial work for the film industry and consumers.
The main task of sentiment analysis is to complete the classification of sentimental text. Text sentiment analysis mainly includes text classification, information extraction and text generation [1]. For sentiment analysis, a large number of related sentiment classification methods have emerged. For example, Kim proposed the CNN (Convolutional Neural Network) model in 2014 [2], which used word vectors to classify text and achieved impressive results. Mikolov [3] and others proposed the RNN (Recurrent Neural Network) model in 2010. Later, on the basis of RNN, some scholars proposed the LSTM model, which is a variant of RNN. Convolutional neural networks (CNN) and recurrent neural networks (RNN) are the network models currently used in sentiment analysis.
In this paper, a new sentiment analysis model, CNN-BLSTM-Attention, is proposed through the investigation of various deep learning models; it outperforms the CNN and RNN models. CNN is a multi-layer neural network whose basic structure includes an input layer, a convolution layer, a pooling layer, a fully connected layer, and an output layer. In this paper, CNN is first used to train the data to obtain a training model; the training process is shown in Figure 1. To build the CNN model, multiple one-dimensional convolution kernels are defined first, and these kernels are then used to perform convolution calculations on the inputs. In this process, convolution kernels of different widths capture correlations among different numbers of neighboring words. Next, max-over-time pooling is applied to all output channels, and the pooled outputs of these channels are concatenated into a single vector. Finally, the concatenated vector is transformed into per-class outputs by the fully connected layer; a dropout layer is generally used at this step to mitigate overfitting.
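The convolution and max-over-time pooling steps just described can be sketched in a few lines of NumPy. This is a didactic sketch of the computation only, not the authors' MXNet implementation; all function and variable names are ours:

```python
import numpy as np

def text_cnn_features(embeddings, kernels):
    """Kim-style CNN feature extraction for one sentence.

    embeddings: (seq_len, embed_dim) matrix of word vectors.
    kernels: list of (width, embed_dim) one-dimensional convolution kernels.
    Returns one max-over-time pooled feature per kernel.
    """
    seq_len = embeddings.shape[0]
    features = []
    for kernel in kernels:
        width = kernel.shape[0]
        # slide the kernel over windows of `width` consecutive words
        conv = np.array([np.sum(embeddings[i:i + width] * kernel)
                         for i in range(seq_len - width + 1)])
        conv = np.maximum(conv, 0.0)   # ReLU activation
        features.append(conv.max())    # max-over-time pooling
    return np.array(features)          # concatenated feature vector
```

Kernels of different widths see different numbers of neighboring words, which is exactly the multi-width design described above; the pooled feature vector would then feed the dropout and fully connected layers.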
LSTM model
Recurrent neural networks (RNN) have a wide range of applications in processing serialized inputs and are a typical model in deep learning frameworks. The long short-term memory (LSTM) network [4] is a variant of the recurrent neural network. LSTM introduces three gates, namely the input gate, forget gate and output gate, together with memory cells of the same shape as the hidden state that record additional information.
For a minibatch input $X_t \in \mathbb{R}^{n\times d}$ at time step $t$ with previous hidden state $H_{t-1} \in \mathbb{R}^{n\times h}$, the input gate $I_t \in \mathbb{R}^{n\times h}$, forget gate $F_t \in \mathbb{R}^{n\times h}$, and output gate $O_t \in \mathbb{R}^{n\times h}$ of the time step are calculated as follows:

$I_t = \sigma(X_t W_{xi} + H_{t-1} W_{hi} + b_i)$, $F_t = \sigma(X_t W_{xf} + H_{t-1} W_{hf} + b_f)$, $O_t = \sigma(X_t W_{xo} + H_{t-1} W_{ho} + b_o)$,

where $W_{xi}, W_{xf}, W_{xo} \in \mathbb{R}^{d\times h}$ and $W_{hi}, W_{hf}, W_{ho} \in \mathbb{R}^{h\times h}$ are weight parameters and $b_i, b_f, b_o \in \mathbb{R}^{1\times h}$ are bias parameters. A characteristic of the ordinary LSTM model is that the current time step is determined by the earlier part of the sequence: information is passed from front to back through the hidden state. However, the current time step may sometimes also be determined by subsequent time steps. BLSTM handles this kind of information more conveniently by adding a hidden layer that passes information from back to front. The BLSTM model is shown in Figure 5.
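The gate equations above translate directly into code. The following is our own minimal NumPy sketch of a single LSTM step, including the candidate memory cell that completes the standard update; a BLSTM simply runs two such chains in opposite directions and concatenates their hidden states:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(X, H, C, params):
    """One LSTM time step.

    X: (n, d) input batch, H: (n, h) previous hidden state,
    C: (n, h) previous memory cell; params holds the weight/bias matrices.
    """
    Wxi, Whi, bi, Wxf, Whf, bf, Wxo, Who, bo, Wxc, Whc, bc = params
    I = sigmoid(X @ Wxi + H @ Whi + bi)        # input gate
    F = sigmoid(X @ Wxf + H @ Whf + bf)        # forget gate
    O = sigmoid(X @ Wxo + H @ Who + bo)        # output gate
    C_tilde = np.tanh(X @ Wxc + H @ Whc + bc)  # candidate memory cell
    C_new = F * C + I * C_tilde                # memory cell update
    H_new = O * np.tanh(C_new)                 # new hidden state
    return H_new, C_new
```

Because the forget gate can keep the memory cell almost unchanged across many steps, gradients flow over longer ranges than in a plain RNN, which is the property exploited by the model in this paper.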
CNN-BLSTM-Attention model
To address the weakness that the CNN model is only sensitive to local features, and that the RNN model is prone to vanishing gradients, we fused and transformed the CNN and RNN models and propose a new model combining the advantages of both. The workflow of the model is as follows. First, the texts are converted into the corresponding vocabulary indices. Second, these pass through the CNN's convolution layer, pooling layer and a Dropout operation, and then through the BLSTM layer and the Attention layer. Finally, after a fully connected layer and a Softmax layer, we obtain the final output. This model not only overcomes the CNN model's restriction to local features but also stores and propagates information through a chain-structured neural network by attaching the BLSTM network. Figure 2 is a schematic diagram of the model designed in this paper. Here, Dropout specifically refers to the inverted dropout method. Consider a hidden unit $h = \phi(x_1 w_1 + x_2 w_2 + \dots + x_n w_n + b)$ (4), where $\phi$ is the activation function, $x_1,\dots,x_n$ are inputs, $w_1,\dots,w_n$ are the weight parameters of the hidden unit, and $b$ is the bias parameter. Let $\xi$ be a random variable that equals $0$ with probability $p$ and $1$ with probability $1-p$. The new hidden unit under the discard method is $h' = \frac{\xi}{1-p}\, h$. Since $E(\xi) = 1-p$, we have $E(h') = \frac{E(\xi)}{1-p}\, h = h$; that is, the discard method does not change the expected value of its input. Since hidden-layer neurons are discarded at random during training, dropout acts as a regularizer and can be used to deal with overfitting.
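The expectation-preserving property of inverted dropout is easy to verify numerically. A short illustrative sketch (names ours):

```python
import numpy as np

def inverted_dropout(h, p, rng):
    """Inverted dropout: zero each unit with probability p and rescale
    the survivors by 1/(1-p), so the expected output equals the input."""
    if p == 0.0:
        return h
    mask = (rng.random(h.shape) >= p).astype(h.dtype)
    return mask * h / (1.0 - p)
```

Because the rescaling is done at training time, no compensation is needed at test time, where the layer is simply the identity.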
In addition, this paper also adds an attention mechanism on top of CNN and BLSTM. The key point of using the attention mechanism is to calculate the background (context) variable $c_{t'}$ as a weighted average of the hidden states, with the weights given by a softmax over alignment scores. The background variable then enters a gated recurrent unit whose reset gate, update gate, and candidate hidden state are

$R_{t'} = \sigma(W_r y_{t'-1} + U_r s_{t'-1} + C_r c_{t'} + b_r)$, (7)
$Z_{t'} = \sigma(W_z y_{t'-1} + U_z s_{t'-1} + C_z c_{t'} + b_z)$,
$\tilde{s}_{t'} = \tanh(W_s y_{t'-1} + U_s (s_{t'-1} \odot R_{t'}) + C_s c_{t'} + b_s)$,

where the $W$, $U$, $C$ with subscripts are the weight parameters and the $b$ with subscripts are the bias parameters of the gated recurrent unit, respectively.
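The background-variable computation can be sketched as follows. We use dot-product scores for brevity; the paper does not specify its exact scoring function, so this is an illustrative assumption:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # shift for numerical stability
    return e / e.sum()

def attention_context(query, keys):
    """Background (context) variable: a weighted average of the encoder
    hidden states `keys`, with softmax-normalized dot-product scores."""
    scores = keys @ query        # (T,) alignment scores
    alpha = softmax(scores)      # attention weights, sum to 1
    return alpha @ keys, alpha   # context vector and weights
```

The weights `alpha` make the model focus on the time steps of the BLSTM output that matter most for the sentiment decision.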
Data set
This paper uses Stanford's IMDb dataset (Stanford's Large Movie Review Dataset) as the dataset for text sentiment classification [5]. The dataset is split into a training set and a test set, each containing 25,000 movie reviews downloaded from IMDb. In each set, the numbers of reviews labeled "positive" and "negative" are equal.
Parameter setting
The experimental environment of this paper is Python 3.6.9, using the MXNet open-source deep learning framework. The hyperparameter settings used in the experiments are shown in Table 1:
Experimental results and analysis
The experimental results are shown in Table 2. The accuracy of the CNN model is the lowest; the BLSTM model is one percentage point higher than the CNN model. The CNN-BLSTM-Attention model optimized in this paper has a significantly higher accuracy, reaching 0.99. This shows that our optimized model is more accurate than the original CNN and RNN models.
Conclusions and prospects
The sentiment analysis of movie reviews has strong practical significance. This paper makes full use of the advantages of the CNN and LSTM models and proposes a new sentiment analysis model based on CNN and BLSTM with an added attention mechanism. Experimental results show that the optimized model in this paper is superior to the CNN and BLSTM models. However, due to the increased complexity, the training time of the model is greatly increased, and there is still room for improvement.
Optimum distance flag codes from spreads via perfect matchings in graphs
In this paper, we study flag codes on the vector space $\mathbb{F}_q^n$, being $q$ a prime power and $\mathbb{F}_q$ the finite field of $q$ elements. More precisely, we focus on flag codes that attain the maximum possible distance (optimum distance flag codes) and can be obtained from a spread of $\mathbb{F}_q^n$. We characterize the set of admissible type vectors for this family of flag codes and also provide a construction of them based on well-known results about perfect matchings in graphs. This construction attains both the maximum distance for its type vector and the largest possible cardinality for that distance.
Introduction
Random network coding is a model of communication in which intermediate nodes perform random linear combinations of the received vectors instead of simply routing them, as happens when using classical channels of communication. This process is especially vulnerable to error dissemination and, as a solution to this problem, in [13] Koetter and Kschischang introduced the concept of subspace codes as adequate error-correcting codes in random network coding. These codes are families of subspaces of an $n$-dimensional vector space over the finite field $\mathbb{F}_q$ endowed with a specific distance. If the dimension of all the subspaces in a code is fixed, we say that it is a constant dimension code.
In coding theory, the distance of a code is closely related to its error-correction capability. More precisely, a code with distance $d$ can detect $d-1$ errors and correct up to $\lfloor \frac{d-1}{2} \rfloor$ of them. The size of a code is also an important parameter, since it determines the number of different messages that can be encoded. Hence, one of the main problems in this area is the construction of codes with the largest possible size for a given minimum distance, or with the largest minimum distance for a given size. In the particular context of network coding, this fact motivates the search for subspace codes having the largest possible distance and the best size for that distance.
Spreads are objects coming from finite geometry, introduced by Segre in [25]. A $k$-dimensional spread (or $k$-spread) of $\mathbb{F}_q^n$ is just a collection of $k$-dimensional subspaces that pairwise intersect trivially and cover the space $\mathbb{F}_q^n$. It is well known that $k$-spreads exist if, and only if, $k$ divides $n$. In the network coding setting, $k$-spread codes are optimal codes in the previous sense: they attain the best distance for their dimension, and their cardinality is as large as possible.
When using subspace codes, every codeword (a subspace) is sent in a single use of the channel. In contrast, in [23,24], Nobrega and Uchôa-Filho present the notion of multishot codes, where codewords are sequences of $r$ subspaces of $\mathbb{F}_q^n$ and need $r$ successive uses of the channel (shots) to be sent. This approach makes it possible to construct codes with good parameters without modifying the field size $q$ or the dimension $n$. As a particular case of multishot codes, we have the class of flag codes. A flag on $\mathbb{F}_q^n$ is just a sequence $\mathcal{F} = (\mathcal{F}_1, \dots, \mathcal{F}_r)$ of nested subspaces of $\mathbb{F}_q^n$. The vector $(\dim(\mathcal{F}_1),\dots,\dim(\mathcal{F}_r))$ is called the type of the flag $\mathcal{F}$. Now, given integers $0 < t_1 < \dots < t_r < n$, a flag code of type $(t_1,\dots,t_r)$ on $\mathbb{F}_q^n$ is a nonempty collection of flags of this type. Flag codes in network coding appear for the first time in [19]. Later on, in [2], a study of flag codes attaining the best possible distance (optimum distance flag codes) is undertaken. In that work, these codes are characterized in terms of the constant dimension codes used in the different shots (the projected codes). Moreover, it was also shown that the presence of a spread code among the projected codes leads to constructions of optimum distance flag codes that also reach the largest possible size.
In particular, in [2], the authors build optimum distance flag codes with a $k$-spread as a projected code fixing the full type vector, that is, $(1,\dots,n-1)$. They conclude that such codes exist just if $n = 2k$, or $n = 3$ and $k = 1$. On the other hand, in this paper, we deal with the converse problem: given $n$ and a divisor $k$ of $n$, we look for conditions on the type vector of an optimum distance flag code on $\mathbb{F}_q^n$ having a $k$-spread as a projected code. We show that not every type vector $(t_1,\dots,t_r)$ is allowed for this purpose, only those satisfying $k \in \{t_1,\dots,t_r\} \subseteq \{1,\dots,k,\, n-k,\dots,n-1\}$. Observe that, for $n = 2k$, or $n = 3$ and $k = 1$, the full type vector is admissible. In fact, for $n = 2k$, we have $k = n-k$. However, this equality does not hold for an arbitrary divisor $k$ of $n$, and to overcome the gap between dimensions $k$ and $n-k$, we need to provide suitable nested subspaces of these dimensions. We solve this problem by using classical combinatorial results about the existence of perfect matchings in bipartite regular graphs. Finally, for any given admissible type vector, we get a construction of flag codes with the maximum distance and the largest cardinality among optimum distance flag codes of the corresponding type.
The paper is organized as follows. In Sect. 2, we recall some background on finite fields, constant dimension codes, flag codes and graphs. In Sect. 3, we determine the set of admissible type vectors for a flag code to attain the maximum possible distance and to have a $k$-spread as a projected code. Then, for those type vectors, we undertake the construction of such codes in several stages. First, we consider the type $(1, n-1)$ and construct optimum distance flag codes from the spread of lines, exploiting the existence of perfect matchings in bipartite regular graphs. Then, using the field reduction map, we translate the previous construction into the type $(k, n-k)$. Finally, by taking advantage of some properties satisfied by the mentioned map, we finish with the full admissible type $(1,\dots,k,\,n-k,\dots,n-1)$ and any other admissible type vector. Our codes have the best size for the given admissible type vector and the associated maximum distance. We complete this section with an example of our construction for the admissible type $(2,4)$ on $\mathbb{F}_2^6$ having a $2$-spread as the subspace code used at the first shot.
Preliminaries
We devote this section to recall some background we will need along this paper. This background involves finite fields, subspace and flag codes and graph theory.
Results on finite fields
Most of the following definitions and results about finite fields as well as the corresponding proofs can be found in [18].
Let $q$ be a prime power and $\mathbb{F}_q$ the finite field with $q$ elements. Consider $f(x) \in \mathbb{F}_q[x]$ a monic irreducible polynomial of degree $k$, let $\alpha \in \mathbb{F}_{q^k}$ be a root of $f(x)$ and let $P$ be the companion matrix of $f(x)$. Then we have that $\mathbb{F}_{q^k} \cong \mathbb{F}_q[\alpha]$, which allows us to realize the field $\mathbb{F}_{q^k}$ as the matrix algebra $\mathbb{F}_q[P] = \{a_0 I_k + a_1 P + \dots + a_{k-1}P^{k-1} \mid a_i \in \mathbb{F}_q\}$, a field with $q^k$ elements. We also have the natural field isomorphism $\phi: \mathbb{F}_{q^k} \to \mathbb{F}_q[P]$. (1)

For any positive integer $n$, we denote by $\mathcal{P}_q(n)$ the set of all vector subspaces of $\mathbb{F}_q^n$. The Grassmann variety $\mathcal{G}_q(k,n)$ is the set of all $k$-dimensional subspaces of $\mathbb{F}_q^n$. Any subspace $\mathcal{U} \in \mathcal{G}_q(k,n)$ can be generated by the rows of some full-rank matrix $U \in \mathbb{F}_q^{k\times n}$. In that case, we write $\mathcal{U} = \mathrm{rowsp}(U)$ and say that $U$ is a generator matrix of $\mathcal{U}$. By taking the generator matrix in reduced row echelon form (RREF), we get uniqueness in the matrix representation of the subspace $\mathcal{U}$.
Let us take $n = ks$ with $k > 1$. The field isomorphism $\phi$ provided by (1), in turn, naturally induces a map $\varphi$ between $\mathcal{P}_{q^k}(s)$ and $\mathcal{P}_q(ks)$ given by applying $\phi$ entrywise: if $\mathcal{U} = \mathrm{rowsp}(U)$ for a generator matrix $U \in \mathbb{F}_{q^k}^{m\times s}$, then $\varphi(\mathcal{U}) = \mathrm{rowsp}(\phi(U))$, where $\phi(U)$ denotes the $mk \times sk$ matrix obtained by replacing every entry of $U$ by its image under $\phi$. (2)

This map is known as field reduction since it maps subspaces over $\mathbb{F}_{q^k}$ into subspaces over the subfield $\mathbb{F}_q$ (see [8,17,21,22]). Let us recall some useful properties of the map $\varphi$ pointed out in [17] that we will use in Sect. 3.2.2.
Constant dimension codes
The Grassmannian $\mathcal{G}_q(k,n)$ can be considered as a metric space with the subspace distance defined as $d_S(\mathcal{U},\mathcal{V}) = \dim(\mathcal{U}+\mathcal{V}) - \dim(\mathcal{U}\cap\mathcal{V}) = 2\big(k - \dim(\mathcal{U}\cap\mathcal{V})\big)$ (3) for all $\mathcal{U},\mathcal{V} \in \mathcal{G}_q(k,n)$ (see [13]).
A constant dimension (subspace) code of dimension $k$ and length $n$ is any nonempty subset $\mathcal{C} \subseteq \mathcal{G}_q(k,n)$. The minimum subspace distance of the code $\mathcal{C}$ is defined as $d_S(\mathcal{C}) = \min\{d_S(\mathcal{U},\mathcal{V}) \mid \mathcal{U},\mathcal{V} \in \mathcal{C},\ \mathcal{U}\neq\mathcal{V}\}$ (see [26] and references therein, for instance). It follows that the minimum distance of a constant dimension code $\mathcal{C}$ is upper-bounded by $d_S(\mathcal{C}) \le 2\min\{k,\, n-k\}$. (4)

Constant dimension codes $\mathcal{C} \subseteq \mathcal{G}_q(k,n)$ in which the distance between any pair of different codewords is $d_S(\mathcal{C})$ are said to be equidistant. For such codes, there exists some value $c < k$ such that, given two different subspaces $\mathcal{U},\mathcal{V} \in \mathcal{C}$, it holds that $\dim(\mathcal{U}\cap\mathcal{V}) = c$. Hence, the minimum distance of the code is precisely $d_S(\mathcal{C}) = 2(k-c)$, and $\mathcal{C}$ is also called an equidistant $c$-intersecting constant dimension code.
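The subspace distance can be evaluated directly from generator matrices, since $\dim(\mathcal{U}+\mathcal{V})$ is the rank of the stacked matrix. A small sketch over GF(2) (our own illustrative code, not from the paper):

```python
import numpy as np

def rank_gf2(M):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    M = np.array(M, dtype=int) % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]      # move pivot row up
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] = (M[r] + M[rank]) % 2      # eliminate the column
        rank += 1
    return rank

def subspace_distance(U, V):
    """d_S(U, V) = dim(U + V) - dim(U ∩ V) = 2*dim(U + V) - dim U - dim V,
    for generator matrices U, V over GF(2)."""
    return 2 * rank_gf2(np.vstack([U, V])) - rank_gf2(U) - rank_gf2(V)
```

For example, two distinct lines of $\mathbb{F}_2^3$ are at distance 2, matching the bound (4) for $k = 1$.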
In case the value $c$ is the minimum possible dimension of the intersection between $k$-dimensional subspaces of $\mathbb{F}_q^n$, that is, $c = \max\{0,\, 2k-n\}$, equidistant $c$-intersecting codes attain the bound given in (4). In particular, for dimensions $k \le \frac{n}{2}$, these codes are $0$-intersecting codes known as partial spreads. The cardinality of any partial spread $\mathcal{C}$ in $\mathcal{G}_q(k,n)$ always satisfies $|\mathcal{C}| \le \left\lfloor \frac{q^n-1}{q^k-1} \right\rfloor$ (5) (see [9, Lemma 7]). Partial spread codes and equidistant codes have been studied in [6,9,10]. Whenever $k$ divides $n$, the previous bound is attained by the so-called spread codes (or $k$-spreads) of $\mathbb{F}_q^n$. Notice that a $k$-spread $\mathcal{S}$ is a subset of $\mathcal{G}_q(k,n)$ whose elements give a vector space partition of $\mathbb{F}_q^n$. Spreads are classical objects coming from finite geometry (see [25], for instance). For further information related to spreads in the network coding framework, we refer the reader to [8,21,22,26].
The following spread is due to Segre [25]. In the network coding setting, it was presented for the first time in [21] as a construction of a spread code. Denote by $GL_k(q)$ the general linear group of degree $k$ over the field $\mathbb{F}_q$. Let $P \in GL_k(q)$ be the companion matrix of a monic irreducible polynomial in $\mathbb{F}_q[x]$. We will write $I_k$ and $0_k$ to denote the identity matrix and the zero matrix of size $k \times k$, respectively. Take $s \in \mathbb{N}$ such that $n = sk$. Then, the following family of $k$-dimensional subspaces is a spread code: $S(s,k,P) = \{\mathrm{rowsp}(A) \mid A \in \Sigma\}$, (6) where $\Sigma$ is the set of $k \times ks$ block matrices $(A_1 | \cdots | A_s)$, with blocks $A_i \in \mathbb{F}_q[P] \cup \{0_k\}$ and the first nonzero block from the left equal to $I_k$.
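The construction is easy to reproduce in the smallest nontrivial case. The following sketch (our own illustrative code) builds $S(2,2,P)$ for $q = 2$, $k = 2$, $s = 2$, i.e. a $2$-spread of $\mathbb{F}_2^4$ with $(2^4-1)/(2^2-1) = 5$ pairwise trivially intersecting planes:

```python
import numpy as np

def rank_gf2(M):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    M = np.array(M, dtype=int) % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] = (M[r] + M[rank]) % 2
        rank += 1
    return rank

# Companion matrix of the monic irreducible polynomial x^2 + x + 1 over F_2
P = np.array([[0, 1], [1, 1]])
I2 = np.eye(2, dtype=int)
Z2 = np.zeros((2, 2), dtype=int)
# F_2[P] = {a0*I + a1*P}: the field with q^k = 4 elements (0 included)
F2P = [(a0 * I2 + a1 * P) % 2 for a0 in (0, 1) for a1 in (0, 1)]

# Spread S(2, 2, P): generator matrices with first nonzero block I_2
spread = [np.hstack([I2, A]) for A in F2P] + [np.hstack([Z2, I2])]
```

Two distinct spread members intersect trivially exactly because the difference of two blocks in $\mathbb{F}_2[P]$ is again a (hence invertible) nonzero field element.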
Remark 2.2
Notice that the matrices in $\Sigma$ are in reduced row echelon form, and it is clear that the field reduction map $\varphi$ defined in (2) gives a bijection between the Grassmannian of lines $\mathcal{G}_{q^k}(1,s)$ and the spread code $S(s,k,P)$. We will come back to this fact in Sect. 3.2.2.
Given a constant dimension code $\mathcal{C} \subseteq \mathcal{G}_q(k,n)$, the dual code of $\mathcal{C}$ is the subset of $\mathcal{G}_q(n-k,n)$ given by $\mathcal{C}^\perp = \{\mathcal{U}^\perp \mid \mathcal{U} \in \mathcal{C}\}$, where $\mathcal{U}^\perp$ is the orthogonal of $\mathcal{U}$ with respect to the usual inner product in $\mathbb{F}_q^n$. In [13], it was proved that $\mathcal{C}$ and $\mathcal{C}^\perp$ have the same cardinality and minimum distance. Notice that the dual of a partial spread of dimension $k \le \frac{n}{2}$ is an equidistant $(n-2k)$-intersecting code of dimension $n-k$, and conversely.
Flag codes
Subspace codes were introduced for the first time in [13] as error-correcting codes in random network coding. In that paper, the authors propose a suitable network channel with a single transmitter and several receivers that is used just once, so that subspace codes can be considered as one-shot codes. The use of the channel more than once was suggested originally in [23] and gave rise to the so-called multishot codes as a generalization of subspace codes. We call any nonempty subset $\mathcal{C}$ of $\mathcal{P}_q(n)^r$ a multishot code of length $r \ge 1$, or just an $r$-shot code. In particular, if the codewords in $\mathcal{C}$ are sequences of nested subspaces, we say that $\mathcal{C}$ is a flag code. Flag codes were first studied as orbits of group actions in [19], and, in [16], the reader can find a study of bounds on the cardinality of full flag codes with a prescribed distance. Let us recall some concepts in the setting of flag codes.
A flag of type $(t_1,\dots,t_r)$ on $\mathbb{F}_q^n$, with $0 < t_1 < \dots < t_r < n$, is a sequence $\mathcal{F} = (\mathcal{F}_1,\dots,\mathcal{F}_r)$ of nested subspaces $\mathcal{F}_1 \subsetneq \dots \subsetneq \mathcal{F}_r$ of $\mathbb{F}_q^n$ with $\dim(\mathcal{F}_i) = t_i$. With this notation, $\mathcal{F}_i$ is said to be the $i$-th subspace of $\mathcal{F}$. In case the type vector is $(1,2,\dots,n-1)$, we say that $\mathcal{F}$ is a full flag.
The space of flags of type $(t_1,\dots,t_r)$ on $\mathbb{F}_q^n$ is denoted by $\mathcal{F}_q((t_1,\dots,t_r),n)$ and can be endowed with the flag distance $d_f$ that naturally extends the subspace distance defined in (3): given two flags $\mathcal{F} = (\mathcal{F}_1,\dots,\mathcal{F}_r)$ and $\mathcal{F}' = (\mathcal{F}'_1,\dots,\mathcal{F}'_r)$ in $\mathcal{F}_q((t_1,\dots,t_r),n)$, the flag distance between them is $d_f(\mathcal{F},\mathcal{F}') = \sum_{i=1}^r d_S(\mathcal{F}_i,\mathcal{F}'_i)$. A flag code of type $(t_1,\dots,t_r)$ on $\mathbb{F}_q^n$ is defined as any non-empty subset $\mathcal{C} \subseteq \mathcal{F}_q((t_1,\dots,t_r),n)$, and its minimum distance is $d_f(\mathcal{C}) = \min\{d_f(\mathcal{F},\mathcal{F}') \mid \mathcal{F},\mathcal{F}' \in \mathcal{C},\ \mathcal{F}\neq\mathcal{F}'\}$. Given a type vector $(t_1,\dots,t_r)$, for every $i = 1,\dots,r$, we define the $i$-projection to be the map $p_i: \mathcal{F}_q((t_1,\dots,t_r),n) \to \mathcal{G}_q(t_i,n)$ sending $\mathcal{F}$ to $\mathcal{F}_i$. The $i$-projected code of $\mathcal{C}$ is the set $\mathcal{C}_i = p_i(\mathcal{C})$. By applying the bound (4) to each shot, the minimum distance of a flag code satisfies $d_f(\mathcal{C}) \le 2\sum_{i=1}^r \min\{t_i,\, n-t_i\}$. (10) In particular, if $\mathcal{C}$ is a full flag code, we have that (10) becomes $\frac{n^2}{2}$ for $n$ even and $\frac{n^2-1}{2}$ for $n$ odd.
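Since the maximum flag distance decomposes shot by shot, the bound for any type vector is a one-line computation (the helper name is ours, chosen to match the stated full-flag values):

```python
def max_flag_distance(type_vector, n):
    """Upper bound on the flag distance: each shot contributes at most
    the maximum subspace distance 2*min(t, n - t) in G_q(t, n)."""
    return sum(2 * min(t, n - t) for t in type_vector)
```

For full flags this reproduces $n^2/2$ when $n$ is even and $(n^2-1)/2$ when $n$ is odd.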
Matchings in graphs
Now we introduce some basic concepts and results on graphs in order to use them in the construction of a specific family of flag codes with the maximum distance in Sect. 3. All these definitions and results together with their proofs can be found in [3].
A graph $G = (V, E)$ consists of a set $V$ of vertices and a set $E$ of edges joining pairs of vertices; an edge $e = (v, v')$ is said to be incident with $v$ and $v'$. Two edges are adjacent if they have a common vertex. Given a vertex $v \in V$, we call the number of edges incident with $v$ the degree of $v$. A graph $G$ is said to be $k$-regular if each vertex in $G$ has degree $k$.
On the other hand, a set of vertices (or edges) is independent if it does not contain adjacent elements. A set $M \subseteq E$ of independent edges of a graph $G$ is called a matching, and a matching is perfect if every vertex of $G$ is an endpoint of some edge in $M$. A graph $G$ is bipartite if the vertex set can be partitioned into two sets $V = A \cup B$ such that there is no pair of adjacent vertices in $A$ nor in $B$. For this class of graphs, perfect matchings are just bijections between $A$ and $B$ given by a subset of edges of the graph connecting each vertex in $A$ with a vertex in $B$. The following classic result, whose proof can be found in [3] (pages 37-38), states the existence of perfect matchings in a family of graphs:

Theorem 2.3 Any $k$-regular bipartite graph admits a perfect matching.
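Theorem 2.3 only guarantees existence; to compute a matching explicitly, an augmenting-path search in the style of the algorithms discussed below suffices. A compact sketch (our own illustrative implementation):

```python
def perfect_matching(adj, n_right):
    """Maximum bipartite matching by Kuhn's augmenting-path algorithm.
    adj[u] lists the right-vertices adjacent to left-vertex u.
    Returns match_left, mapping each left vertex to its partner (or None);
    for a k-regular bipartite graph the resulting matching is perfect."""
    match_right = [None] * n_right

    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current partner can be rematched elsewhere
            if match_right[v] is None or try_augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    for u in range(len(adj)):
        try_augment(u, set())
    match_left = [None] * len(adj)
    for v, u in enumerate(match_right):
        if u is not None:
            match_left[u] = v
    return match_left
```

On a 2-regular bipartite graph (a 6-cycle, say), the algorithm returns a perfect matching, as Theorem 2.3 predicts.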
The search for maximum matchings in bipartite graphs is a classical problem in graph theory that started in the 1930s with the works of König and of Egerváry (see [5,14]). Later, Kuhn and M. Hall tackled this problem and presented the first formal algorithms to find perfect matchings in bipartite graphs (see [11,15]); this type of algorithm is known as the "Hungarian method". Finding matchings in non-bipartite graphs is a more difficult problem, and the first efficient algorithm in this direction was provided by Edmonds in 1965 (see [4]). For further information on matching theory, see [20] and references therein.
We will use the previous Theorem through Sect. 3 in order to give perfect matchings of a particular regular bipartite graph of our interest. Such matchings will allow us to construct disjoint flag codes of a specific type as we will show later.
Optimum distance flag codes from spreads
Flag codes attaining the bound in (10) are called optimum distance flag codes and can be characterized in terms of their projected codes in the following way: Theorem 3.1 (see [2]) Let C be a flag code of type (t 1 , . . . , t r ). The following statements are equivalent: (i) C is an optimum distance flag code.
(ii) $\mathcal{C}$ is disjoint (that is, different flags in $\mathcal{C}$ have pairwise different subspaces in every shot, so that $|\mathcal{C}_i| = |\mathcal{C}|$ for every $i$), and every projected code $\mathcal{C}_i$ attains the maximum possible subspace distance.
As a consequence, the $i$-projected codes of an optimum distance flag code have to be partial spreads if $t_i \le \frac{n}{2}$ and equidistant $(2t_i-n)$-intersecting subspace codes for dimensions $t_i > \frac{n}{2}$. As mentioned in Sect. 2.2, whenever $k$ divides $n$, $k$-spread codes are partial spread codes (constant dimension codes with maximum distance) of the best size. This good property of spreads naturally gives rise to the question of finding optimum distance flag codes having a spread as their $i$-projected code when the dimension $t_i$ is a divisor of $n$. Note that, due to the disjointness property, we could have at most one spread among the projected codes. In [2], it was proved that having a spread as a projected code makes optimum distance flag codes attain the maximum possible size as well.
Theorem 3.2 [2, Theorem 3.12] Let $k$ be a divisor of $n$ and assume that $\mathcal{C}$ is an optimum distance flag code of type $(t_1,\dots,t_r)$ on $\mathbb{F}_q^n$. If some $t_i = k$, then $|\mathcal{C}| \le \frac{q^n-1}{q^k-1}$, and equality holds if, and only if, the $i$-projected code $\mathcal{C}_i$ is a $k$-spread of $\mathbb{F}_q^n$.
In the same paper, it is shown that, for the full type vector (1, . . . , n − 1), it is possible to find optimum distance full flag codes having a spread as a k-projected code only if either n = 2k or n = 3 and k = 1. Observe that there, the authors always work with full flag codes (the full type vector is fixed) and then provide conditions on n and k. Now, we deal with the inverse problem: given n and a divisor k of n, we look for conditions on the type vector of an optimum distance flag code on F n q having a k-spread as a projected code. We conclude that not all the type vectors are allowed. Let us describe the admissible ones and provide a construction of optimum distance flag codes for them, based on the existence of perfect matchings in a specific graph.
Admissible type vectors
This paper is devoted to exploring the existence of optimum distance flag codes of a general type vector $(t_1,\dots,t_r)$, not necessarily the full type, having a spread as their $i$-projected code when $t_i$ is a divisor of $n$. The next result states the necessary conditions that the type vector $(t_1,\dots,t_r)$ must satisfy.

Theorem 3.3 Let $\mathcal{C}$ be an optimum distance flag code of type $(t_1,\dots,t_r)$ on $\mathbb{F}_q^n$. Assume that some dimension $t_i = k$ divides $n$ and the associated projected code $\mathcal{C}_i$ is a $k$-spread. Then, for each $j \in \{1,\dots,r\}$, either $t_j \le k$ or $t_j \ge n-k$.
Proof Notice that in case $i = r$, clearly $t_j \le t_r = k$ for every $j = 1,\dots,r$. Suppose that $i < r$. Let us show that $t_{i+1} \ge n-k$.

Since $t_i = k$ divides $n$, we can write $n = sk$ for some $s \ge 2$. If $s = 2$, we have that $n-k = k$ and the result trivially holds. In case $s > 2$, then $s < 2(s-1)$ and we have that $2k < n < 2(s-1)k = 2(n-k)$. We deduce that $k \le \frac{n}{2} < n-k$. Now, by contradiction, assume that $t_{i+1} < n-k$. We distinguish two possibilities: (1) If $k < t_{i+1} \le \frac{n}{2}$, since $\mathcal{C}$ is an optimum distance flag code, by Theorem 3.1, its projected code $\mathcal{C}_{i+1}$ must be a partial spread of dimension $t_{i+1} > k$ and cardinality $|\mathcal{C}_{i+1}| = |\mathcal{C}_i| = \frac{q^n-1}{q^k-1}$, in contradiction with (5). (2) If $\frac{n}{2} < t_{i+1} < n-k$, the projected code $\mathcal{C}_{i+1}$ has to be an equidistant $(2t_{i+1}-n)$-intersecting constant dimension code. In other words, the subspace distance of $\mathcal{C}_{i+1}$ is $2(n-t_{i+1})$. Hence, its dual code $\mathcal{C}_{i+1}^\perp$ is a partial spread of dimension $n-t_{i+1} > k$ and cardinality $|\mathcal{C}_{i+1}^\perp| = |\mathcal{C}_i| = \frac{q^n-1}{q^k-1}$, which again contradicts (5).

We conclude that $t_{i+1} \ge n-k$.
Remark 3.4
This result provides a necessary condition on the type vector of any optimum distance flag code on $\mathbb{F}_q^n$ having a $k$-spread as a projected code. According to this, we say that a type vector is admissible if it satisfies the conditions in Theorem 3.3, in other words, if $k \in \{t_1,\dots,t_r\} \subseteq \{1,\dots,k,\, n-k,\dots,n-1\}$.
Notice that every type vector containing the dimension $k$ is admissible when $n = 2k$ since, in that case, it holds that $k = n-k$. This particular case has already been studied in [2], where it was proved that optimum distance flag codes of any type vector containing the dimension $k$ can be constructed from a $k$-spread (planar spread). Moreover, those codes were shown to attain the maximum possible cardinality as well. In the next subsection, we tackle the problem of constructing flag codes attaining the maximum distance and having a $k$-spread as their projected code for any admissible type vector in the general case $n = ks$, for $s \ge 3$.
A construction based on perfect matchings
This part is devoted to describe a specific construction of optimum distance flag codes on F n q from a k-spread of a given admissible type vector (t 1 , . . . , t r ). By means of Theorem 3.3, if such codes exist, their type vector must satisfy k ∈ {t 1 , . . . , t r } ⊆ {1, . . . , k, n − k, . . . , n − 1}. For the sake of simplicity, we undertake this construction in several phases: we consider first the admissible type vector (1, n − 1), that is, the construction of optimum distance flag codes from the spread of lines. Secondly, by using the field reduction map defined in Sect. 2.1, we properly translate the construction in the first step to get optimum distance flag codes of type vector (k, n − k) having the k-spread S introduced in (6) as its first projected code. Then, taking advantage of certain properties of the k-spread S, we extend the construction in the second step to obtain optimum distance flag codes of the full admissible type, that is, (1, . . . , k, n − k, . . . , n − 1). Finally, this last construction gives optimum distance flag codes of any admissible type vector after a suitable puncturing process. Let us explain in detail all these stages.
The type vector (1, n − 1): starting from the spread of lines
Take $n \ge 3$. In this section, we provide a construction of optimum distance flag codes on $\mathbb{F}_q^n$ from the spread of lines, that is, having the Grassmannian $\mathcal{G}_q(1,n)$ as a projected code. By Theorem 3.3, the only admissible type vector in this case is $(1, n-1)$. In other words, to give an optimum distance flag code from the spread of lines of $\mathbb{F}_q^n$, we have to provide a family of $|\mathcal{G}_q(1,n)|$ pairwise disjoint flags of length two, all of them consisting of a line contained in a hyperplane. To do so, we translate this problem into the one of finding perfect matchings in bipartite regular graphs, using the results given in Sect. 2.4. Let us make this precise.
Consider the graph $G = (V, E)$ with set of vertices $V = \mathcal{G}_q(1,n) \cup \mathcal{G}_q(n-1,n)$ and set of edges $E = \{(l,H) \mid l \in \mathcal{G}_q(1,n),\ H \in \mathcal{G}_q(n-1,n),\ l \subset H\}$. Notice that the set of vertices of $G$ consists of the lines and hyperplanes of $\mathbb{F}_q^n$, and an edge $(l,H)$ of $G$ exists if, and only if, the line $l$ is contained in the hyperplane $H$. With this notation, the next result holds.
Proposition 3.5 The graph $G = (V,E)$ is bipartite and $\frac{q^{n-1}-1}{q-1}$-regular.

Proof It is clear that $G$ is a bipartite graph by definition. Moreover, the number of hyperplanes containing a fixed line coincides with the number of lines lying on a given hyperplane. This number is precisely $\frac{q^{n-1}-1}{q-1}$. Then, the degree of any vertex in $G$ coincides with this value, and hence $G$ is $\frac{q^{n-1}-1}{q-1}$-regular.

Note that the problem of giving a family of flags with the desired conditions can be seen as the combinatorial problem of giving a perfect matching in $G$. Since $G$ is a regular bipartite graph, we can use Theorem 2.3 to conclude that there exist perfect matchings in $G$. More precisely, there exists a subset $M \subset E$ that matches $V$, that is, each edge in $M$ has one extremity in $\mathcal{G}_q(1,n)$ and the other one in $\mathcal{G}_q(n-1,n)$; in particular, the set $M$ has a number of edges equal to $|\mathcal{G}_q(1,n)|$. This matching $M$ naturally induces a bijection, also denoted by $M$, between the set of lines and the set of hyperplanes of $\mathbb{F}_q^n$. Moreover, by the definition of $E$, the map $M: \mathcal{G}_q(1,n) \to \mathcal{G}_q(n-1,n)$ satisfies $l \subset M(l)$ for any $l \in \mathcal{G}_q(1,n)$. This fact allows us to construct a family of flags of type $(1, n-1)$ on $\mathbb{F}_q^n$ in the following way: $\mathcal{C} = \{(l, M(l)) \mid l \in \mathcal{G}_q(1,n)\}$. (11) Let us see that the family $\mathcal{C}$ is a flag code with projected codes $\mathcal{C}_1 = \mathcal{G}_q(1,n)$ and $\mathcal{C}_2 = \mathcal{G}_q(n-1,n)$ satisfying the desired conditions.

Theorem 3.6 Given $n \ge 3$, the code $\mathcal{C}$ defined in (11) is an optimum distance flag code of type $(1, n-1)$ on $\mathbb{F}_q^n$ with the spread of lines as a projected code.
Proof Since the map $M$ defined above is bijective, the code $\mathcal{C}$ is a disjoint flag code with projected codes $\mathcal{C}_1 = \mathcal{G}_q(1,n)$ and $\mathcal{C}_2 = \mathcal{G}_q(n-1,n)$. In particular, since $d_S(\mathcal{C}_1) = d_S(\mathcal{C}_2) = 2$ is the maximum possible distance for constant dimension codes of dimensions $1$ and $n-1$ in $\mathbb{F}_q^n$, by Theorem 3.1, we have that $\mathcal{C}$ is an optimum distance flag code with $\mathcal{G}_q(1,n)$ as a projected code.
Remark 3.7
Observe that, by means of Theorem 3.2, our code $\mathcal{C}$ defined as above attains the maximum possible cardinality for flag codes of type $(1, n-1)$ and distance $4$, which is $\frac{q^n-1}{q-1}$. For the particular case $n = 3$, the previous bound was given in [16], where the author studied bounds for the cardinality of full flag codes with a given distance. Observe that this is the only case in which optimum distance full flag codes with a spread as a projected code can be constructed, apart from the case $n = 2k$, as explained in [2].
Note that, although Theorem 2.3 guarantees the existence of perfect matchings in regular bipartite graphs, in order to provide a concrete construction of optimum distance flag codes of type $(1, n-1)$ on $\mathbb{F}_q^n$, we need to compute an explicit matching in $G$. As said in Sect. 2, this can be done by using known algorithms (see [7,11,12,15], for instance). In Sect. 3.3, we exhibit an optimum distance flag code of type $(2,4)$ constructed from a perfect matching obtained with GAP.
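For the smallest case $n = 3$ and $q = 2$, the whole construction (11) can be carried out explicitly without computer algebra systems: build the line-hyperplane incidence graph of $\mathbb{F}_2^3$ and extract a perfect matching with an augmenting-path search. All helper names below are ours:

```python
from itertools import product

def kuhn_matching(adj, n_right):
    """Maximum bipartite matching via augmenting paths (Kuhn's algorithm)."""
    match_right = [None] * n_right

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_right[v] is None or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    for u in range(len(adj)):
        try_augment(u, set())
    return match_right

# Lines of F_2^3: each nonzero vector spans its own line (q = 2).
lines = [v for v in product((0, 1), repeat=3) if any(v)]
# Hyperplanes of F_2^3: kernels of the nonzero linear forms a . x = 0.
hyperplanes = [a for a in product((0, 1), repeat=3) if any(a)]
incident = lambda v, a: sum(x * y for x, y in zip(v, a)) % 2 == 0
adj = [[j for j, a in enumerate(hyperplanes) if incident(v, a)]
       for v in lines]
match_right = kuhn_matching(adj, len(hyperplanes))
# Flag code of type (1, 2): one flag (line, matched hyperplane) per line.
flag_code = [(lines[u], hyperplanes[j])
             for j, u in enumerate(match_right) if u is not None]
```

The incidence graph is 3-regular and bipartite, so by Theorem 2.3 the matching found is perfect: the resulting family consists of $\frac{2^3-1}{2-1} = 7$ pairwise disjoint flags, each a line inside its matched hyperplane, exactly as in Theorem 3.6.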
The type vector (k, n − k)
Take $n = ks$ a natural number with $k \ge 2$ and $s \ge 3$. In order to construct optimum distance flag codes of type $(k, n-k)$ on $\mathbb{F}_q^n$, we will use the construction of optimum distance flag codes of type $(1, s-1)$ on $\mathbb{F}_{q^k}^s$ given in Sect. 3.2.1 together with the field reduction map defined in Sect. 2.1. Let us explain this construction.
Let M : G q k (1, s) → G q k (s − 1, s) be a bijection such that l ⊂ M(l) for any l ∈ G q k (1, s). By Theorem 3.6, we know that the code C = {(l, M(l)) | l ∈ G q k (1, s)} is an optimum distance flag code of type (1, s − 1) on F s q k . In particular, the code C is disjoint. On the other hand, given P ∈ GL k (q) the companion matrix of a monic irreducible polynomial of degree k in F q [x], the associated field isomorphism φ : F q k → F q [P] induces the field reduction ϕ : P q k (s) −→ P q (ks) as in (2). Notice that, by Proposition 2.1, for any m ∈ {1, . . . , s − 1} it holds that ϕ(G q k (m, s)) ⊆ G q (mk, sk). Moreover, given U, V subspaces of F s q k with U ⊆ V, then ϕ(U) ⊆ ϕ(V). As a consequence, if (l, M(l)) ∈ C, then (ϕ(l), ϕ(M(l))) is a flag of type (k, n − k) on F n q . This fact allows us to define a family of flags over F n q as follows:

C = {(ϕ(l), ϕ(M(l))) | l ∈ G q k (1, s)}. (12)
By Remark 2.2, we know that ϕ gives a bijection between G q k (1, s) and the k-spread S = S(s, k, P) defined in (6). Hence, the family C is a flag code with projected codes C 1 = ϕ(G q k (1, s)) = S and C 2 = ϕ(G q k (s − 1, s)), and the following result holds:

Theorem 3.8 The code C defined in (12) is an optimum distance flag code of type (k, n − k) on F n q having the spread S as a projected code.
Proof By the injectivity of ϕ (see Proposition 2.1), we have that | C 2 | = |ϕ(G q k (s − 1, s))| = |G q k (s − 1, s)|. As |G q k (1, s)| = |G q k (s − 1, s)|, we conclude that | C| = | C 1 | = | C 2 | and C is disjoint. Let us now prove that the projected codes of C are constant dimension codes with the maximum possible distance. Since the projected code C 1 is a spread, it suffices to check this property for C 2 . Given any two different subspaces ϕ(H ), ϕ(H ) ∈ C 2 = ϕ(G q k (s − 1, s)), by means of Proposition 2.1, the hyperplanes H , H are different. Moreover, since the intersection of any two hyperplanes in F s q k is an (s − 2)-dimensional subspace of F s q k , we have that

dim(ϕ(H ) ∩ ϕ(H )) = k(s − 2) = n − 2k.

Notice that n − 2k = 2(n − k) − n is the minimum among the possible dimensions of the intersection of subspaces in G q (n − k, n). Hence, C 2 is an equidistant (n − 2k)-intersecting constant dimension code and, by applying Theorem 3.1, we are done.
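As a quick numerical sanity check of these cardinalities, the following Python sketch (with a hypothetical helper `spread_size`) computes |G q k (1, s)| = (q^n − 1)/(q^k − 1), the size of a k-spread of F n q, and verifies the divisibility guaranteed by k | n.

```python
def spread_size(q: int, k: int, n: int) -> int:
    """Cardinality of a k-spread of F_q^n (requires k | n)."""
    assert n % k == 0
    size, rem = divmod(q**n - 1, q**k - 1)
    assert rem == 0  # k | n guarantees (q^k - 1) divides (q^n - 1)
    return size

# The case q = 2, k = 2, s = 3 (so n = 6), used in the example of Sect. 3.3:
q, k, s = 2, 2, 3
L = spread_size(q, k, k * s)  # |C| = |G_{q^k}(1, s)| = (q^n - 1)/(q^k - 1)
assert L == 21
# The spread partitions the nonzero vectors of F_q^n into L blocks of q^k - 1:
assert L * (q**k - 1) == q**(k * s) - 1
```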
The full admissible type vector
In this subsection, we finally tackle the construction of optimum distance flag codes of the full admissible type, that is, of type (1, . . . , k, n − k, . . . , n − 1) on F n q having the k-spread S defined in (6) as a projected code. To do this, we start from the optimum distance flag code C of type (k, n − k) defined in (12). Recall that the construction of this code depends on the choice of a bijection M : G q k (1, s) → G q k (s − 1, s) with l ⊂ M(l) for every line l. Let us fix an order in the set of lines of F s q k and write G q k (1, s) = {l 1 , l 2 , . . . , l L }, where L = |G q k (1, s)|. This order in G q k (1, s) naturally induces respective orders in the sets G q k (s − 1, s), S and H = ϕ(G q k (s − 1, s)), by setting H i = M(l i ), S i = ϕ(l i ) and ϕ(H i ), for i = 1, . . . , L. Now, given a hyperplane H i = M(l i ) of F s q k , we can write

H i = l i ⊕ l i 2 ⊕ · · · ⊕ l i s−1

for l i 2 , . . . , l i s−1 some lines of F s q k . By the properties of the field reduction ϕ described in Proposition 2.1, we have that

ϕ(H i ) = S i ⊕ S i 2 ⊕ · · · ⊕ S i s−1 .

So, any subspace in H can be decomposed as a direct sum of subspaces in S. This representation is not unique, since H i can be written as a direct sum of different collections of lines. Moreover, given that for any line l i s in F s q k \ H i it holds that H i ⊕ l i s = F s q k , by using Proposition 2.1 again, we conclude that ϕ(H i ) ⊕ S i s = F n q . As a consequence, the rows of the matrix W i , obtained by stacking generator matrices of S i , S i 2 , . . . , S i s−1 , S i s (in this order), form a basis of F n q . Moreover, any collection of j ≤ n rows of W i generates a j-dimensional subspace of F n q . Denote by W ( j) i the submatrix of W i given by its first j rows. We also denote by W ( j) i , by abuse of notation, the subspace of F n q spanned by these rows. In addition, for any 1 ≤ j ≤ k, it holds that W ( j) i ⊆ W (k) i = S i and ϕ(H i ) = W (n−k) i ⊆ W (n−k+ j−1) i . This fact allows us to define F W i , the flag of type (1, . . . , k, n − k, . . . , n − 1) associated with W i , in the following way:

F W i = (W (1) i , . . . , W (k) i , W (n−k) i , . . . , W (n−1) i ).

Finally, given the family of matrices {W i } L i=1 , we define the family of associated flags of type (1, . . . , k, n − k, . . . , n − 1):

C = {F W i | i = 1, . . . , L}. (15)

Let us see that C is an optimum distance flag code. To do so, we analyze the structure of its projected codes C j = {W ( j) i | i = 1, . . . , L} and C k+ j = {W (n−k+ j−1) i | i = 1, . . . , L} for all j = 1, . . . , k.
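The fact that any j ≤ n rows of an invertible matrix span a j-dimensional subspace, so that the nested row spans form a flag, is easy to check computationally. Below is a minimal Python sketch over F 2 with a hypothetical basis matrix W (not one of the W i above), using Gaussian elimination mod 2.

```python
def rank_gf2(rows):
    """Rank of a binary matrix over F_2, by Gaussian elimination."""
    rows = [list(r) for r in rows]
    rank = 0
    for c in range(len(rows[0])):
        piv = next((r for r in range(rank, len(rows)) if rows[r][c]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][c]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank

# A basis of F_2^6 as the rows of an invertible matrix; for k = 2, n = 6 the
# nested prefixes give, after puncturing, a flag of the full admissible type
# (1, 2, 4, 5).
W = [
    [1, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 0, 0],
    [1, 0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 1],
]
assert rank_gf2(W) == 6                                  # W is invertible
assert all(rank_gf2(W[:j]) == j for j in range(1, 7))    # dim of j-th prefix is j
```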
Proposition 3.9
Given the flag code C defined as above, for each j = 1, . . . , k the following is satisfied: (1) The code C j is a partial spread in G q ( j, n) with cardinality L = (q^n − 1)/(q^k − 1). (2) The code C k+ j is an equidistant (n − 2k + 2( j − 1))-intersecting constant dimension code in G q (n − k + j − 1, n) with cardinality L = (q^n − 1)/(q^k − 1). As a consequence, C k+ j is a constant dimension code of maximum distance.
In particular, we have that C k = S and C k+1 = H.
Proof By construction, it is clear that C k = S and C k+1 = H. Now, for any 1 ≤ j ≤ k, given two different indices i 1 , i 2 ∈ {1, . . . , L}, we have that

W ( j) i 1 ∩ W ( j) i 2 ⊆ S i 1 ∩ S i 2 = {0}.

Hence, C j is a partial spread in the Grassmannian G q ( j, n) with |C j | = L.
To prove (2), consider subspaces W (n−k+ j−1) i 1 , W (n−k+ j−1) i 2 ∈ C k+ j with i 1 ≠ i 2 . We know that dim(ϕ(H i 1 ) ∩ ϕ(H i 2 )) = (s − 2)k = n − 2k, and then, the subspace sum ϕ(H i 1 ) + ϕ(H i 2 ) is the whole space F n q . As a consequence, W (n−k+ j−1) i 1 + W (n−k+ j−1) i 2 = F n q , and then it follows that

dim(W (n−k+ j−1) i 1 ∩ W (n−k+ j−1) i 2 ) = 2(n − k + j − 1) − n = n − 2k + 2( j − 1),

which is the minimum possible dimension of the intersection between subspaces of dimension n − k + j − 1 of F n q . Thus, we conclude that C k+ j is an equidistant (n − 2k + 2( j − 1))-intersecting constant dimension code with exactly L elements. In particular, we have that d S (C k+ j ) = 2(k − ( j − 1)) and C k+ j is a constant dimension code with the maximum distance.
Theorem 3.10
The flag code C defined in (15) is an optimum distance flag code of type (1, . . . , k, n − k, . . . , n − 1) on F n q with the k-spread S as a k-projected code. This code has cardinality |C| = L = (q^n − 1)/(q^k − 1) and distance d f (C) = 2k(k + 1). Proof By means of Proposition 3.9, we conclude that C is a disjoint flag code of cardinality L with projected codes attaining the maximum distance for their corresponding dimensions. Then, by Theorem 3.1, C is an optimum distance flag code, that is, d f (C) = 2k(k + 1).
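The distance value 2k(k + 1) can be recovered by summing the maximum subspace distance 2 · min(t, n − t) over the entries t of the full admissible type vector. A small Python check of this identity:

```python
def max_flag_distance(type_vector, n):
    """Sum of the maximum subspace distances 2*min(t, n-t) over the type vector."""
    return sum(2 * min(t, n - t) for t in type_vector)

# Full admissible type (1, ..., k, n-k, ..., n-1) for k = 2, s = 3, n = 6:
k, s = 2, 3
n = k * s
full_admissible = list(range(1, k + 1)) + list(range(n - k, n))
assert full_admissible == [1, 2, 4, 5]
assert max_flag_distance(full_admissible, n) == 2 * k * (k + 1) == 12

# The identity d_f = 2k(k+1) holds for any k >= 1 and n = ks with s >= 3:
for k in range(1, 6):
    for s in range(3, 6):
        n = k * s
        tv = list(range(1, k + 1)) + list(range(n - k, n))
        assert max_flag_distance(tv, n) == 2 * k * (k + 1)
```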
Remark 3.11
The code C defined in (15) attains the maximum possible distance for flag codes of type (1, . . . , k, n − k, . . . , n − 1) on F n q . Furthermore, by means of Theorem 3.2, it also has the best possible size among the optimum distance flag codes of the full admissible type vector on F n q .
The general case
Finally, in order to get an optimum distance flag code of any admissible type vector with a k-spread as a projected code, we apply a puncturing process to the code C defined in (15). This process was already used in [2] to get optimum distance flag codes having a planar spread as a projected code. Let us recall it. Fix an admissible type vector (t 1 , . . . , t r ), that is, a type vector such that k ∈ {t 1 , . . . , t r } ⊆ {1, . . . , k, n − k, . . . , n − 1}. Consider a flag F W i in the code C in (15). The punctured flag of type (t 1 , . . . , t r ) associated with F W i is the sequence

F W i (t 1 ,...,t r ) = (W (t 1 ) i , . . . , W (t r ) i ).

The punctured flag code of type (t 1 , . . . , t r ) associated with C is the code given by

C (t 1 ,...,t r ) = {F W i (t 1 ,...,t r ) | i = 1, . . . , L}.

Observe that the projected codes of C (t 1 ,...,t r ) are, in particular, projected codes of C. Hence, the next result follows straightforwardly from this fact, together with Theorem 3.2: given an admissible type vector (t 1 , . . . , t r ), the code C (t 1 ,...,t r ) defined as above is an optimum distance flag code on F n q with the spread S as a projected code. Its cardinality, which is L = (q^n − 1)/(q^k − 1), is maximum for optimum distance flag codes of this type.
Example
We conclude the paper with an example of our construction of an optimum distance flag code of type (2, 4) on F 6 2 having a 2-spread as its first projected code. To do this, we follow the steps given in Sect. 3.2.
Consider the bipartite graph G with vertex set G 4 (1, 3) ∪ G 4 (2, 3) and edges given by the containment relation, and let α be a primitive element of F 4 . Then, we have that F 4 = {0, 1, α, α 2 }. By using the package GRAPE of GAP and following the process described in [3], we have obtained a perfect matching M of G. Observe that every line l ∈ G 4 (1, 3) is a subspace of the (hyper)plane M(l). Even more, we have expressed every subspace M(l) as the row space of a 2 × 3 matrix whose first row is precisely a generator of the line l. In this way, we obtain the optimum distance flag code of type (1, 2) on F 3 4 given by C = {(l, M(l)) | l ∈ G 4 (1, 3)}. Now, let f (x) = x 2 + x + 1 be the minimal polynomial of α and consider its companion matrix P = (0 1; 1 1). If φ is the field isomorphism in (1), we have that φ(0) = 0 2 , φ(1) = I 2 and φ(α) = P. Taking the previous matching M and the field reduction ϕ induced by φ as in (2), we define the following optimum distance flag code of type (2, 4) on F 6 2 :

C = {(ϕ(l), ϕ(M(l))) | l ∈ G 4 (1, 3)}.
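The arithmetic behind φ can be verified directly: P must satisfy its minimal polynomial, so that P² = P + I mirrors α² = α + 1 in F 4. A short Python check, with 2 × 2 matrices over F 2 represented as tuples of tuples:

```python
# F_4 represented inside the 2x2 matrices over F_2 via the companion matrix of
# f(x) = x^2 + x + 1: phi(0) = 0, phi(1) = I, phi(alpha) = P, phi(alpha^2) = P^2.
def mat_mul(A, B, p=2):
    return tuple(tuple(sum(A[i][t] * B[t][j] for t in range(2)) % p
                       for j in range(2)) for i in range(2))

def mat_add(A, B, p=2):
    return tuple(tuple((A[i][j] + B[i][j]) % p for j in range(2)) for i in range(2))

O = ((0, 0), (0, 0))
I = ((1, 0), (0, 1))
P = ((0, 1), (1, 1))  # companion matrix of x^2 + x + 1

# P satisfies its minimal polynomial: P^2 + P + I = 0 over F_2 ...
assert mat_add(mat_add(mat_mul(P, P), P), I) == O
# ... hence P^2 = P + I, mirroring alpha^2 = alpha + 1 in F_4.
assert mat_mul(P, P) == mat_add(P, I)

# phi(F_4) = {0, I, P, P + I} is closed under multiplication: a field of 4 elements.
F4 = {O, I, P, mat_add(P, I)}
assert all(mat_mul(A, B) in F4 for A in F4 for B in F4)
```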
Conclusions and future work
In this paper, we have addressed the problem of obtaining flag codes of general type (t 1 , . . . , t r ) on a space F n q with the maximum possible distance and the property of having a k-spread as a projected code whenever k divides n. First, we have shown that such codes might not exist for an arbitrary type vector and have characterized the admissible ones: they have to satisfy the condition k ∈ {t 1 , . . . , t r } ⊆ {1, . . . , k, n − k, . . . , n − 1}.
Given an admissible type vector, we have proved the existence of optimum distance flag codes of such a type with a spread as a projected code by describing a gradual construction starting from type (1, n − 1), continuing with type (k, n − k), and finishing with the full admissible type (1, . . . , k, n − k, . . . , n − 1). This construction is mainly based on two ideas. On the one hand, we exploit the existence of perfect matchings in the bipartite graph whose vertices are the lines and the hyperplanes of F n q and whose edges are given by the containment relation. On the other hand, we use the properties of the field reduction map, which allow us to translate the spread of lines into a k-spread and to build our code from it. Our construction provides codes with the best possible size among optimum distance flag codes of any admissible type vector.
In future work, we will investigate the algebraic structure and features of this family of codes and explore other possible constructions. We will also study families of flag codes built from spreads that do not necessarily attain the maximum distance, as well as the existence and performance of decoding algorithms for them.
Funding Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
Data availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
In a free-electron laser equipped with variable-gap undulator modules, the technique of undulator tapering opens up the possibility to increase the radiation power beyond the initial saturation point, thus enhancing the efficiency of the laser. The effectiveness of the enhancement relies on the proper optimization of the taper profile. In this work, a multidimensional optimization approach is implemented empirically in the x-ray free-electron laser FLASH2. The empirical results are compared with numerical simulations.
Introduction
FLASH [1] is the free-electron laser (FEL) facility at the Deutsches Elektronen-Synchrotron (DESY) in Hamburg, Germany. It contains two undulator beamlines, FLASH1 and FLASH2, driven by the same linear accelerator. While FLASH1 consists of fixed-gap undulator modules, FLASH2 is equipped with variable-gap undulator modules. The variable-gap feature enables the simultaneous operation of FLASH1 and FLASH2 at different wavelengths [2]. It also enables the implementation of undulator tapering in FLASH2.
Undulator tapering involves the variation of the undulator parameter K as a function of the distance z along the undulator line, for the purpose of enhancing the radiation power (and hence the efficiency) of the FEL. This has been demonstrated empirically in x-ray FELs, such as LCLS [3] and SACLA [4]. In order to maximize the enhancement of radiation power, the taper profile K(z) needs to be properly optimized.
Present-day imaging experiments at x-ray FELs call for an increased number of photons within a shorter pulse duration [5,6]. To meet the stringent demand on the radiation power, the theory of taper optimization has been revisited in recent years. In Ref. [7], an important step is made towards the formulation of a universal taper law. In Refs. [8,9], taper optimization methods based on the classic Kroll-Morton-Rosenbluth (KMR) model [10] are demonstrated in numerical simulations. In Refs. [11,12], a multidimensional optimization method is performed in numerical simulations, whereby the optimal taper profile K(z) is obtained by scanning through a parameter space comprising the taper order (such as linear and quadratic), the taper start point, the taper amplitude etc.
The multidimensional optimization approach is relatively straightforward. Guided by the theoretical studies, this approach is implemented empirically in FLASH2 at a wavelength of 44 nm, and the results are presented in this article. The empirical results of the taper optimization are then compared with the corresponding numerical simulations. The agreement and discrepancies between the empirical and simulation results are analyzed. The article concludes by excluding a number of otherwise possible causes of the discrepancies.
Machine Parameters
FLASH2 contains a total of 12 undulator modules. Between every two adjacent modules, there is a drift section for beam focusing, trajectory correction, phase shifting, diagnostics etc. Table 1 shows the known machine parameters. For machine parameters not listed in Table 1, the nominal design values [1] are used.
Taper Optimization Scheme
Each of the 12 undulator modules (m = 1, 2, ..., 12) is set to an undulator parameter K m . Within each module, the undulator parameter is uniform. The taper profiles considered in this empirical study are defined by three parameters: the taper order d, the start module n and the taper amplitude ∆K/K. These taper profiles are given by the ansatz

K m = K for m = 1, . . . , n − 1,
K m = K [ 1 − (∆K/K) ((m − n + 1)/(12 − n + 1))^d ] for m = n, . . . , 12. (1)

In Eq. (1), K is the initial undulator parameter, in resonance with the initial energy of the electron beam. The undulator parameter remains K from modules 1 to n − 1, and decreases in steps from module n onwards. The taper order d equals 1 for linear tapering, and 2 for quadratic tapering. The taper amplitude ∆K/K is defined such that the undulator parameter of the last module is

K 12 = (1 − ∆K/K) K. (2)

Multidimensional optimization is performed by scanning d, n and ∆K/K empirically for the highest final radiation energy. The same type of multidimensional optimization in numerical simulations is presented in Refs. [11,12].
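For concreteness, the stepwise profile of Eq. (1) is easy to tabulate. The following Python sketch normalizes the last module to K 12 = (1 − ∆K/K)K as in Eq. (2), with the module count 12 taken from FLASH2; for the optimal quadratic taper it reproduces the value K 6 ≈ 99.88% K quoted later in the text.

```python
def taper_profile(K, d, n, dKK, modules=12):
    """Stepwise taper ansatz: K_m = K for m < n, then a polynomial decrease of
    order d, normalized so that the last module reaches K * (1 - dKK)."""
    Ks = []
    for m in range(1, modules + 1):
        if m < n:
            Ks.append(K)
        else:
            Ks.append(K * (1.0 - dKK * ((m - n + 1) / (modules - n + 1)) ** d))
    return Ks

# Optimal profiles found at FLASH2: linear (d=1, n=7, 4%), quadratic (d=2, n=6, 6%).
lin = taper_profile(1.0, d=1, n=7, dKK=0.04)
quad = taper_profile(1.0, d=2, n=6, dKK=0.06)
for prof, dKK in ((lin, 0.04), (quad, 0.06)):
    assert abs(prof[-1] - (1.0 - dKK)) < 1e-12          # Eq. (2): K_12 = (1 - dKK) K
    assert all(a >= b for a, b in zip(prof, prof[1:]))  # non-increasing profile
assert lin[:6] == [1.0] * 6 and quad[:5] == [1.0] * 5   # untapered head
assert abs(quad[5] - 0.9988) < 5e-5  # K_6 = 99.88% K_1 for the optimal quadratic taper
```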
Phase Shifter Configuration
In the drift section between every two undulator modules, there is a phase shifter for the proper matching of the phase between the electron beam and the optical field. The phase shifters are characterized in Ref. [13]. The required phase shift in each drift section depends solely on the undulator parameter of the preceding undulator module. The phase shifts are implemented automatically by a baseline procedure to ensure constructive interference between the optical fields emitted before and after each drift section. The procedure also accounts for the phase advance caused by the fringe fields at the two ends of each undulator module.
Radiation Energy Measurement
The lasing of FLASH2 takes place through the process of self-amplified spontaneous emission (SASE). The energy of a radiation pulse is measured with a micro-channel plate (MCP) detector [14], located downstream after the 12 undulator modules. The MCP detector offers a relatively high accuracy over a dynamic range of radiation intensities. To account for the shot-to-shot variability, each energy measurement is averaged over about 100 pulses.

Figure 1: Empirical data. The final pulse energy is plotted as a function of the taper amplitude ∆K/K for (a) linear tapering (d = 1) and (b) quadratic tapering (d = 2). The blue solid curve, red dashed curve and yellow dotted curve correspond respectively to start modules n = 6, 7 and 8.

Figure 2: Empirical data. The evolution of the (a) optical pulse energy and (b) input undulator parameter along the undulator line. The undulator parameter is normalized to the initial value. The blue solid curve, red dashed curve and yellow dotted curve correspond respectively to no taper (∆K/K = 0), the optimal linear taper (n = 7, ∆K/K = 4%) and the optimal quadratic taper (n = 6, ∆K/K = 6%).
The gas-monitor detector (GMD) [15], which is also located downstream after the 12 undulator modules, measures the optical pulse energy in parallel. The GMD reading is used as a cross check.
With all the 12 undulator modules engaged, the MCP and GMD measure the final pulse energy. To examine the evolution of the pulse energy along the undulator line, it is necessary to measure the intermediate pulse energy upstream. To measure the pulse energy immediately after an upstream undulator module, the gaps of all subsequent modules are opened, so that the optical pulse propagates towards the detectors without further interacting with the electron beam. During the propagation, the optical pulse undergoes vacuum diffraction, and its transverse size can increase. So long as the detectors collect the signal of the entire optical pulse, the pulse energy remains unchanged.
Empirical Results
The final optical pulse energy is measured for different taper profiles given by Eq. (1). The measurement is done for taper orders d = 1, 2 and for start modules n = 6, 7, 8. The results are shown in Fig. 1. Each data point in Fig. 1 is obtained with the MCP detector, and is the average over 140 ± 50 pulses. The error bar indicates the standard deviation of the MCP readings. Among all the taper profiles considered in Fig. 1, the optimal linear taper occurs at n = 7 and ∆K/K = 4%, whereas the optimal quadratic taper occurs at n = 6 and ∆K/K = 6%.
For the optimal linear taper, the optimal quadratic taper and no taper, the intermediate pulse energies are measured. The evolution of the pulse energy along the undulator line is shown in Fig. 2(a). In the absence of tapering, the saturation of pulse energy is reached in module 8 [see solid curve in Fig. 2(a)]. In other words, the start modules (n = 6, 7, 8) considered in Fig. 1 are in the vicinity of the initial saturation point.
Simulation Parameters
The empirical results are compared with numerical simulation, after the experiment has been completed. The simulation is performed using the three-dimensional and time-dependent simulation code GENESIS [16], with parameter values as close as possible to the empirical ones (see Table 1). Parameters not specified in Table 1 are assumed to have the nominal values shown in Table 2.
Parameter                   Symbol        Value
Peak current                I 0           1.5 kA
RMS bunch length            σ z           24 µm
RMS energy spread           σ γ m e c 2   0.5 MeV
Normalized emittance        ε x,y         1.4 mm mrad
Average of beta function    β x,y         6 m

In the simulation, the initial values of the optical functions and the quadrupole strengths are chosen self-consistently to give the desired average beta value, independent of the values used in the experiment.
Simulation Results
The same multidimensional optimization is performed in simulation. Using Eq. (1) as the ansatz, the parameters d, n and ∆K/K are scanned for the highest final radiation energy. The results are shown in Fig. 3. Among all the taper profiles considered in Fig. 3, the optimal linear taper occurs at n = 7 and ∆K/K = 4%, whereas the optimal quadratic taper occurs at n = 6 and ∆K/K = 6%. For the optimal linear taper, the optimal quadratic taper and no taper, the simulated pulse energy evolutions along the undulator line are shown in Fig. 4(a). The corresponding taper profiles are shown in Fig. 4(b) for reference.
Comparing Empirical and Simulation Results
Comparing Fig. 1 (empirical) and Fig. 3 (simulation), the optimal taper profiles are consistent. In both cases, the optimal linear taper occurs at n = 7 and ∆K/K = 4%, and the optimal quadratic taper occurs at n = 6 and ∆K/K = 6%. Figs. 1 and 3 also show good agreement in the overall trend for the final optical pulse energy E. In both cases, the overall trend for linear tapering (d = 1) is E(n = 7, ∆K/K) > E(n = 6, ∆K/K) > E(n = 8, ∆K/K), whereas the overall trend for quadratic tapering (d = 2) is E(n = 6, ∆K/K) > E(n = 7, ∆K/K) > E(n = 8, ∆K/K).
However, Figs. 1 and 3 show disagreement in terms of the absolute pulse energies. The range of pulse energies is generally higher in the simulation than in the experiment.
Next, the pulse energy evolution along the undulator line is compared between simulation [see Fig. 4(a)] and experiment [see Fig. 2(a)]. In both cases, the pulse energy remains in the order of 1 µJ before module 5, and exceeds the 10-µJ threshold in module 5. In the absence of tapering, the initial saturation point is situated around module 8 in both cases (see solid curves). With the optimal linear and quadratic tapers, final saturation is reached within the 12 undulator modules in both simulation and experiment, but occurs earlier in the experiment than in the simulation (see dashed and dotted curves).
In the experiment, the optimal linear taper and the optimal quadratic taper yield almost identical final pulse energy. But in the simulation, the final pulse energy for the optimal quadratic taper is 1.2 times higher than that for the optimal linear taper.
In the experiment, the enhancement factor is E(optimal taper)/E(no taper) = 1.5.
But in the simulation, the enhancement factor is E(optimal taper)/E(no taper) = 3.9, which is 2.6 times higher than that in the experiment.
Discussion on the Taper Start Point
In both the simulation and empirical results, the optimal linear taper starts from module 7, while the optimal quadratic taper starts from module 6. The reason for this difference in the optimal start point is that the undulator parameter decreases much more slowly at the beginning of the quadratic taper. This is seen in Figs. 2(b) and 4(b). In module 6 from which the quadratic taper starts, the undulator parameter K 6 is effectively identical to the initial value K 1 , as K 6 = 99.88% × K 1 ≈ K 1 . It is in module 7 where the undulator parameter starts to show a significant difference from the initial value. In other words, the optimal quadratic taper starts effectively from module 7, the same module from which the optimal linear taper starts.
Refs. [7,17] suggest that the optimal taper start point is two gain lengths before the initial saturation point. In one-dimensional theory, the gain length is given by

L g = λ u /(4π√3 ρ),

where

ρ = [ I 0 K² f B ² / (16 I A γ³ σ x ² k u ²) ]^(1/3) (3)

is the dimensionless Pierce parameter, I A = m e c³/e = 17.045 kA is the Alfvén current, γ is the Lorentz factor of the electrons, σ x is the rms radius of the electron beam, k u = 2π/λ u is the undulator wavenumber, and f B = J 0 (ξ) − J 1 (ξ) is the Bessel factor for planar undulators, with ξ = K²/(4 + 2K²). With the parameters in Tables 1 and 2, the Pierce parameter is ρ = 3.51 × 10⁻³, and the gain length is L g = 0.41 m. Thus, the optimal taper start point is predicted to be 2L g = 0.82 m before the initial saturation point, excluding the length of the drift section between undulator modules. If we assume that the precise initial saturation point is at the beginning of module 8, then the optimal taper start point should lie within module 7. This rough prediction agrees with the simulation and empirical results.
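The quoted numbers can be reproduced from the listed beam parameters. In the sketch below, γ and K are not taken from the excerpted tables: they are derived from σ x = sqrt(ε β/γ) and the resonance condition at 44 nm, assuming the FLASH2 undulator period λ u = 31.4 mm; the Bessel factor is evaluated by a truncated power series.

```python
import math

I0, IA = 1500.0, 17045.0       # peak current, Alfven current [A]
eps_n, beta = 1.4e-6, 6.0      # normalized emittance [m rad], average beta [m]
sigma_x = 82e-6                # rms beam radius [m]
lam, lam_u = 44e-9, 31.4e-3    # radiation wavelength, undulator period [m] (assumed)

gamma = eps_n * beta / sigma_x**2                    # from sigma_x = sqrt(eps_n*beta/gamma)
K = math.sqrt(2 * (2 * gamma**2 * lam / lam_u - 1))  # resonance condition

# Bessel factor f_B = J0(xi) - J1(xi) via truncated power series (xi < 1).
xi = K**2 / (4 + 2 * K**2)
j0 = sum((-1)**m * (xi / 2)**(2 * m) / math.factorial(m)**2 for m in range(8))
j1 = sum((-1)**m * (xi / 2)**(2 * m + 1) / (math.factorial(m) * math.factorial(m + 1))
         for m in range(8))
fB = j0 - j1

k_u = 2 * math.pi / lam_u
rho = (I0 * K**2 * fB**2 / (16 * IA * gamma**3 * sigma_x**2 * k_u**2)) ** (1 / 3)
Lg = lam_u / (4 * math.pi * math.sqrt(3) * rho)

# Consistent with the values quoted in the text: rho ~ 3.5e-3, Lg ~ 0.41 m.
assert 3.3e-3 < rho < 3.7e-3
assert 0.38 < Lg < 0.44
```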
Relating Optimal Taper Profiles to the KMR Model
The Kroll-Morton-Rosenbluth (KMR) model [10] is a theoretical analysis of undulator tapering in FELs based on a one-dimensional relativistic Hamiltonian formulation. In Refs. [8,9], the KMR model is used as a method to optimize FEL taper profiles in numerical simulations. After choosing the resonant phase ψ R (z), the taper profile K(z) is computed from the differential equation

dK(z)/dz = −(2λ/λ u ) (e E 0 (z) f B /(m e c²)) sin ψ R (z), (4)

where E 0 is the on-axis field amplitude and z is the position along the undulator line. With a constant ψ R , the optimization is known as the ordinary KMR method. With a variable ψ R which increases gradually from zero, the optimization is known as the modified KMR method. With the simulation results at hand, the evolution of the resonant phase ψ R along the undulator line can be back-calculated from Eq. (4). This back-calculation requires the taper profile K(z) [see Fig. 4(b)] and the field amplitude evolution E 0 (z) [see Fig. 5(a)] as inputs.
Carrying out this back-calculation for the optimal linear taper, the optimal quadratic taper and no taper, the resulting ψ R (z) functions are shown in Fig. 5(b).
The optimal linear and quadratic tapers start from module 7 and module 6, respectively. Before the taper starts, ψ R = 0 [see dashed and dotted curves in Fig. 5(b)]. This is expected, as dK/dz = 0 implies ψ R = 0 according to Eq. (4). For the same reason, in the absence of any tapering, ψ R remains zero at all times [see solid line in Fig. 5(b)].
When the optimal linear taper starts in module 7, ψ R increases abruptly from 0 to 12 • , and remains almost constant afterwards [see dashed curve in Fig. 5(b)]. When the optimal quadratic taper starts in module 6, ψ R increases gradually and monotonically from 0, until it reaches a value of 36 • in the final module [see dotted curve in Fig. 5(b)]. The ψ R (z) function for the optimal linear taper resembles one used in the ordinary KMR method, whereas the ψ R (z) function for the optimal quadratic taper resembles one used in the modified KMR method.
General Remarks
The empirical and simulation results are in good agreement in terms of: • the (n, ∆K/K) values for the optimal linear and quadratic tapers; • the overall trend in the plots of the final energy E versus ∆K/K (see Figs. 1 and 3); and • the module in which the exponential gain crosses the 10-µJ threshold (see Figs. 2 and 4).
However, there are three main discrepancies between the empirical and simulation results: • In the parameter space (d, n, ∆K/K) considered, the E range is generally lower in the experiment than in the simulation (see Figs. 1 and 3).
• The enhancement factor E(optimal taper)/E(no taper) is 3.9 in the simulation, but only 1.5 in the experiment.
• With the optimal linear and quadratic tapers, final saturation occurs earlier in the experiment than in the simulation (see Figs. 2 and 4).
The exact causes of these discrepancies are not known. Yet, it is possible to exclude a number of otherwise possible causes, such as the shot-to-shot variability, drift of the machine and wakefield effects. These are addressed in the upcoming subsections.
The discrepancies in question can also be caused by incorrect assumptions of parameter values. For the simulation, the nominal FLASH2 parameter values in Table 2 are assumed. The assumed nominal values in the simulation can be different from the unknown actual values in the experiment.
As illustrated in the sensitivity study in Ref. [18], a slight change in the emittance, energy spread or peak current can have a huge impact on the optimized radiation power of a tapered FEL. In other words, if the actual emittance, energy spread or peak current is worse than assumed, then the optimized radiation energy will be lower than expected. This will, in turn, influence the enhancement factor. This can possibly explain the discrepancies in question. However, the proposition that the emittance, energy spread or peak current is worse than assumed will be disproved in the following subsections.
Shot-to-Shot Variability
In the empirical results ( Figs. 1 and 2), the shot-to-shot fluctuations are accounted for by the error bar, which indicates the standard deviation of many shots. All the error bars are within ± 23 µJ, which is too small to account for the discrepancies between the simulation and empirical results.
Drift of the Machine
Consider two scenarios in particular, the optimal linear taper and no taper. Since the optimal linear taper only starts from undulator module 7, the two scenarios are identical before module 7. In principle, the two scenarios should yield the same pulse energy evolution before module 7. This is precisely the case in the simulation [see solid and dashed curves in Fig. 4(a)], which is the ideal case free of any drift. But in the empirical results [see solid and dashed curves in Fig. 2(a)], the two scenarios yield slightly different energies in modules 5 and 6. The energy differences can be partly attributed to the drift of the machine. But despite the drift, the energy differences are still within 24 µJ, which is too small to account for the discrepancies between the simulation and empirical results.
Emittance Underestimated
In order to disprove that the emittance is underestimated, the simulation is repeated with the normalized emittance slightly increased, from 1.4 mm mrad to 1.6 mm mrad. All other parameters in Tables 1 and 2 are kept unchanged. With the average beta functionβ x,y kept unchanged, this requires increasing the RMS beam radius σ x,y from 82 µm to 87 µm. The new simulation results are shown in Fig. 6.
If the emittance were indeed underestimated in the original simulation, then the new simulation (with an increased emittance) would show an improved agreement with the empirical results. But in the new simulation results, the overall trends of the final pulse energy E change. As seen in Fig. 6, the overall trend for linear tapering (d = 1) becomes E(n = 7, ∆K/K) > E(n = 8, ∆K/K) > E(n = 6, ∆K/K), whereas the overall trend for quadratic tapering (d = 2) becomes E(n = 6, ∆K/K) ≈ E(n = 7, ∆K/K) > E(n = 8, ∆K/K).
The overall trends actually become further off from those in the empirical results (see Fig. 1). Meanwhile, there is no improved agreement in the E range and in the enhancement factor. This disproves that the emittance is underestimated in the original simulation. Comparing the two sets of simulation results in Fig. 3 and 6, the increased emittance makes it more favourable to start the taper at a later point down the undulator line. The optimal quadratic taper in Fig. 3 starts from module 6, whereas that in Fig. 6 starts from module 7. As for linear taper, module 7 remains the most favourable start module. Yet, while module 8 is the least favourable of the three start modules considered in Fig. 3, it becomes the second most favourable in Fig. 6.
The shift in the optimal taper start point can be explained as follows. Refs. [7,17] suggest that the optimal taper start point z 0 is two gain lengths before the initial saturation point. In one-dimensional theory, this is given by

z 0 = z sat − 2L g , (5)

where the saturation length z sat is itself approximately proportional to the gain length L g , so that z 0 ∝ L g ∝ 1/ρ. With the definition of the Pierce parameter in Eq. (3), one can deduce that

z 0 ∝ 1/ρ ∝ (σ x ²/I 0 )^(1/3) ∝ (ε x,y β x,y /(γ I 0 ))^(1/3).

The proportionality implies that an increased emittance moves the optimal taper start point downstream. It also implies that a further increase in emittance would move the optimal taper start point further downstream, thus making the overall trends of E even further off from those in the empirical results.
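Numerically, the scaling of the gain length with emittance (ρ ∝ ε^(−1/3) at fixed β, γ and I 0 , since σ x ² ∝ ε) implies only a modest shift for the emittance increase simulated here, consistent with the optimal start point moving by about one module:

```python
# z0 ~ Lg ~ 1/rho ~ eps_n^(1/3): shift of the optimal taper start point when the
# normalized emittance is increased from 1.4 to 1.6 mm mrad, all else fixed.
eps_old, eps_new = 1.4e-6, 1.6e-6
shift = (eps_new / eps_old) ** (1 / 3)
# ~4.5% increase in Lg, i.e. the optimal taper start point moves downstream.
assert abs(shift - 1.0455) < 1e-3
```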
Peak Current Overestimated
In order to disprove that the peak current is overestimated, the simulation is repeated with the peak current slightly decreased, from 1.5 kA to 1.2 kA. In order to keep the known bunch charge in Table 1 unchanged, this requires increasing the RMS bunch length from 24 µm to 30 µm. All other parameters in Tables 1 and 2 are kept unchanged. The new simulation results are shown in Fig. 7.
Again, by decreasing the peak current in the simulation, the overall trends in the final pulse energy E become further off from those in the empirical results (see Fig. 1). This disproves that the peak current is overestimated.
Comparing the two sets of simulation results in Fig. 3 and 7, the decreased peak current also makes it more favourable to start the taper in a later undulator module. This agrees with the one-dimensional theoretical prediction from Eqs. (3) and (5) that z 0 ∝ I 0 ^(−1/3), that is, a lower peak current moves the optimal taper start point downstream.
Energy Spread Underestimated
In order to test whether the energy spread is underestimated, the simulation is repeated with the energy spread slightly increased, from 0.5 MeV to 0.7 MeV. All other parameters in Tables 1 and 2 are kept unchanged. The new simulation results are shown in Fig. 8.
Again, increasing the energy spread in the simulation moves the overall trends in the final pulse energy E even further from those in the empirical results (see Fig. 1). This disproves that the energy spread is underestimated.
Comparing the two sets of simulation results in Figs. 3 and 8, the increased energy spread also makes it more favourable to start the taper in a later undulator module. However, the shift in the optimal taper start point caused by the increased energy spread cannot be explained with the one-dimensional formulation, as was done for the emittance and the peak current. Nonetheless, the energy spread effects can be explained by similar arguments using the generalized formulation of Ming Xie [19].
Wakefield Effects
In Ref. [16], a simulation study on the effects of wakefields is performed on a case of the TTF-FEL, the predecessor of the FLASH1 and FLASH2 facilities. The machine parameters used in that study are of the same order of magnitude as those in Tables 1 and 2. The study identifies three major sources of wakefields, namely the conductivity, the surface roughness and the geometrical changes of the beam pipe along the undulator. The simulation on the TTF-FEL case shows that wakefields can reduce the saturation power of the FEL by three orders of magnitude, while keeping the saturation length almost unchanged. In principle, wakefield effects could explain the discrepancies between our empirical and simulation results for FLASH2. However, this can be disproved as follows.
In the empirical optimization of undulator tapering, the optimal taper profile which maximizes the final radiation energy is also the one which best compensates the energy loss due to wakefields [4]. Meanwhile, in the simulation which results in Fig. 3, wakefields are not considered. If wakefield effects were significant, then the optimal taper profiles should occur at very different (n, ∆K/K) values in the empirical and simulation results. But as seen in Figs. 1 and 3, this is not the case. In fact, the experiment and the simulation yield the same (n, ∆K/K) values for both the optimal linear taper and the optimal quadratic taper. This leads to the conclusion that wakefield effects are not significant in the experiment, and therefore do not account for the discrepancies in question.
Beam Trajectory Errors
The ideal trajectory of the electron beam is the central axis along the undulator line. But if the electron beam undergoes betatron oscillations as a whole, it deviates from the ideal trajectory and is subject to trajectory errors. These errors can be caused by a combination of many factors, which include
• the imperfect alignment of the undulator modules;
• the imperfect alignment of the quadrupole magnets; and
• the inclined injection of the electron beam into the undulator modules.
Trajectory errors can degrade the FEL performance through a number of mechanisms [20]. A complete analysis of all these mechanisms is not trivial. But in taper optimization studies, it is the undulator parameter K which characterizes a taper profile. The following discussion therefore focuses on the implications of trajectory errors for K.
The undulator parameter K is related to the magnetic field strength B_0 on the central axis of the undulator by the definition K = eλ_w B_0/(2π m_e c). In the presence of trajectory errors, the electron beam deviates from the central axis. Even if the on-axis field strength B_0 were perfectly accurate, the electron beam would still experience a field strength different from the desired value B_0, and hence an undulator parameter different from the desired value K. As derived in the Appendix, the effective undulator parameter is K_eff = K cosh(k_w y) [Eq. (9)], where y is the deviation of the electron beam from the central axis and k_w = 2π/λ_w is the undulator wavenumber. The magnetic field strength experienced by an electron beam with a trajectory error y in an undulator with parameter K is equivalent to that experienced by an on-axis electron beam in an undulator of parameter K_eff. The effective undulator parameter K_eff also leads to a phase shift error. As mentioned in Section 2.3, the required phase shift in the drift section depends solely on the K value of the preceding undulator module. Given an input value K, the phase shifter is automatically adjusted to ensure proper phase matching at the end of the drift section. But if the effective value is K_eff ≠ K, then a phase mismatch will occur. As derived in the Appendix, this phase mismatch δφ [Eq. (10)] is determined by the deviation δK = K_eff − K and the drift section length L_D, which is 800 mm in FLASH2. The simulation is now repeated with the K_eff and δφ associated with a trajectory error of y = 250 µm, calculated from Eqs. (9) and (10). With a trajectory error of y = 250 µm, the difference between K_eff and K becomes comparable to the Pierce parameter ρ, and is therefore significant. The new simulation results are shown in Fig. 9. Again, the overall trends in the final pulse energy E deviate even further from those in the empirical results (see Fig. 1).
Thus, the K_eff and δφ associated with a trajectory error of y = 250 µm cannot account for the discrepancies between the empirical and simulation results.
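The magnitude of these errors can be estimated from the Appendix formulas. In the sketch below, K_eff = K cosh(k_w y) is taken from the Appendix, while the drift phase advance φ = k_w L_D/(1 + K²/2) and its first-order error are assumed forms consistent with the surrounding discussion; the undulator period λ_w = 31.4 mm and K = 2.0 are illustrative values, not taken from the paper's tables.

```python
import math

def k_eff(K, y, lambda_w):
    # Effective undulator parameter for a vertical trajectory offset y
    # (Appendix): K_eff = K * cosh(k_w * y)
    k_w = 2 * math.pi / lambda_w
    return K * math.cosh(k_w * y)

def phase_error(K, y, lambda_w, L_D):
    # First-order phase mismatch in the drift section, assuming the
    # drift phase advance phi = k_w * L_D / (1 + K**2/2):
    # delta_phi ~ -k_w * L_D * K * dK / (1 + K**2/2)**2
    k_w = 2 * math.pi / lambda_w
    dK = k_eff(K, y, lambda_w) - K
    return -k_w * L_D * K * dK / (1 + K**2 / 2) ** 2

lambda_w = 31.4e-3  # undulator period [m] (illustrative assumption)
K = 2.0             # undulator parameter (illustrative assumption)
L_D = 0.8           # drift section length [m] (from the text)
y = 250e-6          # trajectory error [m] (from the text)

dk_rel = k_eff(K, y, lambda_w) / K - 1
print(f"relative K error dK/K: {dk_rel:.2e}")  # ~1e-3, comparable to rho
print(f"phase error: {math.degrees(phase_error(K, y, lambda_w, L_D)):.1f} deg")
```

With these assumed values, δK/K comes out at the 10^-3 level, i.e. comparable to a typical Pierce parameter, in line with the significance argument in the text.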
Combination of Different Factors
In the preceding discussions, the different possible causes of the discrepancies in question are considered separately. In the following, combinations of these factors will be discussed.
Shot-to-shot fluctuations and the drift of the machine can each affect the measured optical pulse energy by about 20 µJ. The combined effect is then 40 µJ, which is still too small to account for the discrepancies in question.
The emittance, the peak current and the energy spread have been considered individually. As discussed in Sections 4.4-4.6, if any of these three parameters is worse than assumed, then the optimal taper start point z_0 will be shifted downstream [see e.g. Eqs. (6) and (7)]. From this one can deduce that if all three (or at least two of the three) parameters are worse than assumed, then the optimal taper start point z_0 will be shifted even further downstream. This will, in turn, make the overall trends of the final pulse energy E deviate even further from those in the empirical results. Thus, the discrepancies between the simulation and empirical results cannot be explained by the combination of an underestimated emittance, an overestimated peak current and an underestimated energy spread.
There are no indications that the three parameters are much different from their design values. But in principle, one could consider scenarios where one parameter is worse than assumed while another is better than assumed. One example examined in numerical simulation is the scenario where the normalized emittance is halved while the energy spread is doubled (results not shown). The resulting range of optical pulse energies becomes closer to that in the experiment. Yet, the overall trends of the final pulse energy E, as well as the (n, ∆K/K) values of the optimal tapers, deviate further from those in the experiment.
Even though there are possible explanations for some of the discrepancies between the empirical and simulation results, no single explanation accounts for all the differences.
Conclusion
A multidimensional optimization method has been implemented empirically at FLASH2 to optimize the taper profile for maximum radiation energy. The empirical results have been compared with simulations.
In the empirical study, the taper profile is characterized by the taper order d, the start module n and the taper amplitude ∆K/K. For the optimal linear (d = 1) and quadratic (d = 2) tapers, the evolution of the optical pulse energy along the undulator line was examined.
The empirical results were compared with the corresponding results of numerical simulation. The two sets of results show good agreement in terms of the overall trend in the variation of the final pulse energy E with ∆K/K. They also show good agreement for the optimal linear and quadratic tapers regarding the start module (n), the taper amplitude (∆K/K) and the exponential gain profile. However, there are discrepancies in terms of the general range of pulse energies, the enhancement factor from tapering, as well as the final saturation points for the optimal tapers.
Possible causes of the discrepancies have been examined, and a number of them excluded: emittance, energy spread and peak current deviations; shot-to-shot variation; the drift of the machine; wakefield effects; as well as the systematic K and phase shift errors associated with a beam trajectory error.
Remaining factors are mainly (i) a poor overlap between the electron beam and the optical mode, caused by the misalignment and mismatch of the electron optics; and (ii) phase mismatch caused by random errors in the phase shifters. These remaining factors need to be investigated in more detail. Further studies in numerical simulations and empirical measurements are planned for the future.

Appendix

To examine the variation of B_y along the y-axis, we set z = 0 and obtain B_y(y, 0) = B_0 cosh(k_w y).
In other words, if the electron beam has a trajectory error of y, then it experiences a field B_y(y, 0) as given by Eq. (18). Analogous to Eq. (8), the effective undulator parameter can be defined as K_eff(y) ≡ [eλ_w/(2π m_e c)] B_y(y, 0) = [eλ_w/(2π m_e c)] B_0 cosh(k_w y) = K cosh(k_w y).
A plot of K_eff versus y is shown in Fig. 10(a). Note that K_eff(y) > K for all y ≠ 0, meaning that any trajectory error in the y-direction effectively increases the undulator parameter above the desired value K.
The difference between the effective undulator parameter K_eff and the desired value K can be expressed as δK = K_eff − K = K[cosh(k_w y) − 1].
This difference δK in an undulator module, in turn, leads to a phase mismatch in the drift section thereafter. In the drift section, there is a phase advance due to the speed difference between the electron beam and the radiation emitted in the preceding undulator module. For a drift length L_D after an undulator module with parameter K, this phase advance is [13] φ = k_w L_D/(1 + K²/2). The phase shifter in the drift section is configured to perform automatic phase matching for the φ associated with the input value K. Thus, the difference δK causes a phase shift error of δφ = −k_w L_D K δK/(1 + K²/2)² to first order. The absolute phase error |δφ| is shown in Fig. 10(b) as a function of y. In this discussion, the additional phase advance due to the fringe fields at the two ends of an undulator module is not considered.
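The first-order error formula can be verified numerically. The check below assumes the drift phase advance φ(K) = k_w L_D/(1 + K²/2) (an assumed form, since the extracted equation is truncated) and compares its analytic derivative against a central finite difference; it also confirms the small-offset expansion cosh(x) ≈ 1 + x²/2 underlying δK. The undulator period used for k_w is illustrative.

```python
import math

K_W = 2 * math.pi / 31.4e-3  # undulator wavenumber [1/m] (illustrative period)
L_D = 0.8                    # drift length [m]

def phi(K):
    # assumed drift-section phase advance
    return K_W * L_D / (1 + K**2 / 2)

def dphi_dK(K):
    # analytic derivative: -k_w * L_D * K / (1 + K**2/2)**2
    return -K_W * L_D * K / (1 + K**2 / 2) ** 2

K0, h = 2.0, 1e-6
numeric = (phi(K0 + h) - phi(K0 - h)) / (2 * h)
assert math.isclose(numeric, dphi_dK(K0), rel_tol=1e-6)

# small-argument expansion used for delta_K = K * (cosh(k_w*y) - 1)
x = 0.05  # roughly k_w * y for y = 250 um
assert math.isclose(math.cosh(x), 1 + x**2 / 2, rel_tol=1e-5)
print("checks passed")
```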
"year": 2016,
"sha1": "96f83f36114ae3540f380a7f0d593c7c0db17d65",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d8863f4a54717a03dfc0ae58b7fd09b8df0f0478",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
Postmastectomy Functional Impairments
Purpose of Review
This narrative review aims to offer a thorough summary of functional impairments commonly encountered by breast cancer survivors following mastectomy. Its objective is to discuss the factors influencing these impairments and explore diverse strategies for managing them.
Recent Findings
Postmastectomy functional impairments can be grouped into three categories: neuromuscular, musculoskeletal, and lymphovascular. Neuromuscular issues include postmastectomy pain syndrome (PMPS) and phantom breast syndrome (PBS). Musculoskeletal problems encompass myofascial pain syndrome and adhesive capsulitis. Lymphovascular dysfunctions include lymphedema and axillary web syndrome (AWS). Factors such as age, surgical techniques, and adjuvant therapies influence the development of these functional impairments.
Summary
Managing functional impairments requires a comprehensive approach involving physical therapy, pharmacologic therapy, exercise, and surgical treatment when indicated. It is important to identify the risk factors associated with these conditions to tailor interventions accordingly. The impact of breast reconstruction on these impairments remains uncertain, with mixed results reported in the literature.
Introduction
Breast cancer is the most frequently diagnosed cancer on a global scale [1,2]. It is projected that the United States will experience approximately 300,590 newly reported cases in 2023, with an overwhelming majority affecting females [1]. Surgical intervention is the widely adopted and established standard of care for managing most early-stage breast cancers, offering diverse options such as breast-conserving surgery, unilateral or bilateral mastectomy, and choices regarding immediate, delayed, or no reconstruction [3]. In cases where the tumor is large relative to the breast size, there are multiple tumors in different areas of the breast, or if the individual has previously undergone radiation therapy to the breast, mastectomy is the accepted practice [3]. Recent statistics from 2020 emphasize a substantial proportion of women diagnosed with early-stage breast cancer in the United States who underwent bilateral mastectomy. Among women aged 31 to 40 years, 33.0% underwent this procedure, while for those aged 30 or younger, the percentage was even higher at 39.9% [4]. Age plays a significant role in the choice of mastectomy as a treatment option, with younger women more likely to opt for this procedure compared to older individuals [4].
Although bilateral mastectomy is an effective treatment option, many patients encounter both short-term and long-term postmastectomy sequelae [3]. These include various neuromuscular, musculoskeletal, and lymphovascular issues, each with distinct characteristics and implications for patient well-being [5]. Neuromuscular issues involve the nerves and/or muscles in the affected area. Nerve damage during surgery or the removal of lymph nodes can result in symptoms such as pain, numbness, tingling, or weakness [3,5,6]. Common neuromuscular conditions include postmastectomy pain syndrome (PMPS), which involves chronic pain persisting beyond the expected healing period, and phantom breast syndrome (PBS), which describes the perception of pain or sensations in the breast area following mastectomy [3, 5, 7••]. Musculoskeletal issues encompass the bones, joints, and surrounding connective tissue. These conditions can manifest in various ways, including myofascial pain syndrome, which involves the development of trigger points or muscle "knots" that cause both localized and referred pain [3, 5, 7••]. Additionally, adhesive capsulitis, widely recognized as frozen shoulder, is a well-documented condition that causes stiffness and restricted mobility in the shoulder joint [3, 5, 7••]. Lymphovascular impairments are associated with the lymphatic system and blood vessels. Lymphedema is a common consequence of lymph node removal during mastectomy and is characterized by swelling due to fluid buildup [3, 5, 7••, 8]. Another common lymphovascular condition is axillary web syndrome (AWS), also known as cording, where visible or palpable fibrous cords develop in the axilla, causing pain and limited movement [3,5,8]. Figure 1 provides an anatomical representation of functional impairments discussed in this review.
The prevalence of postmastectomy functional impairments can be increased by adjuvant therapies, such as radiation, chemotherapy, and endocrine treatments [3,9]. Moreover, the inclusion of sentinel lymph node biopsy (SLNB) and axillary lymph node dissection (ALND) during mastectomy, primarily performed for staging, can influence the likelihood of developing certain long-term sequelae [3,9]. Despite the potential for sequelae, mastectomy remains a favorable option overall. Recent studies suggest that, in comparison to lumpectomy, mastectomy has fewer postoperative side effects and is associated with less chronic pain [10].
With advancements in medical therapies leading to lower mortality rates and greater 5-year survival rates for breast cancer patients (90.8%, 95% CI 90.5% to 91.1%), enhancing the long-term quality of life (QoL) for this population is of utmost importance [2]. It is essential to identify and understand the functional impairments that may arise in breast cancer patients following mastectomy, as these conditions often hinder one's ability to carry out activities of daily living. Equally important is gaining knowledge about the diverse management strategies and interventions available to effectively address these impairments [5]. Notably, there can be an overlap between these categories, and some patients may experience concurrent issues [5]. By distinguishing between these categories, healthcare professionals can develop optimized treatment plans and support patients throughout their recovery journey, ultimately leading to an improved QoL [5].
Fig. 1 Anatomical representation of functional impairments
This article provides evidence-based resources to enhance understanding of the various functional impairments endured by mastectomy patients.
Postmastectomy Pain Syndrome (PMPS)
PMPS was first documented in the 1970s and continues to be an enduring functional impairment, affecting 20-68% of mastectomy patients [2,11,12]. It is characterized by persistent dull, burning, and aching sensations that affect the chest, axilla, and the arm on the side where the mastectomy was performed for a period of at least 3-6 months following surgery. However, the duration of this period may vary slightly [2,11,13]. The most common cause of PMPS is thought to be intercostobrachial nerve damage and subsequent neuroma formation from surgical dissection, with a higher likelihood in cases where mastectomies are performed alongside ALND [5, 7••, 13]. However, PMPS is also associated with injury to other regional peripheral nerves including the medial pectoral, lateral pectoral, thoracodorsal, long thoracic nerves, and intercostal nerves II-VI [11,13] (Fig. 1). Risk factors include younger age, higher body mass index (BMI), concurrent radiotherapy treatment, lack of support from family and friends, and belonging to racial/ethnic minorities [2,3,12]. In Wang et al.'s meta-analysis of 30 studies and 19,813 postmastectomy patients, it was found that the odds of PMPS increased with decreasing age. Specifically, for every 10-year decrease in age, the odds ratio was 1.36 (95% CI = 1.24-1.48) [14]. Though the exact reason remains unclear, it has been proposed that this inverse relationship may be due to increased pain receptor sensitivity and risk of nerve damage in younger individuals, as well as a greater likelihood of having a tumor with high histopathological grading, delayed diagnosis, and receiving a more aggressive treatment regimen [2].
Patients with persistent PMPS report significantly lower QoL compared to those in whom PMPS has resolved, which points to a need for survivorship and rehabilitation measures [2]. There is considerable evidence suggesting the effectiveness of physical therapy in treating pain associated with PMPS. Guidelines generally recommend initiating exercises as early as one day after surgery, with an initial emphasis on gentle range of motion (ROM) movements. Over a period of 6-8 weeks, the regimen progresses to include strengthening exercises, ultimately aiming to restore full ROM [2]. In their recent meta-analysis, Kannan et al. demonstrated statistically significant benefits of incorporating exercise to improve both pain and overall QoL for mastectomy-treated breast cancer patients. However, the type of exercise interventions and their specific parameters varied greatly among the reviewed trials, which included resistance training, land-based and water-based aerobic exercise, low-intensity walking, and stretching [11]. Other treatment strategies are also available for PMPS. Chappell et al.
revealed 10 major treatment modalities for PMPS in their systematic review. These included fat grafting, neuroma surgery, lymphedema surgery, nerve blocks and neurolysis, laser, antidepressants, neuromodulators, physical therapy, mindfulness-based cognitive therapy, and capsaicin [15•]. Autologous fat grafting stands out as a highly effective treatment modality for PMPS, supported by strong evidence from the reviewed studies [15•]. This is largely due to the regenerative properties of adipose tissue, which contains adipose-derived stem cells involved in secreting pro-survival cytokines and growth factors [16]. In addition, both amitriptyline and venlafaxine, two commonly used antidepressants, have demonstrated significant effectiveness in reducing PMPS-associated pain [17]. For those who choose not to take medication or do not benefit from it, peripheral blocks offer an effective alternative [18] (Table 1). Given the wide range of treatment options available, the choice of therapy for PMPS should ultimately be based on personal preference and individual circumstances, with an emphasis on adopting a multimodal and multidisciplinary approach to effectively manage symptoms [15•].
Phantom Breast Syndrome (PBS)
PBS is characterized by the occurrence of pain or nonpainful sensations such as itching or tingling in the amputated breasts [3,19•] (Fig. 1). A crucial factor in distinguishing phantom breast pain from other forms of pain is the exclusive presence of pain in the absent breast, without any pain reported in the ipsilateral chest wall or arm [7••]. One treatment that has been applied to PBS is eye movement desensitization and reprocessing (EMDR) [21]. Grounded in Shapiro's Adaptive Information Processing model, which explains how traumatic experiences hinder natural information processing and contribute to psychological disorders, EMDR utilizes techniques such as eye movements or tapping to facilitate the processing of distressing memories, reduce their emotional impact, and promote healing [21]. Continuous paravertebral nerve blocks with ropivacaine have also demonstrated effectiveness in managing PBS symptoms in a randomized, placebo-controlled clinical trial [22]. Other treatment options include nerve stabilizers and analgesic agents [5] (Table 1). Further research is needed to explore preventive therapies and pain treatments for PBS, as this condition continues to affect the QoL of breast cancer survivors.
Nerve preservation in mastectomy is an emerging surgical technique aimed at preventing pain and restoring normal sensation in the breast. This procedure focuses on preserving or reconstructing nerves using allograft technologies to avoid nerve damage [23••]. Recent studies have shown promising results, with one study reporting preserved nipple/areolar complex sensation in 87% of breasts and no cases of dysesthesias or neuromas, indicating the potential of nerve-sparing mastectomies in preventing long-term pain and abnormal sensation [24].
Myofascial Pain Syndrome
Myofascial pain syndrome is a condition characterized by the presence of myofascial trigger points (MTrPs), leading to localized pain [7••]. It has been found to affect as many as 45% of breast cancer patients [25]. MTrPs are found within taut muscular bands in the myofascial tissues and usually elicit pain when compressed, stretched, or overloaded [7••, 25] (Fig. 1). These trigger points can develop after surgery, causing localized pain and tenderness, reduced ROM, and referred pain in specific referral patterns [7••]. Factors such as muscle fibrosis resulting from inflammation, fascial dysfunction, and heightened excitability of motor nerves contribute to myofascial pain syndrome, all of which can occur after surgery [15•, 25]. Interestingly, active MTrPs have also been found in several muscles of patients with PMPS [15•]. In postmastectomy patients, active MTrPs are commonly located in the muscles of the shoulder girdle, specifically in the latissimus dorsi, serratus anterior, pectoralis major, infraspinatus, and upper trapezius muscles [7••, 25].
In a cross-sectional study of 64 breast cancer patients, it was found that, when compared to an untreated control group of breast cancer patients, those who received a mastectomy or lumpectomy had a significantly greater proportion of active MTrPs. No statistical difference was noted between the two surgery groups; however, the location of the MTrPs differed. In the lumpectomy group, the pectoralis major and infraspinatus muscles had the most active MTrPs. In the mastectomy group, the pectoralis major and upper trapezius muscles showed the majority of active MTrPs [25].
Various treatment modalities have been identified for patients with myofascial pain syndrome. The main objective is to gradually alleviate tension in the affected regions, with emphasis on massage techniques and customized physical therapy interventions [25]. Multiple interventions have shown efficacy in managing pain and alleviating trigger points among breast cancer patients. A comprehensive physical therapy program incorporating exercises, targeted massage sessions, and techniques such as mobility exercises, stretching, strengthening, and myofascial release yielded notable reductions in neck and shoulder/axillary pain over an eight-week period [25]. Moreover, ultrasound-guided injections for trigger points in the internal rotator muscles of the shoulder have been shown to decrease pain intensity and improve shoulder ROM [25]. Additional approaches, including local anesthetic injections, dry needling, electrical stimulation, and acupuncture, have also demonstrated effectiveness in providing relief from chronic pain associated with trigger points [7••, 25].
Adhesive Capsulitis (Frozen Shoulder)
Adhesive capsulitis, commonly known as frozen shoulder, is characterized by pain and significant loss of both passive and active ROM in the glenohumeral joint [5, 7••, 26] (Fig. 1). Postmastectomy patients frequently experience shoulder morbidity, affecting anywhere from 1 to 68% of individuals [26]. Restricted ROM can arise due to inflammation and subsequent fibrosis leading to tightening of the glenohumeral joint [7••]. Adhesive capsulitis is often regarded as a self-limiting disorder that follows a typical progression through three distinct phases [27,28]. The first stage, also referred to as the painful freezing stage, lasts for 2 to 9 months. During this phase, individuals experience sharp, diffuse shoulder pain that tends to worsen at night, as well as a gradual increase in stiffness. The pain begins to lessen as adhesive capsulitis enters the second stage, the frozen stage, which normally lasts between 4 and 12 months, while stiffness and loss of ROM in the glenohumeral joint are at their highest. The third stage, sometimes known as the "thawing stage," involves a gradual regaining of ROM and can take anywhere between 5 months and 2 years to complete [27,28]. While adhesive capsulitis may resolve on its own, investigations suggest that a sizable fraction (20% to 50%) of patients have symptoms that last longer than two years [29]. Factors such as age (50 to 59 years), breast reconstruction, lymphedema, lymph node dissection, and aromatase inhibitor therapy may independently contribute to the risk of developing adhesive capsulitis [7••, 29]. Although mastectomy itself does not directly cause damage to the glenohumeral joint, the accompanying pain, tightness in the pectoral muscles, and changes in biomechanics can result in protective postures that place stress and tension on the joint capsule. This can lead to restricted mobility and the subsequent development of secondary adhesive capsulitis [29,30].
The treatment approach for adhesive capsulitis involves addressing both ROM improvement and pain management. While NSAIDs or acetaminophen can be used to treat initial pain, intra-articular corticosteroid injections administered directly into the glenohumeral joint have demonstrated great effectiveness in alleviating adhesive capsulitis-related pain and improving ROM in both the short and long term [7••, 31]. Further enhancements in treatment outcomes have been observed when these injections are combined with a home exercise program in the later stages of adhesive capsulitis that includes passive mobilization, stretching, and electrotherapy [7••, 31]. The inclusion of progressive banded strengthening exercises and scapular stabilization maneuvers has also been shown to improve shoulder ROM and overall QoL in postmastectomy patients [28]. Other therapies include hyaluronic acid injections, platelet-rich plasma injections, hydrodistention, extracorporeal shockwave therapy, low-level laser therapy, and calcitonin [32,33]. Surgical procedures, such as manipulation under general anesthesia or arthroscopic capsular release, may be explored after other therapies have failed [32] (Table 1).
Lymphedema
Lymphedema is characterized by limb swelling, heaviness, tightness, restricted mobility, and, in certain instances, pain resulting from impaired lymphatic system function [3, 7••] (Fig. 1). Breast cancer-related postmastectomy lymphedema is a well-recognized phenomenon, with an incidence ranging from 8 to 52% within the initial two years following surgery. Notably, approximately 75% of cases manifest within the first year [34]. The variation in incidence rates can be partially attributed to the lack of standardized criteria for defining and measuring lymphedema [35]. A recent retrospective analysis identified several key risk factors for lymphedema-related events occurring within two years after mastectomy. These factors include higher comorbidity levels at baseline, longer hospitalization duration, more recent mastectomy procedures, higher BMI, younger age, non-Asian race, and hypertension [36]. Both SLNB and ALND are linked to an increased risk of lymphedema, with around 5% of SLNB recipients and up to 50% of ALND patients experiencing this condition [36]. Furthermore, regional lymph node radiation has been extensively documented as a major risk factor for the development of lymphedema [35,36].
Cancer-related lymphedema has been associated with anxiety, depression, and low body confidence [35]. Several interventions have been developed to address postmastectomy lymphedema. Complete Decongestive Therapy (CDT) is considered the gold standard treatment for lymphedema, consisting of a two-phase approach [34, 37•]. The initial phase, known as the reduction phase, aims to decrease limb volume and alleviate symptoms. This is accomplished through various interventions, including manual lymphatic drainage (MLD), multiple-layer compression bandaging, exercise, and proper skin care [7••, 37•]. Once maximum reduction is achieved, the maintenance phase is initiated. During the maintenance phase, the primary objective is to sustain the reduced limb volume achieved in the reduction phase. This involves transitioning to compression garments, incorporating self-MLD techniques, engaging in regular exercise, and maintaining a diligent skin care regimen [37•]. The maintenance phase plays a crucial role in preserving the outcomes achieved during the reduction phase and preventing the recurrence or exacerbation of lymphedema symptoms [37•]. The use of pneumatic compression devices that apply intermittent pressure to the limbs in combination with CDT may further enhance the effectiveness of MLD [7••]. In more advanced stages, surgical procedures including debulking, lymphovenous anastomosis, and vascular lymph node transplantation have shown promise in reducing the severity of lymphedema [7••, 38] (Table 1).
Axillary Web Syndrome (AWS)
AWS, also known as cording, is characterized by the presence of a singular taut, narrow cord or multiple cords, approximately 1 mm wide, within the subcutaneous tissue of the axilla. These cords extend downwards, along the medial or medial-volar surface of the upper arm, and in certain instances, can also be observed along the lateral chest wall [3,5,39] (Fig. 1). The palpable cord tightens and causes pain, particularly during shoulder abduction, significantly limiting shoulder ROM [3,5,39]. AWS typically occurs 2-8 weeks after breast cancer surgery and resolves spontaneously within 3 months, although some cases can persist for years. Recent studies indicate that AWS can develop as well as recur within months to years after surgery [5,8,39]. The reported incidence of AWS varies widely, from 6 to 86%, partly due to misdiagnosis and confusion with scar tissue [8,39,40].
Although the pathogenesis is unclear, cording is believed to be caused by lymphatic vessel and tissue damage during procedures like SLNB and ALND, commonly performed alongside mastectomy [5,40]. ALND surgeries have a higher incidence of cording (36%-72%) compared to SLNB surgeries (11%-58%), and patients with a prior or concurrent mastectomy are at the highest risk of developing AWS [39]. Other factors associated with a higher incidence include lower BMI, younger age, higher education, frequent exercise, increased number of lymph nodes removed, extensive surgery, and adjunctive chemotherapy or radiation therapy [39,41]. AWS may also be associated with an increased risk of postmastectomy lymphedema, with patients experiencing AWS having a 44% higher likelihood of developing this breast cancer-related lymphedema [8].
Physical therapy plays a crucial role in AWS treatment, focusing on exercises to improve flexibility, strength, ROM, and abduction of the affected limb [41]. Licensed practitioners provide in-clinic treatments including myofascial release, soft tissue mobilization, cord manipulation, and stretching while the arm is abducted, specifically focusing on softening the cord [40,41]. During soft tissue mobilization, it is not uncommon for cords to spontaneously break [5]. Analgesics, NSAIDs, and proangiogenic drugs are used to manage pain, and when combined with physical exercise, analgesics may expedite recovery [41]. Surgical intervention is reserved for severe cases to remove fibrous cords, but it is generally not recommended due to the increased risk of edema [41] (Table 1).
Breast Reconstruction Considerations
The literature presents mixed evidence regarding the influence of breast reconstruction on functional impairments. While some studies indicate a heightened risk of impairments when reconstruction is performed alongside mastectomy, others report no significant increase in risk. Limited data are available that directly compare the rates of these impairments based on the type or timing of reconstruction [37•].
A recent systematic review by Guliyeva et al. suggests that implant-based breast reconstruction does not increase the risk of PMPS when compared to other surgical techniques or mastectomy alone [42]. Among the eleven publications included in the review, most reported no elevated risk of PMPS following implant-based reconstruction, and some studies even suggested a potential lower risk of chronic pain with this approach [42]. However, other studies suggest that tissue expander/implant-based reconstruction may increase the likelihood of PMPS [43]. Additionally, data indicate that both implant-based and autologous breast reconstruction techniques may reduce the risk of breast cancer-related lymphedema [37•].
Concerns have been raised regarding "breast implant illness," which refers to a constellation of symptoms patients attribute to their breast implants including fatigue, chest pain, hair loss, headaches, chills, photosensitivity, skin rashes, and persistent pain [44•]. Despite its popularity on social media, there is a lack of evidence supporting these claims. Extensive data, backed by the FDA, reaffirm the safety of silicone breast implants. Currently, no conclusive evidence exists to support the existence of "breast implant illness" [44•].
Abdominally-based autologous reconstruction is a commonly utilized technique for breast reconstruction, involving the use of abdominal tissue [3,45]. In the transverse rectus abdominis myocutaneous (TRAM) flap procedure, the breast is reconstructed using a portion of the rectus abdominis muscle, along with skin and fat from the lower abdomen. In contrast, the deep inferior epigastric perforator (DIEP) flap procedure preserves the abdominal muscles and utilizes only the skin and fat from the lower abdomen [45]. A prospective study by Roth et al. revealed increased pain after two years in patients who underwent TRAM/DIEP surgeries compared to those who had tissue expander/implant-based reconstruction [46]. A retrospective analysis conducted by Yang et al. observed that latissimus dorsi (LD) flap reconstruction resulted in a reduction in shoulder muscle strength, while implant-based and abdominally-based reconstruction did not have any significant impact on shoulder muscle strength [47]. Furthermore, studies have shown that LD reconstruction is associated with the highest occurrence of overall shoulder morbidity, followed by tissue expander/implant-based reconstruction, with the lowest rate observed among DIEP patients [48].
Further research is essential to comprehensively evaluate the potential risks associated with breast reconstruction, aiming to address patient concerns, alleviate anxiety, and facilitate informed decision-making. It is crucial for future studies to distinguish between different types and timing of reconstruction to provide more precise and tailored insights.
Conclusion
This article presents a comprehensive overview of the prevalent functional impairments encountered by breast cancer survivors undergoing mastectomies, along with the interventions designed to effectively mitigate these challenges. The key findings emphasize the widespread occurrence of postmastectomy functional impairments, encompassing neuromuscular, musculoskeletal, and lymphovascular complications. A thorough understanding of these categories is imperative for the development of tailored interventions and optimized treatment plans for patients, thereby improving QoL. Central to the management of functional impairments among postmastectomy individuals is the pivotal role played by cancer rehabilitation, coupled with other strategic interventions. This holistic approach encompasses a diverse array of therapeutic modalities, exercises, and support services. Its objective is to effectively address the physical, psychological, and functional challenges experienced by breast cancer survivors, thereby promoting their recovery, rehabilitation, and overall well-being.
Perinatal Mental Health Problems in Rural China: The Role of Social Factors
Background: Perinatal mental health is important for the well-being of the mother and child, so the relatively high prevalence of perinatal mental health problems in developing settings poses a pressing concern. However, most studies in these settings focus on the demographic factors associated with mental health problems, with very few examining social factors. Hence, this study examines the prevalence of the depressive, anxiety and stress symptoms among pregnant women and new mothers in rural China, and the associations between these mental health problems and social factors, including decision-making power, family conflicts, and social support. Methods: Cross-sectional data were collected from 1,027 women in their second trimester of pregnancy to 6 months postpartum in four low-income rural counties in Sichuan Province, China. Women were surveyed on symptoms of mental health problems using the Depression, Anxiety, and Stress Scale (DASS-21) and social risk factors. Multivariate logistic regression analyses were conducted to examine social risk factors associated with maternal mental health problems, with results reported as odds ratios (OR) and 95% confidence intervals (CI). Results: Among all respondents, 13% showed symptoms of depression, 18% showed symptoms of anxiety, 9% showed symptoms of stress, and 23% showed symptoms of any mental health problem. Decision-making power was negatively associated with showing symptoms of depression (OR = 0.71, CI: 0.60–0.83, p < 0.001) and stress (OR = 0.76, CI: 0.63–0.90, p = 0.002). Family conflict was positively associated with depression (OR = 1.53, CI: 1.30–1.81, p < 0.001), anxiety (OR = 1.34, CI: 1.15–1.56, p < 0.001), and stress (OR = 1.68, CI: 1.41–2.00, p < 0.001). In addition, social support was negatively associated with depression (OR = 0.56, CI: 0.46–0.69, p < 0.001), anxiety (OR = 0.76, CI: 0.63–0.91, p = 0.002), and stress (OR = 0.66, CI: 0.53–0.84, p < 0.001).
Subgroup analyses revealed that more social risk factors were associated with symptoms of anxiety and stress among new mothers compared to pregnant women. Conclusion: Perinatal mental health problems are relatively prevalent among rural women in China and are strongly associated with social risk factors. Policies and programs should therefore promote individual coping methods, as well as target family and community members to improve the social conditions contributing to mental health problems among rural women.
INTRODUCTION
A growing area of cross-disciplinary research has highlighted the importance of perinatal mental health for the well-being of mothers and children. Multiple studies have shown that perinatal mental health problems are a leading cause of maternal mortality among childbearing women due to higher risks of suicide (1,2). Poor mental conditions among perinatal women have also been associated with adverse obstetric outcomes (3)(4)(5). For example, Andersson et al. found that perinatal women with depression and anxiety symptoms had significantly increased nausea and vomiting, prolonged sick leave during pregnancy, and an increased number of visits to the obstetrician (3).
As a core part of the early childhood environment, perinatal mental health also plays a crucial role in early childhood development. Research has shown that perinatal mental health problems reduce the effectiveness of child-rearing activities and can contribute to multiple early childhood developmental problems, including impaired cognitive, social, and academic functioning (6,7). Children of mothers with mental health issues are also two to three times more likely to develop adjustment problems than children of mothers without mental health issues (8). Even as infants, children of depressed mothers are fussier, less responsive to facial and vocal expressions, more inactive, and have elevated levels of stress hormones compared to infants whose mothers are not depressed (9,10). Literature has also shown that poor mental health is associated with a lack of engaged parenting (7,11,12), which may explain why perinatal mental health disorders in caregivers are often associated with developmental delays among infants and young children (11,13,14).
Unfortunately, perinatal mental health problems are prevalent in developing settings. In comparison to the 10-13% rate for perinatal depression globally, the prevalence of perinatal depression in developing countries is as high as 20% (15,16). With the outbreak of COVID-19, perinatal mental conditions may have become even more prevalent in some settings due to social distancing, disruption of regular antenatal care, and financial difficulties (17). Given the already high prevalence of perinatal mental health problems, along with the documented consequences of mental health problems for the well-being of both mothers and children, a growing number of studies have pointed to perinatal mental health issues as an important area for global public health research (18,19). In particular, more research is needed to investigate underlying factors that may lead to perinatal mental health problems.
The existing literature describes several demographic and medical risk factors associated with maternal mental health problems in low- and middle-income countries (LMICs). Low levels of family wealth and low maternal education have both been associated with higher rates of perinatal mental health problems (20,21). Additionally, infant gender (in countries with a preference for males) and parental migration status (in countries where parents leave their children behind to outmigrate for work) may also be related to the mental health of women in LMICs (22,23). Perinatal health problems such as preterm birth or pregnancy complications also have been shown to be associated with higher rates of mental health problems (24)(25)(26)(27).
In comparison to the literature on demographic risk factors, however, fewer studies have examined the role of social risk factors in perinatal mental health. The studies that do exist, however, have identified significant links between social factors and maternal mental health. A study conducted in urban China, for example, found that mothers who receive more social support from family, friends or spouses tended to be at lower risk for stress or depression (27). Another study from rural Pakistan found associations between positive mental health outcomes and maternal autonomy and decision-making power over areas such as finances (14). The same study also found that relationship problems with the spouse or family were associated with worse mental health in mothers (14).
China is the largest developing country in the world, accounting for 15% of the global population. Despite this, little is known about how social factors shape perinatal mental health, especially in rural areas of China, which are home to over half of the country's population. The few studies that have examined mental health issues among rural women and caregivers in China found rates of depression between 23 and 28% and rates of anxiety between 21 and 33% (12,28). There is also evidence that poor mental health among mothers in rural areas of China is associated with lower levels of interactive parenting and lower levels of cognitive skills among children (12), pointing to long-term consequences of poor mental health beyond the mother's welfare. Unfortunately, no studies to date have examined the association between perinatal mental health and social risk factors in rural China, leaving a considerable knowledge gap.
The overall goal of this study is to investigate the prevalence and risk factors associated with perinatal mental health problems in rural China. Specifically, we pursue two objectives. First, we describe the prevalence of perinatal mental health problems (depression, anxiety and stress) among pregnant women and new mothers in rural China. Second, we identify associations between demographic and social risk factors and symptoms of perinatal mental health problems.
METHODS
This study was conducted in four nationally-designated rural poverty counties in Nanchong prefecture, Sichuan Province. In terms of GDP per capita, Sichuan province ranks 16th out of China's 31 provinces in 2020 (29), and can be considered a middle-income province. Nanchong prefecture, however, ranks relatively low in GDP per capita in Sichuan Province (15 out of 21), and the four study counties in Nanchong have been nationally-designated as poverty-stricken counties (30). The four rural counties are also majority Han ethnicity, the ethnicity that makes up 95% of China's population (31). Therefore, the study area can be considered relatively representative of poor, Han-majority rural areas in the typical Chinese province.
Sampling
The research team followed a three-step sample selection protocol. First, of the nine counties in Nanchong Prefecture, four nationally-designated poverty counties were selected for sampling. Second, within the sample counties, the research team selected sample townships. The sampling frame excluded nonrural townships and townships with fewer than 10,000 people. Of the remaining townships, 20 townships per county were randomly included in the study, totaling 80 townships.
Third, the research team selected sample households and participants. A list of all households in each sample township with pregnant women beyond their second trimester or infants under 6 months was obtained from the local county-level Maternal and Child Hospital. The research team aimed to recruit 25 eligible households per township. If the township had fewer than 25 eligible households, the sampling frame was expanded to include villages up to 60 minutes away from the township. Following this strategy, 1,296 households were sampled. The majority of households in the sample had only one eligible participant (a pregnant woman or an infant). In households with more than one eligible participant, one participant was selected at random. For the purposes of this study, 269 households with infants who were not primarily cared for by their birth mother were excluded. In our final analysis, we use cross-sectional data from 1,027 women, including 309 pregnant women and 718 new mothers.
Data Collection
Data were collected in November and December, 2019. Trained survey enumerators recruited from public health and medical programs at local universities administered one-on-one survey interviews with participants. All enumerators were native Mandarin speakers and used Mandarin during the training and surveys. Enumerators were supervised by members of the research team during the survey. The survey included three blocks: perinatal mental health (depression, anxiety, and stress), social risk factors (decision-making power, family conflict, and perceived social support), as well as demographic risk factors (characteristics of women, families, and infants).
Perinatal Mental Health
To measure the mental health of sample women, enumerators administered the Depression, Anxiety, and Stress Scale-21 (DASS-21), a 21-item short-form version of the DASS-42. DASS-21 was created by Lovibond and Lovibond, and has been validated in China (32,33). Participants were given a list of 21 statements (7 each for the depression, anxiety and stress subscales) and asked to rank how much the statement applied to them in the past week using a Likert-type scale from 0 = "never" to 3 = "almost always." Following DASS-21 scoring guidelines, scores for the depression, anxiety and stress subscales were calculated by summing all responses for a given subscale and multiplying the sum by 2. The possible scores for each subscale therefore range from 0 to 42. It is important to note that the resulting score is not a clinical diagnosis but rather a measure of the severity of depression, anxiety, or stress symptoms. The DASS-21 manual also assigns cutoff scores for each subscale, which indicate a relatively high severity of symptoms. Following the DASS-21 manual, women are considered symptomatic of a mental health issue if they scored above 9 on the depression subscale, above 7 on the anxiety subscale, and above 14 on the stress subscale. A series of studies examining maternal mental health in rural China to date have used DASS-21 as a measure of both the presence and the severity of mental health symptoms (12,28). In our study, the DASS-21 has strong reliability among participants, with a Cronbach's α of 0.824 for the depression subscale, 0.719 for the anxiety subscale, and 0.815 for the stress subscale.
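As a concrete illustration, the scoring rule described above (sum the seven items of each subscale, double the sum, compare against the cutoff) can be sketched in Python. The item-to-subscale assignment below follows the standard published DASS-21 key, which is not reproduced in the text, so it should be verified against the manual before any real use.

```python
# Sketch of DASS-21 subscale scoring as described in the text.
# Assumption: the item-to-subscale key below follows the standard
# DASS-21 manual (1-based item numbers); verify before real use.
DEPRESSION_ITEMS = [3, 5, 10, 13, 16, 17, 21]
ANXIETY_ITEMS = [2, 4, 7, 9, 15, 19, 20]
STRESS_ITEMS = [1, 6, 8, 11, 12, 14, 18]

# Cutoffs used in the study (symptomatic if score is strictly above):
CUTOFFS = {"depression": 9, "anxiety": 7, "stress": 14}


def dass21_scores(responses):
    """Sum the 7 items of each subscale and double the sum (range 0-42).

    `responses` is a list of 21 Likert ratings (0-3), items 1..21.
    """
    assert len(responses) == 21 and all(0 <= r <= 3 for r in responses)

    def subscale(items):
        return 2 * sum(responses[i - 1] for i in items)

    return {
        "depression": subscale(DEPRESSION_ITEMS),
        "anxiety": subscale(ANXIETY_ITEMS),
        "stress": subscale(STRESS_ITEMS),
    }


def symptomatic(scores):
    """Flag each subscale whose score exceeds the study's cutoff."""
    return {k: scores[k] > CUTOFFS[k] for k in scores}
```

For example, a respondent who answers 1 ("sometimes") to every item scores 14 on each subscale, which is above the depression and anxiety cutoffs but not the stress cutoff.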
Social Risk Factors
The survey assessed three social risk factors: decision-making power, family conflict, and social support. Decision-making power and family conflict were assessed using questions adapted from Shroff et al. (34) and Peterman et al. (35). Women were given eight topics on household decision-making (e.g., family meals, childcare, major purchases, etc.), and were asked to answer (1) whether they had a say in decision-making for this topic, and (2) whether or not there had been a disagreement on this topic in the last month. Supplementary Tables 1, 2 present the itemized responses for decision-making power and family conflict, respectively. Responses were summed to create raw measures of the woman's overall decision-making power and family conflict. Index scores were also created using exploratory factor analysis, which were used in the multivariate analyses.
To measure social support, enumerators administered the Multidimensional Scale of Perceived Social Support (MSPSS), a 12-item subjective assessment of social support created by Zimet et al. and previously validated in China (36)(37)(38). Women were given a list of statements that characterized the support they received from family, friends, and significant others (e.g., "I can talk about my problem with my family") and were asked to rank the statements on a Likert-type scale from 0 (strongly disagree) to 7 (strongly agree). The statements were grouped into three subscales representing family support, friend support, and significant other support, as well as a total social support score (see Supplementary Table 3). The total and subscale scores were calculated by averaging the responses to all questions. The original version of the MSPSS has very good internal reliability, with an α coefficient of 0.88 for the total scale, 0.87 for the family subscale, 0.85 for the friends subscale, and 0.91 for the significant others subscale (38). In addition, its test-retest reliabilities among mothers in our sample were 0.831, 0.849, and 0.801 for the family, friends, and significant others subscales, respectively.
Demographic Risk Factors
Data were collected on the individual and family characteristics of sample women, and new mothers were also surveyed on the characteristics of their infants. For individual characteristics, women were asked about their age and education level, whether they were originally from their current town or village, whether they had previously out-migrated for work, whether they planned to out-migrate in the future, whether this was their first pregnancy, and whether they had experienced previous miscarriages. Family characteristics included the husband's age and education, as well as a measure of family assets. To provide a quantifiable estimate of family assets, questions were asked about access to certain household items, such as tap water, computer, internet, a car, and more. A family asset index was then calculated using polychoric principal component analysis (39). Infant characteristics for new mothers were obtained from each infant's birth certificate and included gender, age in months, whether the infant was premature, and whether the infant had low birth weight. New mothers were also asked whether the child was delivered via vaginal birth or by cesarean section (C-section).
Statistical Analysis
Statistical analyses were performed using Stata 15.1. P-values below 0.05 were considered statistically significant. Multivariate logistic regressions of the associations between mental health problems and demographic and social risk factors were conducted among the full sample, as well as among pregnant women and new mothers separately. For the full sample and the pregnant women subgroup, the regressions controlled for the following potential confounders: age, education level, whether the woman was from the village, whether she had previously out-migrated, whether she planned to out-migrate, whether this was her first pregnancy, previous miscarriages, husband's education level, and family asset index score. For new mothers, additional controls were added for infant age, gender, whether the infant was delivered via vaginal birth, whether the infant was premature, and whether the infant had a low birth weight.

RESULTS

Table 1 presents the summary statistics for the demographic characteristics of respondents. Column 1 reports characteristics of the full sample. The average age of the sample was around 28 years, and about 40% of women had graduated from high school. Just over half (55%) of women were from the village they were living in. A majority of women (78%) had out-migrated for work before, but only 27% planned to out-migrate in the future. About 46% of husbands had completed high school. Among new mothers, the average age of infants was about 3 months, and 55% of sample infants were male. In addition, 44% were born via vaginal birth (the remainder by C-section), 4% of infants were born prematurely, and 3% had low birth weight. Columns 2 and 3 report the characteristics of pregnant women and new mothers, respectively, while Column 4 compares the differences between the two subgroups. The results show no significant differences in the characteristics of pregnant women and new mothers.
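The adjusted odds ratios and 95% confidence intervals reported throughout the results are obtained by exponentiating logistic-regression coefficients (which live on the log-odds scale) and the endpoints of their Wald-type confidence intervals. A minimal Python sketch of that conversion, using made-up numbers rather than the study's estimates:

```python
import math


def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient (log-odds scale) and its
    standard error into an odds ratio with a 95% Wald confidence interval."""
    return (
        math.exp(beta),          # point estimate (OR)
        math.exp(beta - z * se), # lower CI bound
        math.exp(beta + z * se), # upper CI bound
    )


# Hypothetical coefficient and standard error (illustrative only,
# not taken from the study's regression output):
or_, lo, hi = odds_ratio_ci(beta=-0.34, se=0.08)
print(f"OR = {or_:.2f}, 95% CI: {lo:.2f}-{hi:.2f}")  # OR = 0.71, 95% CI: 0.61-0.83
```

An OR below 1 (as in this hypothetical example) indicates lower odds of showing symptoms per unit increase in the risk factor, which is how protective factors such as social support appear in the tables.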
Table 2 reports the measures of social risk factors (decision-making power, family conflict, and perceived social support) for the full sample (Column 1), pregnant women (Column 2) and new mothers (Column 3). For decision-making power and family conflict, both the raw scores (measuring the number of topics for which women reported decision-making power or family conflict) and the index scores (generated using exploratory factor analysis) are presented. The average raw decision-making power score of respondents was 7.20 out of 8, with a standard deviation (SD) of 1.32. The average raw family conflict score was 0.54 out of 8 with a SD of 1.26, and the average social support (MSPSS) score was 5.34 out of 7 with a SD of 0.90. When comparing the social risk factors of pregnant women and new mothers (Column 4), the results show a small but statistically significant difference in raw decision-making power scores, with pregnant women reporting greater decision-making power. This difference, however, was not significant when comparing the index scores for decision-making power, and no other differences in social risk factors were found between the two groups.

Table 3 presents the prevalence of depression, anxiety, and stress symptoms among all respondents (Column 1), pregnant women (Column 2) and new mothers (Column 3). Column 4 measures the difference between the subgroups of pregnant women and new mothers. Within the full sample, 13% of women showed symptoms of depression, 18% showed symptoms of anxiety, and 9% showed symptoms of stress. Overall, 23% of the women in the study were experiencing symptoms of at least one of the three mental health problems measured. Pregnant women and new mothers showed similar rates of depressive symptoms (14 and 13%, respectively) and stress symptoms (8 and 10%, respectively). However, the prevalence of anxiety among pregnant women was 7 percentage points higher than among new mothers (23 vs. 16%, p < 0.01).
Pregnant women were also significantly more likely than new mothers to show symptoms of any mental health problem (28 vs. 21%, p < 0.05).
Perinatal Mental Health and Demographic Risk Factors
Supplementary Tables 3-6 present the multivariate logistic regressions of associations between demographic risk factors and mental health problems among the full sample, pregnant women, and new mothers, respectively. The results show that maternal age and having previously out-migrated were significantly associated with perinatal mental health problems. However, no other demographic risk factors were significantly associated with symptoms of depression, anxiety, or stress. When examining these associations by subgroup, the results show that maternal age and having previously out-migrated were associated with mental health problems among new mothers, but not pregnant women.

(Table notes: *p < 0.05; **p < 0.01; ***p < 0.001. Participants were considered to have symptoms of a given mental health issue if they scored above 9 for depression, 7 for anxiety, and 14 for stress. Values are presented as adjusted OR (95% CI). Adjusted regressions include controls for mother's age and education, mother from village, mother migrated before, mother plans to migrate, first pregnancy, previous miscarriage, husband's education, and family asset index; for new mothers, the adjusted regression also controlled for infant gender, age in months, vaginal birth, prematurity, and low birth weight. Data source: Authors' survey.)
Perinatal Mental Health and Social Risk Factors
1.49, p < 0.001). Finally, perceived social support (row 3) was associated with lower odds of having symptoms of depression (OR: 0.56, p < 0.001), anxiety (OR: 0.76, p < 0.01), stress (OR: 0.66, p < 0.001), and any mental health problem (OR: 0.65, p < 0.001). Panels B and C present the social risk factors associated with mental health problems among pregnant women and new mothers, respectively. The results show fewer significant associations between social risk factors and symptoms of mental health problems among pregnant women compared to new mothers. Among pregnant women (Panel B), increased decision-making power (row 4) was associated with significantly lower odds of having symptoms of depression (OR: 0.58, p < 0.01) and any mental health problem (OR: 0.60, p < 0.01); and family conflict (row 5) was associated with higher odds of depressive symptoms (OR: 1.63, p < 0.01) and stress symptoms (OR: 1.63, p < 0.01). Social support (row 6) was significantly linked to lower odds of depressive symptoms (OR: 0.67, p < 0.05) and symptoms of any mental health problem (OR: 0.73, p < 0.05). However, none of the social factors assessed in this study were significantly associated with symptoms of anxiety, and only family conflict was significantly associated with symptoms of stress among pregnant women. In contrast, among new mothers (Panel C), all of the associations were significant except one: decision-making power (row 7) was not significantly associated with symptoms of anxiety.
DISCUSSION
This study examined perinatal depression, anxiety, and stress symptoms in rural China. Drawing on a large-scale survey of 1,027 pregnant women and new mothers up to 6 months postpartum, we described the prevalence of depression, anxiety, and stress symptoms among the sample. We also examined whether these outcomes were associated with various social risk factors, including decision-making power, family conflicts, and social support.
Prevalence of Perinatal Mental Health Problems in Rural China
The results of this study show that 13% of pregnant women and new mothers had symptoms of depression, 18% had symptoms of anxiety, 9% had symptoms of stress, and 23% had symptoms of at least one of these mental health problems. The prevalence of perinatal mental health problems in rural China is relatively high compared to the overall global rates among pregnant women (10%) and new mothers (13%) reported by the World Health Organization (16). The results, however, are similar to the findings of a 2012 systematic review of perinatal mental health problems in LMICs, which found that the prevalence of mental health problems was 15.9% among pregnant women and 19.8% among postpartum mothers (15). These results echo the general finding that perinatal mental health problems are more prevalent in LMICs compared to both high-income countries and the overall global prevalence. Importantly, however, the prevalences found in this study are slightly lower than reported in previous studies in rural China (also using the DASS-21 with the same cut-off scores). Previous studies have reported rates of maternal depression between 23 and 28% and rates of anxiety between 21 and 33% (12,28). While the reason for this variation is unclear, one possible explanation is due to regional/geographical differences in the samples, as past studies were conducted in northwestern rural areas of China, whereas the present study was conducted in rural areas of Sichuan, in southwestern China.
When examining differences between the subgroups, although the prevalence of depression and stress were similar among pregnant women (14 and 8%, respectively) and new mothers (13 and 9%, respectively), the results found a significant disparity in the prevalence of anxiety symptoms among pregnant women (23%) and new mothers (16%). Although this finding stands in contrast to studies in other LMICs such as Pakistan and India, which have found pregnant women to be at lower risk for perinatal mental health problems (14), it is consistent with studies in Nigeria and Thailand which have found pregnant women to generally have higher rates of mental health problems compared to new mothers (40,41). One explanation may be due to the seasonal timing of data collection for this study. A study of perinatal mental health by Martini et al. (42) found anxiety in pregnant women to be significantly linked to fears about their infant's health, including fear of viral infections. Considering that data for this study were collected in the winter, a season which historically has higher rates of infectious diseases, it is possible that such fears among pregnant women may contribute to the higher rates of anxiety symptoms. Nevertheless, taken together with the literature, these findings point to a need for more research to compare the prevalence of mental health problems among pregnant women and new mothers and identify underlying causes.
Perinatal Mental Health and Demographic Risk Factors
Of the demographic risk factors examined in this study, the results found only two significant associations: younger mothers were significantly more likely to have symptoms of all of the mental health problems measured, and mothers who had previously out-migrated were more likely to have symptoms of depression. Subgroup analyses revealed that both variables are only significant among new mothers (and not pregnant women). However, the effect sizes of the associations, while significant, were all relatively small, and no other demographic factors were significantly associated with perinatal mental health problems among the sample.
The absence of associations between other demographic risk factors and perinatal mental health contradicts the findings of previous studies, which have consistently found measures of socioeconomic status, such as education levels and family wealth, to be associated with lower risk for mental health problems (21,43). One explanation for this discrepancy may be that the sample was selected from nationally-designated poverty counties, meaning that despite some variations in family asset index scores, the respondents were all experiencing similar levels of poverty. Were this sample compared to middle-class urban women in China's cities, it is possible that there would be a more significant relationship between family wealth and mental health. The comparison to urban women would be a complex one, as it would need to account for differences in socioeconomic status, as well as differences in the quality of life and struggles experienced by women in rural and urban areas. Another possible explanation is that the levels of income among households in rural China are higher than other LMICs, given its recent rise to upper-middle income status (44). In either case, these findings indicate that among women in rural China, higher socioeconomic status is not a protective factor in perinatal mental health. The exact reason, however, requires additional research.
Additionally, among new mothers, infant and birth characteristics were not associated with symptoms of mental health problems. Of particular interest, infant gender was not associated with any perinatal mental health problems measured. Although previous studies in China have found that mothers with female infants tend to have higher rates of mental health problems (45,46), the results of this study are consistent with more recent studies (28), suggesting that the historical cultural preference for males may be shifting.
Perinatal Mental Health and Social Risk Factors
In contrast to demographic risk factors, social risk factors are strongly and significantly associated with perinatal mental health among both pregnant women and new mothers. For example, women with more decision-making power and social support were 24% and 35% less likely, respectively, to have symptoms of any mental health problem, whereas women with greater family conflicts were 49% more likely to have symptoms of any mental health problem. These findings agree with a mixed methods study that was conducted in rural northwestern China (a different region from that of our study), which found that lack of social support and lack of agency within the household were two common factors among depressed caregivers (28). The results are also similar to one study in urban China, which found that spousal support contributed to reduced depression symptoms among postpartum mothers (27). These findings are also consistent with the small number of studies in other LMIC settings, which have shown the importance of social factors, particularly within the home, for perinatal mental health (14). There are also a number of studies from high-income countries that have found supportive social dynamics, both within the household and in the community more broadly, to be strongly protective against perinatal mental health problems (42). To date, however, there are relatively few studies examining the relation of social factors to perinatal wellbeing among women in the global south, and more research is needed to better understand factors that shape mental health in poor rural communities.
Although this study is unable to examine the potential mechanisms underlying the associations between social factors and mental health problems among to-be and new mothers, past studies suggest that self-efficacy may be a key mechanism in maternal mental health. Self-efficacy has not only been found to predict mental health symptoms; it has also been identified as a mediator between social factors and mental well-being (47). For example, a longitudinal analysis found that self-efficacy was directly associated with social support and indirectly associated with depression through its relation to social support (48). It is also possible that decision-making power relates to self-efficacy; however, to date, no studies have examined self-efficacy and decision-making power in the context of mental health. Moreover, to the best of our knowledge, few studies have examined how self-efficacy shapes the relations between social factors and perinatal mental health in underdeveloped areas. Understanding how self-efficacy relates to both social factors and perinatal mental health among rural women in China should be a focus area for future research.
While the associations between social risk factors and depression were similar among all respondents, the associations with anxiety showed differences between pregnant women and new mothers. Interestingly, none of the three social risk factors (decision-making power, family conflict, and perceived social support) were significantly associated with anxiety symptoms among pregnant women, while both family conflict and perceived social support were significantly associated with anxiety symptoms among new mothers. This is striking, considering pregnant women were found to have much higher rates of anxiety symptoms compared with new mothers, and this finding suggests that other factors may be contributing to increased anxiety among pregnant women. One such factor may be the pressure to have a good pregnancy outcome, as past studies have suggested. Further research is needed to identify other possible factors that may be influencing the high rates of anxiety among pregnant women.
Strengths and Limitations
This study has a number of strengths. To date, the literature on perinatal mental health has focused disproportionately on high income, developed countries, with few studies conducted in developing settings such as rural China. This is one of the first studies to identify risk factors associated with perinatal mental health problems, including mental health problems during pregnancy, among women in rural China. Given that the time around birth is a very unique period in a woman's life, our results positively contribute to the global literature on both mental health and maternal health. Our findings also provide evidence for policymakers and practitioners to design interventions to improve perinatal mental health among women in rural China.
We also acknowledge three limitations. First, due to the cross-sectional nature of the data, we are unable to make temporal or causal inferences regarding the associations between risk factors and perinatal mental health problems. Second, the scales used in this study, although validated in other settings within China, have not been validated for perinatal women in rural areas of China. In addition, due to the self-report nature of the DASS-21 scale, it is possible that the prevalence of depression, anxiety, and stress symptoms is underestimated among women in the sample due to stigma against reporting symptoms of mental health problems. Future research should develop validated scales and collect longitudinal data to better understand the social risk factors that shape perinatal mental health in rural China, and how perinatal mental health impacts other maternal and child health outcomes.
CONCLUSIONS
Among pregnant women and new mothers in rural China, the prevalence of perinatal depression, anxiety, and stress symptoms is relatively high. Pregnant women appear to exhibit higher rates of anxiety symptoms compared to new mothers. Although demographic risk factors are not strongly associated with mental health problems, social risk factors are strongly and significantly associated with depression, anxiety and stress symptoms. These associations are stronger among new mothers compared to pregnant women.
These findings indicate that perinatal mental health is a prevalent problem in rural China that requires greater attention from researchers, policymakers and health practitioners. Policies and programs should be developed to screen pregnant and postpartum women for mental health problems and provide targeted intervention. The strong associations between mental health problems and social risk factors in our sample point to a need for future research examining the extent to which social risk factors may causally contribute to mental health issues, and whether programs targeting in the social environment of women in rural China may reduce the prevalence of depressive, anxiety and stress symptoms during the perinatal period.
DATA AVAILABILITY STATEMENT
De-identified data will be made available by the corresponding author on reasonable request.
ETHICS STATEMENT
This study received ethical approval from the Stanford University Institutional Review Board (Protocol # 44312). Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
their assistance in conducting this study. We would also like to thank the Sichuan Provincial Center for Women and Children Health and the county-level Maternal and Child Hospitals in our study area for their assistance in identifying and recruiting eligible participants.
"year": 2021,
"sha1": "a1b7312b42132034c280bcc5c751549a99e5f99f",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyt.2021.636875/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a1b7312b42132034c280bcc5c751549a99e5f99f",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": []
} |
235814819 | pes2o/s2orc | v3-fos-license | Remote Sensing Image Augmentation Based on Text Description for Waterside Change Detection
Since remote sensing images are difficult to obtain in China and their use requires a complicated administrative procedure, the available data cannot meet the requirement of huge training samples for Waterside Change Detection based on deep learning. Recently, data augmentation has become an effective method to address the absence of training samples. Therefore, an improved Generative Adversarial Network (GAN), i.e., BTD-sGAN (Text-based Deeply-supervised GAN), is proposed to generate training samples for remote sensing images of Anhui Province, China. The principal structure of our model is based on Deeply-supervised GAN (D-sGAN), and D-sGAN is improved from the point of the diversity of the generated samples. First, the network takes Perlin noise, an image segmentation graph, and an encoded text vector as input, in which the size of the image segmentation graph is adjusted to 128 × 128 to facilitate fusion with the text vector. Then, to improve the diversity of the generated images, the text vector is used to modify the semantic loss of the down-sampled text. Finally, to balance the time and quality of image generation, only a two-layer Unet++ structure is used to generate the image. Herein, "Inception Score", "Human Rank", and "Inference Time" are used to evaluate the performance of BTD-sGAN, StackGAN++, and GAN-INT-CLS. At the same time, to verify the diversity of the remote sensing images generated by BTD-sGAN, this paper compares the interpretation results with and without the generated images added to the training set; the results show that the generated images can improve the precision of soil-moving detection by 5%, which proves the effectiveness of the proposed model.
Introduction
With the rapid development of remote sensing technology [1], it is relatively easy to acquire a remote sensing image, but problems remain: the acquired image cannot be used immediately and often requires a cumbersome processing pipeline. In particular, the obtained samples lack corresponding labels, while deep learning research demands high-quality labeled samples. Researchers need to spend a great deal of energy annotating existing images, and this has greatly hindered the widespread use of remote sensing images. How to obtain high-quality labeled samples while saving time and labor costs has become an urgent problem to be solved. As an effective means to solve this problem, data augmentation has become a hot research topic.
As an important branch in remote sensing, remote sensing dynamic soil detection has a high demand for remote sensing images. However, there is a lack of remote sensing data, and the diversity of samples is not enough to improve the generalization ability of the network. Taking the research on change detection (including dynamic soil detection) as an example, some studies ignore the problem of the lack of images [2] and the security restrictions on sharing images [3,4], but others pay attention to this problem and propose various data augmentation strategies to solve it [5,6]. Why is a data augmentation strategy needed? The reasons are as follows. The current training flow commonly used by remote sensing interpretation networks (i.e., the detection network in the change detection task) is shown in Figure 1. As can be seen from Figure 1, staff need to select higher-quality remote sensing images for the interpretation task, but the time cost of this process is huge. This problem is caused by the low quantity and poor quality of remote sensing images. With the development of artificial intelligence, data augmentation is an effective method to solve this problem. It can enlarge the sample in a small amount of data and satisfy the requirement of deep learning. Therefore, data augmentation is used to expand the remote sensing image data, and the accuracy of the remote sensing interpretation network is improved. Data augmentation steps are added to the training flow of remote sensing interpretation, as shown in Figure 2.
Data augmentation generally includes traditional data augmentation algorithms and data augmentation algorithms based on deep learning [7]. The former includes rollover, scaling, cropping, and rotation [8]. These algorithms perform geometric transformations on existing images to increase the number of images. The latter includes the variational autoencoder (VAE) [9] and the generative adversarial network (GAN) [10]; both are based on multilayer neural networks. VAE can map low-dimensional inputs to high-dimensional data, but it needs prior knowledge; it is more convenient to use GAN for data augmentation without knowing the complicated reasoning process in advance. The training process for data augmentation with GAN is shown in Figure 3 (GAN training flow chart), where G represents the generator of GAN and D represents the discriminator of GAN. The function of G is to learn the mapping rules from the random noise to the generated data and then obtain the generated image (the false sample). D is used to determine whether a sample is a real sample or a false sample.
In recent years, good progress has been made in image data augmentation. To facilitate the work, the related research is introduced from these directions: conditional generative adversarial network (cGAN), image generation, and image semantics and text semantic loss.
Conditional Generative Adversarial Network
Compared with the original generative adversarial network, the conditional generative adversarial network adds constraint information at the network's input and has made great progress in image generation. P. et al. regarded the conditional generative adversarial network as a general solution for image generation [11]. The network proposed by P. takes the sketch of the image as the conditional constraint information and generates the image from the sketch [12]. The generation of remote sensing data also belongs to the field of image generation. Herein, the research is based on the generative adversarial network.
Image Generation
At present, image generation based on GAN can be divided into two categories: the first is to generate the image of the specified category; the second is to generate the image matching the text description.
In 2014, based on cGAN, J. et al. used random noise and specific attribute information as input, and randomly used conditional data sampling in the training process to generate good face images [13]. In the framework of the Laplacian pyramid, E. and his colleagues constructed a cascaded generative adversarial network in 2015, which can generate high-quality natural images from coarse to fine [14]. In 2016, C.K. et al. applied GAN to the image super-resolution problem; gradient estimates were backpropagated during training, and good results were achieved in natural image generation on the ImageNet dataset [15]. A.M. et al. proposed a new method of image generation, DGN-AM, which is based on a prior DGN (deep generator network) combined with the AM (activation maximization) method. By maximizing the activation functions of one or more neurons in the classifier, a realistic image is synthesized [16]. In 2017, A. et al. proposed PPGN based on DGN-AM, consisting of a generator G and a conditional network C that tells the generator which classes to generate; it generated high-quality images and performed well in image repair tasks [17]. W.R. et al. proposed ArtGAN to generate natural images such as birds, flowers, faces, and rooms [18].
In 2016, S. et al. encoded the text description into a character vector as part of the input of the generator and discriminator, respectively; based on the conditional generative adversarial network, the assumption that text descriptions can be used to generate images was validated on general datasets such as MS COCO [19][20][21]. S. et al. proposed the GAWWN network, in which a constraint box is added to guide the network to generate an image with a certain attitude at a given position [22]. In 2017, H. and others applied the idea of staged generation to the generative adversarial network and proposed the StackGAN model [23,24]. The first step is to generate a relatively fuzzy image, mainly the background, contour, etc. The second step is to take the image generated in the first step as the input; at the same time, text features are fused to correct the loss of the first stage, resulting in a high-definition image. In 2018, H. et al. improved the StackGAN model by training different groups of generators and discriminators at the same time. Images with different accuracies were generated, the low-accuracy images were refined by the high-accuracy generators, and the different groups of generators and discriminators use the same text features as constraints, resulting in better results than other generation models [25]. T. and others improved the StackGAN model using the attention mechanism and proposed the AttnGAN model, which pays more attention to the related words in the text description in the process of the staged generation and generates more detailed information in different subregions of the image [26,27]. S. and others put forward a model of image generation based on semantic layout. Firstly, the corresponding semantic layout of the text is obtained by a layout generator; then, the corresponding images are generated by an image generator. Finally, the validity of the model is verified on the MS-COCO dataset, and diverse natural images are generated [28,29].
Although the abovementioned GANs have achieved good results in the field of image generation, most of these were generated for natural images. Remote sensing images are different from natural images because of their unique spectral characteristics and huge amount of data, requiring high quality, speed, and diversity. The proposed model (BTD-sGAN) is suitable for remote sensing image generation to solve these problems.
In addition, generating the corresponding image from the text description involves the knowledge of multimodal representation learning. In 2021, F. et al. proposed a network named EAAN that can correlate visual and textual content [30], and also performed research on natural images. This paper attempts to study remote sensing images.
Image Semantics and Text Semantic Loss
In image processing, semantic loss is inevitable in the process of image convolution or downsampling. To avoid semantic loss of the image, T. et al. [31] proposed a new conditional normalization method, called SPADE, which solves the problem of semantic loss in batch normalization, but does not pay attention to the semantic loss of text. Therefore, this paper improves the downsampling process of the generator, adds the text feature to constrain, reduces the semantic loss of the text, and improves the diversity of the generated images.
Herein, the work is based on the structure of GAN because of the excellent effect of GAN on several datasets [32,33]. The task of target detection and image segmentation based on remote sensing image needs not only the generated image, but also the corresponding label of the image. Although GAN has achieved good results in natural image generation, there is little research on remote sensing image generation in GAN. Herein, the following problems will be solved: (1) the number of tagged remote sensing images is little; (2) the diversity of remote sensing image samples is insufficient.
Herein, an improved model named BTD-sGAN (Text-based Deeply-supervised GAN) is proposed. To solve the problem of insufficient labeled samples, we use the image segmentation graph as part of the input of BTD-sGAN, which constrains the image generation process so that the final image does not require secondary annotation. To solve the problem of insufficient sample diversity, the main body of BTD-sGAN is the deeply-supervised generation network, D-sGAN (Deeply-supervised GAN) [34]; the generator structure is still the Unet++ network and the discriminator structure is the FCN network. BTD-sGAN takes the image segmentation graph, Perlin noise, and text vector, which are fused, as input. At the same time, to reduce the semantic loss of the text, the text vector is always used as a supervisor to correct the loss during the downsampling process. The experimental results for BTD-sGAN show that the improved network can not only increase the number of generated samples with labels, but also increase the diversity of generated samples.
Methods
Herein, the practical application is as the data generation module of a remote sensing dynamic soil detection project, mainly using remote sensing data from China for the experiments. Remote sensing dynamic soil detection identifies and labels certain types of buildings that violate regulations through the image segmentation network, but due to the lack of remote sensing data, interpretation accuracy faces a bottleneck. Therefore, this paper studies data augmentation based on the above remote sensing data.
The improved model (BTD-sGAN) is based on D-sGAN, and the training process is similar. It should be noted that the Gaussian noise at the input of the generator is replaced by Perlin noise, and the segmentation image and encoded text vector are fused with it. The discriminator also adds the text vector as a constraint. The improved generative adversarial network learns the mapping of segmentation graph x, noise image z, and text vector v to the real image y, where z follows the Perlin noise distribution. The training flow for the entire network is shown in Figure 4. In Figure 4, an image segmentation graph x is added to the input to solve the problem that the GAN-INT-CLS [19] model cannot capture localization constraints in the image. Herein, the experiment verifies the effectiveness of adding a segmentation graph at the input end.
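As a rough illustration of this input fusion, the sketch below stacks the three inputs into one multi-channel array. The function name, the 16-dimensional text embedding, and the channel-wise concatenation are assumptions for illustration only, and plain Gaussian noise stands in for Perlin noise here.

```python
import numpy as np

def fuse_inputs(seg_map, text_vec, rng=None, size=128):
    """Fuse a segmentation map, a noise field, and an encoded text
    vector into one multi-channel generator input (illustrative sketch,
    not the authors' exact implementation)."""
    rng = rng or np.random.default_rng(0)
    noise = rng.standard_normal((size, size, 1))          # noise channel
    seg = seg_map.reshape(size, size, 1).astype(float)    # segmentation channel
    # Broadcast the text embedding spatially so every pixel sees it.
    text = np.broadcast_to(text_vec, (size, size, text_vec.shape[0]))
    return np.concatenate([seg, noise, text], axis=-1)

seg = np.zeros((128, 128))
v = np.ones(16)                  # a 16-d text embedding (hypothetical size)
x = fuse_inputs(seg, v)
print(x.shape)                   # (128, 128, 18)
```

In an actual model the fused tensor would then feed the first Unet++ encoder block; the point here is only that the segmentation graph, noise, and text constraint travel together from the very first layer.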
Lower Sampling Procedure
Different from the downsampling module in the D-sGAN model, to improve the diversity of the generated samples and reduce the semantic loss of the text, the segmentation graph is not used for supervision; only the real text feature vector is used to supervise the down-sampling process. It is important to note that this down-sampling procedure is applied to both the generator and the discriminator. The down-sampling module of BTD-sGAN is shown in Figure 5.
BTD-sGAN Structure
The Unet++ network uses a "dense link" network structure [35], which can effectively combine the features from the encoder and the decoder to reduce the semantic loss of the image, so the BTD-sGAN model is built on Unet++. In D-sGAN, the idea of using multiple discriminators to supervise the generator was put forward, which can improve the quality of image generation and reduce the image generation time at the same time. Although the main structure of the generator is based on Unet++, discriminators (the first and second discriminators of BTD-sGAN in Figure 6) were only used to monitor the outputs of the second and fourth layers. The down-sampling module mentioned in Section 2.1.1 was used for both the generator and the discriminator. A schematic of the entire network structure is shown in Figure 6.
Loss Function
The BTD-sGAN loss function consists of two parts, the generator part and the discriminator part. Let v denote the matching text feature vector, y the true image, and v* a mismatched text feature vector. The discriminator only outputs true when the real image and text match; it outputs false when the real image and text do not match, and false when the generated image is paired with matching text. This corresponds to the matching-aware objective L_D = E[log D(y, v)] + (1/2)(E[log(1 − D(y, v*))] + E[log(1 − D(G(x, z, v), v))]).
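The matching-aware behavior described above can be checked numerically with a toy sketch. The function name and the averaging of the two error terms follow the GAN-INT-CLS-style formulation, which is an assumption here; the scores are hypothetical discriminator probabilities.

```python
import math

def matching_aware_d_loss(d_real_match, d_real_mismatch, d_fake_match):
    """Discriminator objective sketch for the matching-aware setup:
    reward a high score on (real image, matching text) and low scores
    on the two kinds of 'fake' pairs (real image + wrong text, and
    generated image + matching text). Scores are probabilities in (0, 1)."""
    return (math.log(d_real_match)
            + 0.5 * (math.log(1 - d_real_mismatch)
                     + math.log(1 - d_fake_match)))

# A discriminator that scores the matching pair high and both
# wrong pairings low gets a loss near 0 (the maximum).
print(matching_aware_d_loss(0.99, 0.01, 0.01))
```

Maximizing this quantity pushes the discriminator to treat text mismatch the same way it treats a generated image, which is exactly the behavior the text describes.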
In particular, the discriminator is used to monitor the two-layer and four-layer outputs of Unet++, so the overall adversarial loss can be expressed as L = λ1 L1 + λ2 L2, where Lk denotes the loss of the k-th supervised output. The generator tries to minimize the loss, and the discriminator tries to maximize it. Herein, we used λk (k = 1, 2) to represent the subnet's weight, and the parameters satisfy the relations λ1 + λ2 = 1 and λ1 < λ2.
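The deep-supervision weighting can be sketched as follows. The per-layer loss values and the particular weights (0.4 and 0.6) are illustrative numbers, not values from the paper; the sketch only shows that λ1 + λ2 = 1 with λ1 < λ2 makes the deeper (layer-4) output dominate.

```python
def combined_loss(loss_l2, loss_l4, lam1=0.4, lam2=0.6):
    """Combine the layer-2 and layer-4 discriminator losses with
    weights satisfying lam1 + lam2 = 1 and lam1 < lam2, so the
    deeper output carries more weight."""
    assert abs(lam1 + lam2 - 1.0) < 1e-9 and lam1 < lam2
    return lam1 * loss_l2 + lam2 * loss_l4

print(combined_loss(1.0, 0.5))  # 0.7
```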
Datasets
Existing generation models based on text description (such as GAN-INT-CLS, Stack-GAN++) are mostly studied on the basis of natural images. For fairness, the natural image dataset Oxford-102 [36,37] was used to compare the effects of BTD-sGAN model and other models. At the same time, in order to observe the performance of BTD-sGAN model in the actual remote sensing image generation task, remote sensing datasets from the Jiangxi and Anhui provinces in China were used for training and testing.
Oxford-102 Dataset
Oxford-102 belongs to the natural image dataset, which contains images of flowers, including 102 different flower species and a total of 8189 images. Some images of the Oxford-102 dataset are shown in Figure 7.
Remote Sensing Datasets of Jiangxi and Anhui Provinces, China
The remote sensing datasets of the Jiangxi and Anhui provinces in China were shot by China Gaofen Satellite with a ground resolution of 2 m and the original remote sensing image resolution of 13,989 × 9359. In this paper, the image was cropped to 128 × 128 size. A partial image of the remote sensing dataset is shown in Figure 8.
Evaluation Metrics
The proposed model (BTD-sGAN) focuses on the diversity of generated images. To evaluate the quality and diversity of the generated images, the recently proposed evaluation metric Inception Score (abbreviated as IS) [38] was selected. At the same time, to evaluate whether the generated sample matches the given text description, an artificial evaluation method called "Human Rank" was adopted. For the generation time of BTD-sGAN, the evaluation metric called "Inference Time" was proposed. To evaluate the effect of the proposed model on the actual remote sensing dataset, the generated images were added to the training set of the remote sensing interpretation model, and the effect of the proposed model is reflected through the interpretation accuracy, which is called "Interpretation Score". These evaluation metrics are detailed as follows.
Inception Score
The IS (Inception Score) evaluation index can comprehensively consider the quality and diversity of the generated images. The evaluation equation can be expressed as IS = exp(E_x[KL(p(y|x) || p(y))]), where x represents the generated image and y represents the prediction label of x from the Inception model [39,40]. For a good generation model, it is expected that the model can generate images of high quality and diversity. Therefore, the KL divergence between the marginal distribution p(y) and the conditional distribution p(y|x) should be as large as possible.
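The IS computation can be sketched directly from the formula. In practice the class probabilities come from a pretrained Inception classifier; here a tiny hand-made probability matrix stands in for them, and the function name is illustrative.

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """Compute IS = exp(mean_x KL(p(y|x) || p(y))) from an
    (n_images, n_classes) matrix of per-image class probabilities."""
    probs = np.asarray(probs, dtype=float)
    p_y = probs.mean(axis=0, keepdims=True)              # marginal p(y)
    kl = (probs * (np.log(probs + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))

# Confident, diverse predictions -> high score; uniform predictions -> 1.0.
sharp = np.array([[1.0, 0.0], [0.0, 1.0]])
flat = np.array([[0.5, 0.5], [0.5, 0.5]])
print(inception_score(sharp))   # ~2.0
print(inception_score(flat))    # ~1.0
```

The two extremes show why IS rewards both quality (each p(y|x) is peaked) and diversity (the marginal p(y) is spread out).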
Interpretation Score
This index is proposed according to the actual remote sensing interpretation task. It is assumed that there are n remote sensing images in the dataset used by the interpretation model, including kn remote sensing images generated by the generation model and (1 − k)n remote sensing images from the actual remote sensing dataset (such as remote sensing images of the Jiangxi and Anhui provinces in China), where k is the mixing coefficient with value range [0, 1]. Two thirds of this dataset is used as the training set and 1/3 as the test set. Then, remote sensing interpretation models (such as Unet and FCN) are trained and tested on the n remote sensing images, and the interpretation accuracy of the interpretation model is called "Interpretation Score". Herein, the interpretation types of remote sensing images only include map spots (illegal ground object targets) and nonmap spots (ground object targets other than map spots). If the "overlap ratio" of the interpretation results is used to represent interpretation accuracy, the expression of "Interpretation Score" is

Interpretation Score = (P11 / (P11 + P12 + P21) + P22 / (P22 + P21 + P12)) / 2,

where P11 represents the number of spot pixels interpreted as spot pixels, P12 represents the number of spot pixels interpreted as nonspot pixels, P21 represents the number of nonspot pixels interpreted as spot pixels, and P22 represents the number of nonspot pixels interpreted as nonspot pixels.
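The overlap-ratio averaging described above amounts to a mean per-class IoU over the spot and nonspot classes, and can be sketched as a few lines. The pixel counts below are made-up example numbers.

```python
def interpretation_score(p11, p12, p21, p22):
    """Mean per-class overlap ratio (IoU) over the two classes,
    spot pixels and nonspot pixels, from confusion-matrix counts."""
    spot = p11 / (p11 + p12 + p21)          # overlap ratio for spots
    nonspot = p22 / (p22 + p21 + p12)       # overlap ratio for nonspots
    return (spot + nonspot) / 2

# Example: 80 correct spot pixels, 10 missed, 10 false alarms,
# 900 correct nonspot pixels.
print(interpretation_score(80, 10, 10, 900))  # ~0.889
```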
Human Rank
IS (Inception Score) cannot reflect the matching degree between the generated image and the text description, so an artificial evaluation method was used. The specific evaluation method is as follows: 30 text descriptions are randomly selected from the dataset, 3 images are generated by each model, 10 evaluators are selected to rank the results of each model, and the average value of the ranking is taken as the artificial evaluation score of the model. The smaller the ranking, the better the model effect. This artificial evaluation method is called "Human Rank". Suppose that the score given by the i-th person for the ranking of a model is Ri; then, the score of the model can be expressed as HR = (1/10) Σ(i=1..10) Ri, where i represents the serial number of the person ranking the model.
Inference Time
"Inference Time" refers to the time of image generation, i.e., the time taken by the generation model to generate multiple remote sensing images. It usually means the time taken to generate mKB remote sensing images, where m represents the amount of memory occupied by the generated image. The unit of "Inference Time" is second.
Results
To evaluate the effectiveness of the proposed algorithm in different scenarios, two evaluation experiments are carried out. In the first part, the natural image dataset Oxford-102 (universal dataset) is used as the training set, and the effects of BTD-sGAN, GAN-INT-CLS [19], and StackGAN++ [25] are compared. In the second part, to verify the diversity of the generated remote sensing images, remote sensing images of the Jiangxi and Anhui provinces in China are used as training sets to test the performance of BTD-sGAN on the actual remote sensing datasets. At the same time, BTD-sGAN is compared with GAN-INT-CLS and StackGAN++ in the second experiment.
Experiment 1
In this experiment, the Oxford-102 flower dataset is used, and the images in the whole dataset are described manually to form "image-text description" data pairs. Two thirds of the data pairs in the dataset are taken as the training set and one third as the test set. BTD-sGAN, GAN-INT-CLS, and StackGAN++ are trained and tested, and the generated results of the models are obtained. During training, the three models use the same data pairs. The experimental parameters are 50 epochs, 150 iterations per epoch, and 64 training samples per iteration. During testing, the three models obtain generated results and evaluation scores from the same text descriptions. The experimental process of model comparison is shown in Figure 9. The 3KB text descriptions in the test set are randomly selected for testing. The generated results of the different models are shown in Figure 10. Numerically, "Inception Score", "Human Rank", and "Inference Time" are used to compare the different models; the performance comparison is shown in Table 1. To show the differences in generation performance more intuitively, the scores are also plotted in Figure 11. The results in Table 1 show that BTD-sGAN outperforms GAN-INT-CLS and StackGAN++ in "Human Rank", improving by 0.80 (from 1.98 to 1.18) and 0.57 (from 1.75 to 1.18), respectively. Compared with GAN-INT-CLS and StackGAN++, BTD-sGAN gains 1.10 (from 2.56 to 3.66) and 0.14 (from 3.52 to 3.66) in "Inception Score", respectively, and reduces "Inference Time" by 14 s (from 54 s to 40 s) and 22 s (from 62 s to 40 s), respectively. Figure 11 shows more intuitively that BTD-sGAN has a shorter generation time, a smaller ranking score, and a larger IS score than the other two models.
Experiment 2
The ultimate purpose of constructing BTD-sGAN is to augment remote sensing image data and support research based on remote sensing images, such as remote sensing interpretation tasks. Therefore, remote sensing images from the Jiangxi and Anhui provinces of China are used as datasets to train and test BTD-sGAN. In particular, the images are multispectral remote sensing images; the experiment only uses the data of the RGB channels, and the final image results from the fusion of the RGB channels.
Similar to Experiment 1, the remote sensing dataset is described manually to form "remote sensing image-text description" data pairs, of which 2/3 are used as the training set and 1/3 as the test set. Text descriptions are randomly selected for testing, and the model generates 3 remote sensing images for each text description. The generation results of BTD-sGAN on the actual remote sensing dataset are shown in Figure 12.
(Figure 12 panel captions: "A road next to several houses"; "Two roads beside several buildings"; "A road goes through the forest".) As can be seen from Figure 12, BTD-sGAN can generate various remote sensing images according to the text description. Taking the description "a road next to several houses" as an example, BTD-sGAN generates three differently shaped roads, all of which meet the requirements of the description. These results show that BTD-sGAN can generate diverse images and meet the need for image diversity in the remote sensing image generation task.
On the basis of China remote sensing datasets, the generation results of BTD-sGAN, GAN-INT-CLS, and StackGAN++ are also compared. The results are shown in Figure 13.
(Figure 13; text description: "some roads next to houses".) In Figure 13, compared with GAN-INT-CLS and StackGAN++, the remote sensing image generated by BTD-sGAN is clearer and better matches the text description.
Furthermore, the performance of BTD-sGAN is evaluated numerically using a new metric, the "Interpretation Score". The idea of this method is as follows: the generated data are fed into the remote sensing interpretation network to see whether the generated images help improve the accuracy (i.e., the "Interpretation Score") of the interpretation network. The higher the "Interpretation Score", the better the generation effect of the model. A flow chart of the experiment is shown in Figure 14. After mixing different proportions of generated images into the dataset, the change of the "Interpretation Score" with the mixing proportion is shown in Figure 15; there, the mixture ratio denotes the ratio of generated images to original images in the dataset.
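The dataset-mixing step described above can be sketched as follows (a minimal sketch under our own assumptions: the helper names, the fixed random seed, and the list-based representation of images are illustrative, not from the paper):

```python
import random

def mix_dataset(real_images, generated_images, k, seed=0):
    """Build a dataset of len(real_images) items in which a fraction k
    of the items comes from generated images and 1 - k from real ones,
    following the mixing coefficient k in [0, 1]."""
    n = len(real_images)
    n_gen = int(k * n)
    rng = random.Random(seed)
    mixed = rng.sample(generated_images, n_gen) + rng.sample(real_images, n - n_gen)
    rng.shuffle(mixed)
    return mixed

def split_train_test(data):
    """2/3 of the data for training and 1/3 for testing, as in the experiments."""
    cut = 2 * len(data) // 3
    return data[:cut], data[cut:]
```

The interpretation model would then be trained on the first split and scored on the second for each mixture ratio.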
Discussion
In the Results section, two experiments were used to verify the effectiveness of BTD-sGAN. Experiment 1 is based on the universal dataset (the Oxford-102 flower dataset), which ensures the fairness of the comparison among all models. For this dataset, Figure 10 shows the generated results of the different models, from which two conclusions can be drawn: (1) BTD-sGAN can generate images according to the text description, which demonstrates the soundness of the model; (2) visually, compared with GAN-INT-CLS and StackGAN++, the generation results of BTD-sGAN are clearer and of better quality. The performance of BTD-sGAN was also evaluated quantitatively: Table 1 and Figure 11 show that BTD-sGAN is superior to the other models in the three indexes "Inception Score", "Human Rank", and "Inference Time", which indicates that BTD-sGAN can generate clearer and more diverse images from text descriptions and shortens the generation time to meet the needs of actual generation tasks. Experiment 2 is based on the remote sensing datasets of the Jiangxi and Anhui provinces, China, and tests the performance of BTD-sGAN on an actual remote sensing dataset. First, whether BTD-sGAN can generate a variety of remote sensing images from text descriptions is tested; Figure 12 shows that it can, which proves that BTD-sGAN can be used in actual remote sensing generation tasks. Then the different generation models are compared: in Figure 13, BTD-sGAN generates clearer images than the others. The preceding part evaluates BTD-sGAN visually; numerically, the metric "Interpretation Score" is used. Figure 15 shows that the remote sensing interpretation scores after mixing improve over those of the unmixed samples, and when the mixing ratio is 1:1, the precision improves by 5%.
This is because the diversity of the generated samples is higher than that of the original images, which improves the generalization ability of the network. However, when the mixture ratio reaches 2:1, the interpretation accuracy decreases: because the generated samples make up so large a proportion of the data, the network mainly learns their features and learns the features of the original remote sensing images insufficiently.
Conclusions
Aiming at the lack of samples in deep learning-based remote sensing image detection projects, a new text-based generative adversarial network called BTD-sGAN is proposed for the data augmentation of remote sensing images. Two experiments were used to verify the effect of BTD-sGAN: the first tests its performance on a universal dataset, and the second on an actual remote sensing dataset. In Experiment 1, BTD-sGAN generated higher-quality images than the other models. Compared with GAN-INT-CLS and StackGAN++, BTD-sGAN increased by 1.10 and 0.14 in "Inception Score", improved by 0.8 and 0.57 in "Human Rank", and decreased by 14 s and 22 s in "Inference Time", respectively. In Experiment 2, BTD-sGAN produced clearer and more varied remote sensing images than GAN-INT-CLS and StackGAN++. The results show that the remote sensing images generated by BTD-sGAN can help improve the accuracy of a remote sensing interpretation network by 5%. In general, BTD-sGAN can be applied to actual remote sensing generation tasks and can also provide data support for remote sensing interpretation (e.g., soil-moving detection) and other tasks.
However, BTD-sGAN still has some limitations. Text vectors are used to correct the text semantic loss during downsampling, which leads to a certain loss of image semantics; in other words, some quality of the generated image is sacrificed in exchange for diversity. The results presented herein were limited to the RGB bands; the effectiveness of the method for other spectral bands, such as the Near-Infrared and Red Edge bands that are used for various purposes, requires further investigation and is left for future work. In addition, there are many kinds of deep learning-based research in the field of remote sensing, with differing demands. A future direction is to improve the model to meet the needs of remote sensing generation. We will also try to apply the model to other fields (such as the Internet of Vehicles [41]) for data augmentation, so as to further test its practical applicability. Data Availability Statement: Restrictions apply to the availability of these data. Data were obtained from the local water utilities and are available from the authors with the permission of the City.
The inverse scattering theory for many-body systems in quantum mechanics is an important and difficult issue not only in physics (atomic, molecular and nuclear physics) but also in mathematics. The major purpose of this paper is to establish a reconstruction procedure for two-body interactions from scattering solutions of a Hartree-Fock equation. More precisely, this paper gives a uniqueness theorem and proposes a new procedure for reconstructing the short-range two-body interactions from a high-velocity limit of the scattering operator for the Hartree-Fock equation. Moreover, it will be found that the high-velocity limit of the scattering operator equals a small-amplitude limit of it. The main ingredients of the mathematical analysis in this paper are the theory of integral equations of the first kind and Strichartz-type estimates on solutions to the free Schrödinger equation.
Background
Inverse scattering problems in quantum many-body systems are important and difficult problems not only in quantum physics but also in mathematics. Several results on inverse scattering problems for N-body Schrödinger equations have been obtained. A reconstruction problem of identifying the two-body interactions from the high-energy asymptotics was studied by Wang [17], Enss and Weder [2], Novikov [10] and Vasy [15]. Uhlmann and Vasy [14] studied a low-energy inverse scattering problem.
As is well known, the solution of the N-body Schrödinger equation on R^n is a complicated high-dimensional function on R^{nN}, which usually makes exact or numerical calculation impractical. Therefore, approximation methods for understanding the many-body problem in quantum mechanics have often been proposed. A result on inverse scattering problems in nuclear physics using the optical model, which is one such approximation method for many-body problems, was reported by Isozaki, Nakazawa and Uhlmann [5].
The time-dependent Hartree-Fock approximation, one of the simplest approximate theories for solving the many-body Hamiltonian, has received much attention due to its computational efficiency and its wide field of applications (see, e.g., Goeke and Reinhard [3], and Kramer and Saraceno [6]). Time-dependent Hartree equations have also played an important role in the development of mathematical analysis due to their non-linear structure, which causes interesting behavior of the solutions (see, e.g., Cazenave [1]).
In this paper, we are interested in an inverse scattering problem for a time-dependent Hartree equation and a Hartree-Fock equation. Consider the N-body system of identical particles with the interaction potential V_int consisting of a sum of two-body forces: V_int = Σ_{i<j} V(x_i − x_j), where x_j denotes the position of the j-th particle. The indistinguishability of identical particles requires the interaction potential to be symmetric: V(−x) = V(x). The Hartree equation (H equation) reads

i∂_t u_j = H_0 u_j + V_H(x, u)u_j, j = 1, 2, · · · , N, (1.1)

and the Hartree-Fock equation (HF equation) reads

i∂_t u_j = H_0 u_j + V_H(x, u)u_j + ∫ V_F(x, y)u_j(y)dy, j = 1, 2, · · · , N, (1.2)

where H_0 = −(1/2)∆ and u_j = u_j(t, x) is an unknown function of (t, x) ∈ R × R^n.
The terms V_H(x, u)u_j(x) and ∫ V_F(x, y)u_j(y)dy are called the Hartree term and the Fock term, respectively. The problem considered in this paper is to reconstruct the interaction potential V(x) from the corresponding scattering operator defined below.
The H equation and the HF equation are non-linear Schrödinger equations with cubic convolution non-linearity. Thus, our inverse problems can be labeled as inverse non-linear scattering problems of identifying the non-linearity from the scattering operator. As is well known, inverse scattering problems are non-linear problems even if the governing differential equations are linear. From an analytical point of view, inverse problems for non-linear differential equations are among the most difficult inverse problems.
Initial attempts at inverse non-linear scattering problems focused on identifying the coefficients of power-type non-linearities from small scattering data. This approach, called the small-amplitude method, was developed by Strauss [13], Weder [24,26,25,28,27,29,30] and Angeles, Romero and Weder [11], and has proved powerful for reconstructing coefficient functions of non-linear Schrödinger equations. However, the approach is valid only for small data; reconstruction of coefficient functions from large scattering data requires an alternative approach. Recently, in [23], the author established the unique reconstruction of power-type non-linearities from large scattering data by using the high-velocity-limit method developed by Enss and Weder [2].
On the inverse scattering problem for Hartree equations, most references we know of are concerned with uniqueness results [12] and reconstructions for interactions of special form [18,19,20,22], all focused on identifying the interactions from small scattering data.
With regard to the inverse scattering problem for Hartree-Fock equations, little work has been done, even on the uniqueness problem. The only reference we know of is [21], where a reconstruction formula is given for special interactions of the form V_j(x) = λ_j |x|^{−σ_j}, 1 ≤ j ≤ 3, in the case of 3-body systems.
In this paper, we deal with the inverse scattering problem for both the H equation (1.1) and the HF equation (1.2). We present an explicit reconstruction formula recovering the two-body interaction V(x) from the high-energy asymptotics of the scattering solutions for the H equation (1.1) and the HF equation (1.2), respectively. A uniqueness theorem for the inverse scattering problem for the HF equation is also proved. As mentioned above, there are currently two fundamental methods for analyzing non-linear inverse scattering problems: the small-amplitude method and the high-velocity method. This paper uncovers the relation between the two methods.
Methods
Our method consists of the following steps: • asymptotic expansion of the scattering operator acting on the function Φ_v(x) = e^{iv·x}ϕ(x) as |v| → ∞;
• invertibility of the resulting transformation via Picard's theorem for an equation of the first kind with a compact operator.
Previous works [2,23] show that the high-velocity analysis of the scattering operator yields the Radon transform of the unknown coefficient functions. By the inversion formula for the Radon transform, the high-velocity analysis therefore provides a reconstruction formula for the unknown coefficient functions.
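For context, the Radon transform referred to here and its classical inversion formula can be written as follows (these are the standard formulas, stated for reference and not taken from this paper; c_n denotes a dimension-dependent normalizing constant):

```latex
% Radon transform of V over hyperplanes x . omega = s:
(RV)(s,\omega) = \int_{x\cdot\omega = s} V(x)\, dm(x),
  \qquad s \in \mathbb{R},\ \omega \in S^{n-1},
% classical inversion, up to the dimension-dependent constant c_n:
V(x) = c_n \, (-\Delta_x)^{(n-1)/2}
  \int_{S^{n-1}} (RV)(x\cdot\omega, \omega)\, d\omega .
```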
With regard to Hartree equations, however, the high-velocity analysis yields a transformation different from the Radon transform; the transformation has a complicated integral kernel arising from the non-linearity, and its invertibility was unclear.
In order to overcome this difficulty, we employ the high-velocity analysis together with a scale transform, which leads to an integral equation of the first kind for the unknown interaction V. It will be shown that the integral kernel is equicontinuous and equibounded in suitable function spaces; Ascoli's lemma therefore gives the compactness of the integral operator. Then, Picard's theorem for an equation of the first kind with a compact operator yields an explicit solution of the integral equation. We also remark that Picard's theorem does not imply uniqueness of the solution. Constructing proper initial data for the free Schrödinger equation, such that the non-linear interactions of the free solution can be localized at any fixed point in R^n, leads us to a uniqueness theorem. This is done in Section 4.
It should be mentioned that Picard's theorem is applicable on a Hilbert space. This paper finds the proper Hilbert space in which to apply Picard's theorem, and consequently the proper function space in which to reconstruct the interaction V.
The fundamental ingredient in our proofs is a time-space L^4 estimate on solutions to the free Schrödinger equation.
Results
We summarize our main results. Let W k,l (R n ) be the usual Sobolev space in L l (R n ). We abbreviate W k,2 (R n ) as H k (R n ).
We first state results for the restricted Hartree equation (RH equation), which is the case N = 2 with u_1 = u_2 in equation (1.1). The proofs of the theorems for the H equation and the HF equation reduce to the proofs of the theorems on the RH equation.
Restricted Hartree equation
Consider the RH equation:

i∂_t u = H_0 u + V_H(x, u)u, (1.3)

where V_H(x, u)u = (V * |u|^2)u. In order to formulate our inverse problem, let us first state a result on the large data scattering problem. We denote the solution u(t) := u(t, x) of equation (1.3) with initial data f by U(t)f, and the unitary group of the self-adjoint operator H_0 = −(1/2)∆ with domain H^1(R^n) by U_0(t).
Theorem 1.1. For any f_− ∈ H^1(R^n), there exists a unique pair of functions f_+ ∈ H^1(R^n) and ϕ ∈ H^1(R^n) such that

‖U(t)ϕ − U_0(t)f_−‖_{H^1} → 0 as t → −∞ and ‖U(t)ϕ − U_0(t)f_+‖_{H^1} → 0 as t → +∞.

In addition, the scattering operator S : f_− → f_+ is well-defined on H^1(R^n). We term the solutions constructed in Theorem 1.1 scattering solutions.
We now formulate our inverse scattering problem. Inverse scattering problem: given the scattering operator S with domain H^1(R^n), determine the interaction potential V. This paper presents a reconstruction procedure for the interaction potential V(x) from the scattering operator defined in Theorem 1.1.
We denote the multiplication operator by a fixed function V(x) as V, the Schwartz class as S, the weighted L^2 space as L^{2,s}, and the set of compactly supported smooth functions as C_0^∞. The Fourier transform of f is denoted as f̂ or Ff. Let ⟨·, ·⟩_{L^2} be the inner product in L^2(R^n) and put

Theorem 1.2. Let 3 ≤ n ≤ 6 and 2δ > n. Assume that V ∈ V ∩ L^{2,1+δ}(R^n) is a radial, non-negative and non-increasing function. In addition, suppose that V is a compact operator from L^2(R^n) to L^{2,1+δ}(R^n). Then for any ϕ ∈ S_0, we have lim

As is proved in [12], the identity holds for any ϕ ∈ H^1(R^n). Hence we have

Corollary 1.1. Let 3 ≤ n ≤ 6 and 2δ > n. Assume that V ∈ V ∩ L^{2,1+δ}(R^n) is a radial, non-negative and non-increasing function. In addition, suppose that V is a compact operator from L^2(R^n) to L^{2,1+δ}(R^n). Then for any ϕ ∈ S_0 and ε > 0, we have

This corollary shows that in the case of the RH equation, the high-velocity method for the inverse scattering problem is equivalent to the small-amplitude method.
Let ϕ_λ(x) = ϕ((λ + 1)x) and put

Consider the integral equation of the first kind: (1.4)

Then for any ϕ ∈ H^1(R^n), the integral operator T_G :

Picard's theorem for an equation of the first kind with a compact operator gives an explicit solution to the integral equation (1.4) (see, e.g., Kress [7, Theorem 15.18]). To state our theorem on the reconstruction problem, we recall the definition of the singular system of a compact operator.

Definition 1.1. Let X and Y be Hilbert spaces, A : X → Y a compact linear operator, and A* : Y → X its adjoint. The singular values of A are the non-negative square roots of the eigenvalues of the non-negative self-adjoint compact operator A*A : X → X. The singular system of A is the system {μ_n, φ_n, g_n}, n ∈ N, where φ_n ∈ X and g_n ∈ Y are orthonormal sequences such that Aφ_n = μ_n g_n and A*g_n = μ_n φ_n for all n ∈ N.
We denote the null-space of the operator T by N (T ).
is a radial, non-negative and non-increasing function. In addition, suppose that V is a compact operator from L^2(R^n) to L^{2,1+δ}(R^n). Then for any ϕ ∈ S_0, the function is an L^2-function on a compact set Γ ⊂ R. Moreover, letting {μ_n, φ_n, g_n}, n ∈ N, be a singular system of the compact operator T_G, the Fourier transform of the interaction potential is reconstructed by the formula:

Remark 3. The uniqueness theorem for the inverse scattering problem of identifying V(x) holds for a bounded continuous function V(x) such that for some C > 0 and V is continuous on R^n (see [12]).
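In finite dimensions, the Picard-type reconstruction above corresponds to solving a discretized first-kind equation Tf = g through the singular system of T, i.e. f = Σ_n μ_n^{-1} ⟨g, g_n⟩ φ_n. A minimal numpy sketch under our own assumptions (the matrix discretization and the truncation tolerance are illustrative, not part of the paper's analysis):

```python
import numpy as np

def picard_solve(T, g, tol=1e-10):
    """Solve T f = g via the singular system {mu_n, phi_n, g_n} of T:
    f = sum_n (1/mu_n) <g, g_n> phi_n, discarding singular values below
    tol (a discrete analogue of the Picard criterion for first-kind
    equations with a compact operator)."""
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    coeffs = U.T @ g                 # inner products <g, g_n>
    keep = s > tol                   # truncate tiny singular values
    return Vt[keep].T @ (coeffs[keep] / s[keep])
```

For an ill-conditioned T coming from a first-kind integral equation, the truncation level controls the usual trade-off between stability and resolution.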
Hartree equation
Consider the Hartree equation (1.1). The following theorem on the scattering problem is obtained easily from the proof of Theorem 1.1, because the Hartree term V_H(x)u_j has the same structure as in the RH equation. We denote a vector-valued

Theorem 1.5. Let n ≥ 3. Assume that V ∈ V is a radial, non-negative and non-increasing function. Then for any In addition, the scattering operator

Remark 4. The scattering operator is represented as

We now consider an inverse scattering problem of identifying the interaction potential V(x) from the scattering operator S with domain [H^1(R^n)]^N.

Theorem 1.6. Let 3 ≤ n ≤ 6 and 2δ > n. Assume that V ∈ V ∩ L^{2,1+δ}(R^n) is a radial, non-negative and non-increasing function. In addition, suppose that V is a compact operator from L^2(R^n) to L^{2,1+δ}(R^n). Then for any ϕ^{(j)} ∈ S_0, j = 1, 2, · · · , N, we have

As discussed in sub-subsection 1.3.1, the high-velocity limit of the scattering operator is equal to the small-amplitude limit of it.
is a radial, non-negative and non-increasing function. In addition, suppose that V is a compact operator from L^2(R^n) to L^{2,1+δ}(R^n). Then for any ϕ^{(j)} ∈ S_0, j = 1, 2, · · · , N, and ε > 0, we have

Let ϕ_λ(x) = ϕ((λ + 1)x) and put

Consider the integral equation of the first kind:

The same argument as in sub-subsection 1.3.1 leads us to a reconstruction formula.
Theorem 1.8. Let 3 ≤ n ≤ 6 and 2δ > n. Assume that V ∈ V ∩ L^{2,1+δ}(R^n) is a radial, non-negative and non-increasing function. In addition, suppose that V is a compact operator from L^2(R^n) to L^{2,1+δ}(R^n). Then for any ϕ^{(j)} ∈ S_0, j = 1, 2, · · · , N, the function is an L^2-function on a compact set Γ ⊂ R. Moreover, letting {μ_n, φ_n, g_n}, n ∈ N, be a singular system of the compact operator T_H, the Fourier transform of the interaction potential is reconstructed by the formula:
Hartree-Fock equation
Consider the HF equation (1.2). In contrast to the H equation, the large data scattering for the HF equation in the space H^1(R^n) does not follow in the same way as in the proof of Theorem 1.1. It remains a poorly understood problem, although basic results, namely the global existence and the L^2-conservation law of solutions, were obtained by Isozaki [4]. We state here results on the small data scattering in the space H^1(R^n) and the large data scattering in a weighted space, because the assumptions on V differ.
The following theorem on the small data scattering follows from the result in Mochizuki [8].
In addition, the scattering operator S :

The large data scattering is stated as follows: for any f_− ∈ ℓ,m, there exists a unique pair of functions f_+ ∈ ℓ,m and ϕ ∈ ℓ,m such that as t → ±∞.
In addition, the scattering operator S :

Remark 5. Due to the fact that in each case the scattering operator is represented as t) dy, j = 1, 2, · · · , N, our reconstruction formula given below is valid for both scattering results, although the assumptions on V are different.
We now consider an inverse scattering problem of identifying the interaction potential V(x) from the scattering operator S with domain [H^1_ε(R^n)]^N or ℓ,m.

Theorem 1.11. Let 2 ≤ n ≤ 6. Assume that V satisfies the assumption in Theorem 1.9 or Theorem 1.10. In addition, suppose that V is a compact operator from L^2(R^n) to L^{2,1+δ}(R^n) with 2δ > n. Then for any ϕ^{(j)} ∈ S_0, j = 1, 2, · · · , N, we have

Similarly to the RH equation and the H equation, the high-velocity limit of the scattering operator is equal to the small-amplitude limit of it.

Corollary 1.3. Let 2 ≤ n ≤ 6. Assume that V satisfies the assumption in Theorem 1.9 or Theorem 1.10. In addition, suppose that V is a compact operator from L^2(R^n) to L^{2,1+δ}(R^n) with 2δ > n. Then for any ϕ^{(j)} ∈ S_0, j = 1, 2, · · · , N, and ε > 0, we have

Let ϕ_λ(x) = ϕ((λ + 1)x) and

Consider the integral equation of the first kind: (1.6)

Theorem 1.12. Let Γ ⊂ R be a compact set. Assume that 2 ≤ n ≤ 6. Then for any ϕ^{(j)} ∈ H^1(R^n), j = 1, · · · , N, the integral operator T_HF is a compact operator from H^k(R^n) to L^2(Γ) for k > n/2.
Similarly to the RH equation and the H equation, Picard's theorem allows us to obtain a reconstruction formula for V in terms of the singular system of T_HF.

Theorem 1.13. Let 2 ≤ n ≤ 6. Assume that V satisfies the assumption in Theorem 1.9 or Theorem 1.10. In addition, suppose that V is a compact operator from L^2(R^n) to L^{2,1+δ}(R^n) with 2δ > n. Then for any ϕ ∈ [S_0]^N, the function is an L^2-function on a compact set Γ ⊂ R. Moreover, letting {μ_n, φ_n, g_n}, n ∈ N, be a singular system of T_HF, the Fourier transform of the interaction potential is reconstructed by the formula:

Theorem 1.14. Let 2 ≤ n ≤ 6. Assume that V_♯, ♯ = 1, 2, satisfy the assumption in Theorem 1.11. Let S_♯ be the scattering operators for the HF equations.

The structure of this paper is as follows. Section 2 is devoted to the high-velocity analysis of the scattering operator; a time-space estimate on V_H(x, u)U_0(t)Φ_v plays an important role there. We give the proofs of Theorem 1.3, Theorem 1.7 and Theorem 1.12 in Section 3, where it is shown that the sets of functions {T_G f}, {T_H f} and {T_HF f} are equicontinuous and equibounded in the set of continuous functions C(Γ). Section 4 gives a proof of Theorem 1.14.
High velocity limit of the scattering operator
In this section, we analyze the asymptotic behavior of the scattering operators for the RH equation, the H equation and the HF equation. Due to the similarity of the proofs, we give a detailed proof only for the case of the RH equation.
Consider the RH equation (1.3). Let S be the scattering operator for (1.3) defined in Theorem 1.1. Our goal in this section is to prove the following theorem.
Preliminary lemmas
In order to prove Theorem 2.1, we need some lemmas.
Lemma 2.1. Let n ≥ 2 and s > 1. Assume that q is a compact operator from L^2(R^n) to L^{2,s}(R^n). Then for any ϕ ∈ S_0, there exists a positive constant C such that for |v| large enough.
Proof. The proof can be found in [2, Lemma 2.2] and its proof. We establish a similar estimate for the RH equation.
Lemma 2.2. Let n ≥ 3 and s > 1. Assume that V ∈ V is a radial, non-negative and non-increasing function. Suppose that V is a compact operator from L^2(R^n) to L^{2,s}(R^n). Then for any ϕ ∈ S_0, there exists a positive constant C such that

From this inequality and the identity V_H = V_F(|u|^2), one has

Then, applying Lemma 2.1 to the right-hand side of the above inequality yields the desired estimate.

Let u be the scattering solution to (1.3). Consider the wave operator

Lemma 2.3. Let n ≥ 3 and Φ_v = e^{iv·x}ϕ. Assume that V(x) satisfies the same condition as in Lemma 2.2. Then for any ϕ ∈ S_0, we have as |v| → ∞ uniformly in t ∈ R.
Proof. In view of the representation of the wave operator, one has

Then, Lemma 2.2 and a duality argument enable us to obtain

Here C is a positive constant independent of t. This completes the proof.
Lemma 2.4. Let n ≥ 2, δ > 0 and Φ v = e iv·x ϕ, v ∈ R n . Assume that V is a compact operator from L 2 (R n ) to L 2,1+δ (R n ). Then for any ϕ ∈ S 0 and y ∈ R n , there exist positive constants C 1 , C 2 and C 3 such that Proof. The proof of this lemma is almost the same as in [2]. We denote by L p (R; L q ) the set of L q -valued L p functions.
Proof of Theorem 2.1
We are now in a position to prove Theorem 2.1. Let F_RH(u) = (V * |u(t)|^2)u(t). We break the scattering operator into four parts:

Here u(t) is the scattering solution to (1.3).
To calculate the leading term L(v), we first observe that the identity holds. In fact, by using the identity and the change of variables s = t/|v|, x′ = x − vt and y′ = y − vt, we obtain

Plancherel's theorem implies that

Then, Fubini's theorem yields the expression of the leading term

Here we note that L(v) is bounded. In fact, thanks to Lemma 2.5, one gets

Next, we will show that R_3(v) = O(|v|^{−2}) as |v| → ∞. Thanks to Lemma
and Lemma 2.3, one has
due to the fact that u(t) = Ω_−(U_0(t)ϕ). We claim that R_j(v) = O(|v|^{−2}), j = 1, 2, as |v| → ∞. Due to the fact that we obtain

Thanks to Lemma 2.3 and Lemma 2.4, one has

Let us prove that R^{(1)}_1(v) ≤ C|v|^{−1} for some C > 0. Thanks to Lemma 2.3, it is easy to verify that

We will show that R_1(v) ≤ C|v|^{−1} for some C > 0. By using Lemma 2.4 and the estimate ‖U_0(t)Φ_v‖_{L^∞} ≤ ‖ϕ‖_{L^1}, one gets

Polar coordinates yield that where |S^{n−1}| is the surface area of the unit (n − 1)-sphere in n-dimensional Euclidean space. It is easy to verify that for n < 2(1 + δ). Then we have for n < 2(1 + δ), where C_3 is a positive constant depending only on n and δ. Therefore, one has for n/2 < δ, due to the change of variables 3|vt|/8 = s. We now conclude that |R_1(v)| ≤ C/|v|^2. Similarly, the remainder term R_2(v) is estimated as for n/2 < δ.
which proves the theorem.
Integral equations
In this section, we will show that the integral operators T_G, T_H and T_HF are compact. After giving a detailed proof for T_G in subsection 3.1, we sketch the proofs for T_H and T_HF in subsections 3.2 and 3.3, respectively.
Integral operator T G
Consider the integral operator T G : and ϕ λ (x) = ϕ((λ + 1)x). Due to the Sobolev embedding theorem H k (R n ) ֒→ L ∞ (R n ) for 2k > n, in order to prove Theorem 1.3, it suffices to verify that T G is a compact operator from L ∞ (R n ) to C(Γ).
Theorem 3.1. Let Γ ⊂ R be a compact set. Assume that 2 ≤ n ≤ 6. Then for any ϕ ∈ H 1 (R n ) the integral operator T G is a compact operator from L ∞ (R n ) to C(Γ).
Proof. Due to the fact that ||u|^2 − |v|^2| ≤ |u − v|^2 + 2(|u| + |v|)|u − v|, we have Thanks to Lemma 2.5, one has Similarly, for the function T_G^{(2)}, one gets Thus we obtain for λ ∈ Γ. This implies that {T_G f} is equicontinuous and equibounded uniformly for bounded ‖f‖_{L^∞}. It therefore follows from the Arzelà–Ascoli theorem that {T_G f} contains a Cauchy subsequence in C(Γ), which implies that the integral operator T_G is a compact operator from L^∞(R^n) to C(Γ). The proof is complete.
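The compactness criterion invoked here is the Arzelà–Ascoli theorem; in the form used above it reads:

```latex
\textbf{Theorem (Arzel\`a--Ascoli).}\ Let $\Gamma \subset \mathbb{R}$ be compact.
A family $\mathcal{F} \subset C(\Gamma)$ is relatively compact in the uniform
topology if and only if $\mathcal{F}$ is uniformly bounded and equicontinuous.
In particular, every bounded equicontinuous sequence in $C(\Gamma)$ has a
uniformly convergent (hence Cauchy) subsequence.
```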
Integral operator T H
Consider the integral operator T_H. Theorem 3.2. Let Γ ⊂ R be a compact set. Assume that 2 ≤ n ≤ 6. Then for any ϕ_j ∈ H^1(R^n), j = 1, …, N, the integral operator T_H is a compact operator from L^∞(R^n) to C(Γ).
Proof. It is clear that where The same technique as in the estimate on T_G applies for some c_j, c_k > 0. This estimate implies that for λ ∈ Γ. Due to the same argument as in subsection 3.1, the integral operator T_H is a compact operator from L^∞(R^n) to C(Γ). The proof is complete.
Integral operator T HF
Consider the integral operator T_HF. Theorem 3.3. Let Γ ⊂ R be a compact set. Assume that 2 ≤ n ≤ 6. Then for any ϕ_j ∈ H^1(R^n), j = 1, …, N, the integral operator T_HF is a compact operator from L^∞(R^n) to C(Γ).
Proof. We write
Due to the fact that |H^{(1)}|, where H^{(1)} and H^{(2)} are defined in the proof of Theorem 3.2, we have for some C > 0 and for λ ∈ Γ.
For H_{HF}^{(2)}, thanks to the inequality and Lemma 2.5, one gets for some C > 0 and λ ∈ Γ. In the same way as in the proof of Theorem 3.1, we also obtain for some C > 0 and λ ∈ Γ.
Consequently, we obtain for some C > 0 and λ ∈ Γ. Hence one gets for some C > 0 and λ ∈ Γ. This implies that the operator T HF is a compact operator from L ∞ (R n ) to C(Γ), due to the same argument as in the proof of Theorem 3.1. The proof is complete.
Uniqueness
In this section, we prove the following uniqueness theorem in the inverse scattering problem for the HF equation (1.2).
Proof of Theorem 4.1
We are now in a position to prove Theorem 4.1.
Then, thanks to Lemma 4.2, there exist ϕ^{(♭)} ∈ S_0, ♭ = j, k, such that the function Σ_{k=1}^{N} (∫ |G_2[ϕ^{(k)}, ϕ^{(j)}]|^2 dt)(ξ) is a non-negative function with support in B_δ(p). This and the positivity Re(w(ξ)) > 0 on B_δ(p) show that the integral of the left-hand side in the identity (4.2) never vanishes for such functions ϕ^{(♭)}. This contradicts the identity (4.2). Thus, we conclude that w ≡ 0 on R^n, which proves Theorem 4.1.
Overcoming evasive resistance from vascular endothelial growth factor A inhibition in sarcomas by genetic or pharmacologic targeting of hypoxia-inducible factor 1α
Increased levels of hypoxia and hypoxia-inducible factor 1α (HIF-1α) in human sarcomas correlate with tumor progression and radiation resistance. Prolonged antiangiogenic therapy of tumors not only delays tumor growth but may also increase hypoxia and HIF-1α activity. In our recent clinical trial, treatment with the vascular endothelial growth factor A (VEGF-A) antibody, bevacizumab, followed by a combination of bevacizumab and radiation led to near complete necrosis in nearly half of sarcomas. Gene Set Enrichment Analysis of microarrays from pretreatment biopsies found that the Gene Ontology category “Response to hypoxia” was upregulated in poor responders and that the hierarchical clustering based on 140 hypoxia-responsive genes reliably separated poor responders from good responders. The most commonly used chemotherapeutic drug for sarcomas, doxorubicin (Dox), was recently found to block HIF-1α binding to DNA at low metronomic doses. In four sarcoma cell lines, HIF-1α shRNA or Dox at low concentrations blocked HIF-1α induction of VEGF-A by 84–97% and carbonic anhydrase 9 by 83–93%. HT1080 sarcoma xenografts had increased hypoxia and/or HIF-1α activity with increasing tumor size and with anti-VEGF receptor antibody (DC101) treatment. Combining DC101 with HIF-1α shRNA or metronomic Dox had a synergistic effect in suppressing growth of HT1080 xenografts, at least in part via induction of tumor endothelial cell apoptosis. In conclusion, sarcomas respond to increased hypoxia by expressing HIF-1α target genes that may promote resistance to antiangiogenic and other therapies. HIF-1α inhibition blocks this evasive resistance and augments destruction of the tumor vasculature. What’s new? Despite their initial promise, anti-angiogenic therapies have been a disappointment in the clinic. One reason is that solid tumors often become resistant to these drugs. 
Tumors that respond poorly to this type of therapy have increased activation of the hypoxia-induced transcription factor HIF-1α which can enhance tumor survival and progression. In this study, the authors report that this evasive resistance can be overcome by adding low-dose doxorubicin or shRNA to inhibit HIF-1α activity. They are thus developing a clinical trial combining the angiogenesis inhibitor bevacizumab with metronomic doxorubicin in sarcoma patients.
Soft tissue sarcomas arise in nearly 10,000 persons in the United States each year, striking individuals of all ages (median age of 50 years), with roughly 40% of patients dying of either locoregional recurrence or distant metastasis. 1 The treatment of primary tumors usually includes surgery and radiation, and sometimes chemotherapy. Local recurrence after aggressive surgery alone can be as high as 33% for extremity tumors and 82% for retroperitoneal tumors. 2,3 Radiation therapy has been prospectively demonstrated to decrease local recurrence for extremity and truncal tumors, 2,4 and retrospective studies suggest that radiation therapy can reduce local recurrence for retroperitoneal and pelvic tumors. 5,6 Despite aggressive surgery and radiation, sarcomas adjacent to vital structures (e.g., major vessels and nerves) and all retroperitoneal and pelvic tumors still have a significant risk of local recurrence. Furthermore, up to 50% of patients with large, high-grade sarcomas develop distant metastases, most frequently to the lung. 7 The benefit of adjuvant chemotherapy in preventing local and distant recurrence is modest at best. 8 It is now well established that regions within solid tumors including sarcomas experience mild to severe hypoxia owing to aberrant vascular function. 9 The oxygen diffusion limit from blood vessels is about 145 µm, and thus, new tumor vasculature or co-option of existing vessels is required for tumors to grow beyond a microscopic size. 10 As tumors expand, there exists a critical balance between tumor angiogenesis, or new blood vessel formation, and hypoxia. Tumor cells respond to hypoxic stress through multiple mechanisms, including stabilization of hypoxia-inducible factor 1a (HIF-1a). 11 Stabilized HIF-a is then transported to the nucleus, where it forms a dimer with the constitutively expressed aryl hydrocarbon receptor nuclear translocator (ARNT) subunit.
HIF dimers then bind hypoxia-responsive element DNA sequences and consequently activate expression of at least 150 genes whose products orchestrate adaptive responses including those mediating tumor angiogenesis [e.g., vascular endothelial growth factor A (VEGF-A)], 9 invasion (e.g., c-Met), 12 cellular metabolism [e.g., carbonic anhydrase 9 (CA9)], 13 and metastasis (e.g., FOXM1). 14,15 Agents targeting HIF-1a are in various stages of clinical development, 16 and the most commonly used chemotherapeutic drug for sarcomas, doxorubicin (Dox), was recently found to block HIF-1a binding to DNA at low metronomic doses. 17 VEGF-A is likely the most important factor driving tumor angiogenesis. 18 We have previously shown that HIF-1a upregulates the expression of VEGF-A in sarcomas, 19 and circulating levels of VEGF-A are elevated on average tenfold in patients with sarcoma compared to controls. 20 The expression of VEGF-A in sarcomas correlates with extent of disease and survival. 21 Inhibition of VEGF-A or its receptors can effectively suppress tumor angiogenesis in mouse sarcoma models, 19,22 and numerous anti-VEGF agents are in various phases of clinical trials or approved for patients with cancer. 23 The effect of VEGF-A inhibition on intratumoral hypoxia and HIF-1a activity may vary between different tumors. Tumor blood vessels are immature, dilated, tortuous and highly permeable with erratic flow, 24,25 and many of these abnormalities can be attributed to the overexpression of VEGF-A. 26 These characteristics lead to areas of hypoxia in tumors. Administration of anti-VEGF agents can result in reduced vessel irregularity, diameter and permeability and can transiently improve the delivery of oxygen. 27 However, sustained anti-VEGF therapy can ultimately lead to loss of tumor vessels and increased hypoxia. 28 There has been recent significant controversy on the effects of VEGF inhibition on primary tumor invasiveness and metastatic potential.
29 Casanovas and coworkers 30,31 found that VEGFR-2 inhibition of RIP1-Tag2 mouse pancreatic endocrine tumors led to increased intratumoral hypoxia along with increased tumor invasiveness and liver metastases, and Ebos et al. 32 found that sunitinib (which targets VEGF and other pathways) increased liver and lung metastases for both experimental and spontaneous metastases. This conflicts with other preclinical studies showing inhibition of metastases with VEGF inhibitors 33 as well as clinical studies demonstrating that bevacizumab as a single-agent therapy can prolong patient survival against metastatic renal cell cancer and other cancers. 34,35 The effects of VEGF inhibition in primary sarcomas on hypoxia, HIF-1a activity and HIF-related phenotypes such as tumor progression, metastasis and radiation response are currently unknown.
We recently completed a Phase II clinical trial of neoadjuvant bevacizumab and radiation therapy for patients with resectable soft tissue sarcomas. 36 Twenty patients with intermediate- or high-grade soft tissue sarcomas of ≥5 cm in size received bevacizumab (an anti-VEGF-A antibody) for 2 weeks followed by 6 weeks of bevacizumab combined with radiation therapy (50 Gy). Tumor tissue samples were obtained before treatment and 10 days after the start of bevacizumab. Bevacizumab and radiation resulted in a good response (defined as ≥80% pathologic necrosis) in nine of 20 tumors (45%), which is over double the historical response rate seen with radiation alone. High initial microvessel density (MVD; s = 0.53, p = 0.0031) and decrease in MVD after bevacizumab alone (s = 0.43, p = 0.0154) significantly correlated with a good response to the combination of bevacizumab and radiation. As part of this clinical trial, gene expression microarray data were obtained on tumor samples prior to the start of treatment. Tumors with a good response versus poor response to combination therapy with bevacizumab and radiation were distinguished by a 24-gene signature that included PLAUR (plasminogen activator, urokinase receptor), a gene which is transcriptionally regulated by HIF-1a. 36 In our study, further analysis of gene expression microarrays from this clinical trial suggested that a strong HIF-1a transcriptional program in sarcomas may contribute to treatment resistance and progression. Thus, we analyzed anti-VEGF treatment and HIF-1a inhibition in sarcoma cell lines in vitro as well as in a sarcoma mouse model and demonstrated the therapeutic potential of this novel strategy.
Microarray analysis
Tumor samples were obtained from a Phase II clinical trial of neoadjuvant bevacizumab and radiation therapy for resectable soft tissue sarcomas as previously described. 36 RNA was isolated from tumor tissue using the Qiagen RNeasy kit (Qiagen, Valencia, CA). RNA quality was assessed using a 2100 Bioanalyzer.
The supervised hierarchical clustering of 140 genes transcriptionally regulated by HIF-1a was performed using 1 − r (Pearson's correlation) as a distance metric with complete linkage. Gene Set Enrichment Analysis (GSEA) was used to identify the Gene Ontology (GO) functional categories with significantly different expression between good and poor responders. 37 GO categories were obtained from MSigDB (c5 GO category; http://www.broadinstitute.org/gsea/msigdb/index.jsp). The significance of enrichment was measured by phenotypic label permutation. Microarray data have been uploaded to Gene Expression Omnibus (GEO) (GEO submission #GSE31715).
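The clustering step described above can be reproduced with SciPy, whose "correlation" distance is exactly 1 − Pearson's r; the toy data below are illustrative, not from the study:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def cluster_samples(expr, n_clusters=2):
    """Hierarchically cluster samples (rows) of an expression matrix
    using 1 - Pearson correlation as the distance metric with complete
    linkage, mirroring the procedure described in the text."""
    dist = pdist(expr, metric="correlation")  # 1 - Pearson's r
    tree = linkage(dist, method="complete")
    return fcluster(tree, t=n_clusters, criterion="maxclust")

# Two synthetic groups of samples with opposite expression patterns
rng = np.random.default_rng(0)
base = rng.normal(size=10)
group_a = np.stack([base + rng.normal(scale=0.1, size=10) for _ in range(4)])
group_b = np.stack([-base + rng.normal(scale=0.1, size=10) for _ in range(4)])
labels = cluster_samples(np.vstack([group_a, group_b]))
```

With strongly anti-correlated groups, the correlation distance within each group is near 0 and across groups near 2, so a two-cluster cut separates them cleanly.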
Cell lines
MS4515 mouse pleomorphic undifferentiated sarcoma cells and MS5907 mouse pleomorphic undifferentiated sarcoma cells were derived from genetically engineered mouse models of sarcoma (LSL-Kras G12D/+ /Trp53 fl/fl and LSL-Kras G12D/+ ;Ink4A/Arf fl/fl ), which we have previously described. 22,38 HT1080 human fibrosarcoma cells, SKLMS-1 human leiomyosarcoma cells and DC101 hybridoma cells were obtained from the American Type Culture Collection (ATCC). Human umbilical vein endothelial cells (HUVECs) and human dermal microvascular endothelial cells (HDMECs) were obtained from Lonza (Basel, Switzerland). All endothelial cells were used within eight passages. Cancer cell lines were actively passaged for less than 6 months from the time that they were received from ATCC or the NCI Tumor Repository, and the UKCCCR guidelines were followed. 39 All human and mouse sarcoma cell lines were maintained in Dulbecco modified Eagle medium supplemented with 10% fetal bovine serum, 100 U/ml penicillin, 100 µg/ml streptomycin and 2 mM L-glutamine. All endothelial cells were grown in EGM-2-MV media (Lonza). DC101 antibody was produced from DC101 hybridoma cells using the BD CELLine 1000 system (BD Biosciences, San Jose, CA) following the manufacturer's instructions or purchased from BioXCell (West Lebanon, NH). Dox was purchased from Teva Pharmaceuticals (Petah Tikva, Israel).
In vitro assays
Cell proliferation and migration were determined as previously described. 40 In brief, to determine cell proliferation, equal numbers of cells were plated in 24-well plates and incubated for 16 hr under normoxia (21% O2) or hypoxia (0.5% O2) under the specified conditions. Cell number was then determined using a thiazolyl blue tetrazolium bromide (MTT; Sigma, St. Louis, MO) assay, with optical density read at 550 nm with a reference wavelength of 650 nm. To determine cell migration, equal numbers of cells were placed in a modified Boyden chamber under normoxia or hypoxia under the specified conditions for 4-18 hr. Nonmotile cells were removed from the top of the chamber insert using a cotton swab. Cells were then washed with PBS, fixed in methanol, permeabilized with 0.1% Triton-X 100 (Sigma) and stained with DAPI (Invitrogen, Carlsbad, CA). Cells were imaged using an inverted Olympus IX81 fluorescence microscope using Slidemaker software (Geeknet, Fairfax, VA). Cells were counted using ImageJ software (NIH).
Quantitative RT-PCR
For analysis of mRNA expression, cells were incubated in 21% or 0.5% oxygen for 24 hr. RNA was isolated from cell lines using Trizol (Invitrogen) following the manufacturer's instructions. Total RNA was isolated from tumor tissue preserved in RNA Later using RNeasy Mini Kit (Qiagen) following the manufacturer's instruction. RNA concentration was determined by Nanodrop 1000 (Thermoscientific, West Palm Beach, FL). cDNA was synthesized using the Superscript First-Strand Synthesis System (Invitrogen) with random hexamers. Quantitative real-time PCR analysis was performed using the 7900HT Fast Real-Time PCR System (Applied Biosystems, Foster City, CA) using 100 ng of cDNA product and Syber Green PCR Master mix (Applied Biosystems), per manufacturer's instructions. Primers for human genes were as follows: Primers for mouse genes were as follows:
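Relative mRNA levels such as those reported in Results are typically computed from the measured cycle-threshold (Ct) values. A minimal sketch, assuming the standard 2^−ΔΔCt method (the text does not specify its quantification scheme, so function names and Ct values here are illustrative only):

```python
def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Fold change by the 2^-ddCt method: normalize the target gene's Ct
    to a reference gene within each sample, then compare the treated
    sample to the control sample. Illustrative only; the paper does not
    state which quantification scheme was used."""
    d_ct_sample = ct_target - ct_reference        # delta-Ct, treated sample
    d_ct_control = ct_target_ctrl - ct_reference_ctrl  # delta-Ct, control
    return 2.0 ** -(d_ct_sample - d_ct_control)

# ddCt = (22 - 18) - (25 - 18) = -3, giving an 8-fold upregulation
fold = relative_expression(22.0, 18.0, 25.0, 18.0)
```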
Enzyme-linked immunosorbent assays
For analysis of secreted VEGF-A protein, conditioned media was collected after 24-hr incubation. Secreted human and mouse VEGF-A levels were measured using the following commercially available enzyme-linked immunosorbent assay (ELISA) kits: Human VEGF-A Duoset and mouse VEGF-A Duoset (all from R&D Systems, Minneapolis, MN). Manufacturer's protocols were followed, and samples were measured in duplicate. Mean values were used as the final concentration. ELISA plates were read using the Emax Precision Microplate Reader (Molecular Devices, Sunnyvale, CA). Cell population was measured by MTT assay. Optical density values from the ELISA were divided by the optical density from the MTT assay to normalize for cell number.
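The normalization described above (duplicate ELISA readings averaged, then divided by the MTT optical density) can be sketched as follows; function and variable names are illustrative, not from the paper:

```python
def normalized_secretion(elisa_od_duplicates, mtt_od):
    """Average duplicate ELISA optical-density readings and divide by
    the MTT optical density to normalize secreted protein for cell
    number, mirroring the procedure described in the text."""
    mean_od = sum(elisa_od_duplicates) / len(elisa_od_duplicates)
    return mean_od / mtt_od

# Example: duplicates 0.42 and 0.46, MTT OD 0.8 -> 0.44 / 0.8
ratio = normalized_secretion([0.42, 0.46], mtt_od=0.8)
```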
Hypoxyprobe and necrosis
Hypoxia in tumors was measured using the Hypoxyprobe™-1 kit (HPI, Burlington, MA) following the manufacturer's instructions. Standard hematoxylin and eosin (H&E) staining was also performed on tissue sections. Images from each section were stitched together in Adobe Photoshop to create a large scan image of the whole section. Areas of necrosis (based on H&E sections) and areas of hypoxia (based on Hypoxyprobe™-1 staining) were quantified using ImageJ software.
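The ImageJ-based quantification amounts to computing the fraction of a section's area flagged as necrotic or hypoxic. A minimal stand-in, assuming a simple intensity threshold on the stitched grayscale image (threshold and values are illustrative):

```python
import numpy as np

def area_fraction(image, threshold):
    """Fraction of pixels above an intensity threshold: a simple proxy
    for the ImageJ quantification of necrotic (H&E) or hypoxic
    (Hypoxyprobe) area described in the text."""
    mask = np.asarray(image) > threshold
    return mask.mean()

# Toy 3x3 "section": 3 of 9 pixels exceed the threshold
section = np.array([[0, 200, 220],
                    [0,   0, 210],
                    [0,   0,   0]])
frac = area_fraction(section, threshold=128)
```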
CD31 immunohistochemical localization and analysis of MVD were performed as previously described. 42 For detection of EC apoptosis, TUNEL assay and CD31 immunofluorescence were performed. Paraffin-embedded sections were deparaffinized and incubated with goat anti-CD31 antibody (1:100, SC-1506; Santa Cruz Biotechnology) overnight at 4 °C. Following washing, sections were incubated with rabbit anti-goat Alexa 594-conjugated secondary antibody (1:500, A11080; Molecular Probes, Carlsbad, CA) for 1 hr at room temperature. Then, sections were treated with the DeadEnd Fluorometric TUNEL system (Promega, Madison, WI) to detect apoptosis, following the manufacturer's instructions. Subsequently, sections were stained with DAPI (0.2 µg/ml) for 3 min. Images were obtained on a Zeiss microscope and analyzed using AxioVision 4.0 software (Carl Zeiss, Thornwood, NY).
Mouse studies
All mouse protocols were approved by the Massachusetts General Hospital Subcommittee on Research Animal Care. To generate subcutaneous flank tumors, 10^6 HT1080 cells were resuspended in 100 µl of Hank's balanced salt solution and injected subcutaneously into the right flank of athymic nude mice following xylazine/ketamine anesthesia. Six mice were used for each group. Tumors were measured three times per week for a maximum of 3 weeks, and tumor volume (TV) was calculated by using the following formula: TV = length × (width)² × 0.52. Treatment began when the TVs were ~50 mm³, and DC101 (400 µg per mouse), isotype control IgG1 (40 µg per mouse) and/or Dox (1.0 mg/kg) were injected intraperitoneally three times a week.
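The caliper-based volume formula above can be written directly as:

```python
def tumor_volume(length_mm, width_mm):
    """Tumor volume from caliper measurements using the formula given
    in the text: TV = length x width^2 x 0.52, in mm^3."""
    return length_mm * width_mm ** 2 * 0.52

# A 10 mm x 5 mm tumor: 10 * 25 * 0.52 = 130 mm^3
tv = tumor_volume(10.0, 5.0)
```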
Statistical analysis
Groups were compared using GraphPad InStat 3.10 software. p values were calculated using Student's t-test. For comparisons between more than two groups, treatment groups were compared to the control group using one-way ANOVA with Bonferroni adjustment for multiple comparisons. p values < 0.05 were considered significant.
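A sketch of this analysis using SciPy instead of GraphPad; the Bonferroni step shown (multiplying each pairwise p value by the number of comparisons, capped at 1) is one common implementation, and the data are synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(100, 10, size=6)   # synthetic tumor measurements
treat_a = rng.normal(70, 10, size=6)
treat_b = rng.normal(65, 10, size=6)

# Two-group comparison: Student's t-test
t_stat, p_two = stats.ttest_ind(control, treat_a)

# More than two groups: one-way ANOVA, then pairwise t-tests vs. the
# control group with a Bonferroni correction
f_stat, p_anova = stats.f_oneway(control, treat_a, treat_b)
comparisons = [treat_a, treat_b]
p_adj = [min(1.0, stats.ttest_ind(control, g).pvalue * len(comparisons))
         for g in comparisons]
```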
Results
From the Phase II clinical trial of neoadjuvant bevacizumab and radiation for sarcoma described above, we had gene expression microarray data from tumors prior to the start of treatment. 36 We explored the differential gene expression between tumors with a subsequent good pathological response (≥80% necrosis) versus poor pathological response (<80% necrosis) to bevacizumab and radiation using GSEA of 1,454 GO categories. Overall, 14 and 18 GO categories showed significant (false discovery rate < 0.1) upregulation in poor and good responders, respectively (Supporting Information Table 1). The GO category "Response to hypoxia," which contains 28 genes, was upregulated in tumors with a poor response to bevacizumab and radiation (nominal p value = 0.08; Fig. 1a). One of the genes that most highly contributed to the identification of the enrichment of this GO category set was HIF-1a. Thus, we performed a supervised hierarchical clustering analysis of 140 genes transcriptionally regulated by HIF-1a (Supporting Information Table 2) and found that these genes reliably clustered tumors into those with a good versus poor response to bevacizumab and radiation (Fig. 1b). Paired biopsy specimens taken before and after the start of bevacizumab were available for seven patients, five of which had a good response to treatment and two of which had a poor response. In the two tumors that had a poor response to bevacizumab and radiation, quantitative RT-PCR for VEGF-A, a HIF-1a target gene, demonstrated significant upregulation (Fig. 1c). These data suggest that a strong HIF-1a-mediated transcriptional program in sarcomas may contribute to treatment resistance and tumor progression. This analysis led to the underlying hypothesis for our study: VEGF-A and HIF-1a play critical and interdependent roles in regulating sarcoma progression.
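The GSEA enrichment score underlying this analysis is a running-sum statistic over the ranked gene list. A simplified, unweighted sketch (real GSEA weights each step by the gene's correlation with the phenotype, and significance comes from label permutation):

```python
def enrichment_score(ranked_genes, gene_set):
    """Simplified Kolmogorov-Smirnov-style GSEA running sum: walk down
    the ranked list, stepping up at gene-set members and down at
    non-members; the enrichment score is the maximum deviation from
    zero. Unweighted, for illustration only."""
    n = len(ranked_genes)
    hits = sum(g in gene_set for g in ranked_genes)
    misses = n - hits
    up, down = 1.0 / hits, 1.0 / misses
    running, best = 0.0, 0.0
    for g in ranked_genes:
        running += up if g in gene_set else -down
        if abs(running) > abs(best):
            best = running
    return best

# A gene set concentrated at the top of the ranking gives ES = 1.0
es = enrichment_score(["a", "b", "c", "d", "e", "f"], {"a", "b"})
```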
To better determine the role of HIF-1a in various sarcomas, we next examined HIF-1a and HIF-1a target genes in four sarcoma cell lines, two of human origin (HT1080 human fibrosarcoma and SKLMS human leiomyosarcoma) and two of mouse origin (MS4515 and MS5907). HIF-1a levels were upregulated in all four cell lines in response to 0.5% O2 (Fig. 2a).

Figure 1. (a) Genes (x-axis) are ordered by their t-statistics comparing poor and good responders. The upregulated and downregulated genes in poor responders are placed on the left and right, respectively. The enrichment score (y-axis) is a cumulative sum reflecting the degree of over-representation for the genes in this category compared to the rest of the genes; a high enrichment score indicates the presence of hypoxia-related genes among the genes that are significantly different in the good/poor responder phenotype. The locations of the 28 hypoxia-related genes are shown in the middle bar, with the gene symbols for the 11 genes that contribute to the maximum value of the enrichment score shown in gray. The relative expression levels of the 11 genes are also shown in a heat map (bottom). Red and blue represent the relative upregulation and downregulation, respectively, compared to its average expression. (b) Supervised hierarchical clustering analysis of 140 hypoxia-responsive genes. Poor and good responders are indicated by red and green, respectively. This analysis demonstrates that the vast majority of good and poor responders can be differentiated based on the expression of hypoxia-related genes. Dendrogram is shown at top. (c) Relative mRNA levels of VEGF-A in sarcomas before and after bevacizumab (BV) treatment. Relative value is in relation to the lowest level of expression, which was assigned a value of 1. Poor and good responders are indicated by red and green boxes, respectively. Bars represent standard deviation. *p < 0.05 compared to pretreatment level.
We examined the levels of HIF-1a target genes associated with angiogenesis (VEGF-A), metabolism (CA9) and metastasis (FOXM1) by qRT-PCR (Fig. 2b). VEGF-A was upregulated in all four cell lines under hypoxia by 1.7- to 2.4-fold. In the human sarcoma cell lines, CA9 was upregulated 18- to 36-fold, whereas FOXM1 remained relatively unchanged or slightly decreased. In the mouse sarcoma cell lines, CA9 was upregulated 24- to 49-fold, whereas FOXM1 levels again remained relatively unchanged or decreased slightly. We confirmed changes in VEGF-A at the protein level by measuring the secretion of VEGF-A from these cell lines under hypoxia (Fig. 2c). VEGF-A protein secretion increased 2.1- to 5.7-fold in all cell lines following exposure to hypoxia. Of note, the differences in upregulation of VEGF-A mRNA versus VEGF-A protein under hypoxic conditions may be related to inherent differences in the cell lines in their response to hypoxia, differences in translation of VEGF-A mRNA or differences in secretion or degradation of VEGF-A protein. Thus, sarcoma cell lines respond to hypoxic stress by upregulating the expression of certain HIF-1a target genes.
We examined our sarcoma cell lines for the expression of VEGF receptors 1 and 2 (VEGFR-1 and VEGFR-2) and found little or no expression in any of these cell lines (data not shown). Using HT1080 fibrosarcoma flank tumor xenografts generated in athymic nude mice, we examined the levels of hypoxia and HIF-1a in control tumors and the tumors treated with DC101, an anti-VEGFR-2 antibody. Tumor-bearing mice were treated with DC101 or control IgG once they reached 50 mm³ in size. Control tumors took ~12 days to reach 1,000 mm³ (data not shown). When analyzed for the levels of hypoxia using Hypoxyprobe immunohistochemistry, hypoxia levels were found to increase in control tumors as tumors increased from 200 to 500 mm³, with no further increase in hypoxia as tumors grew beyond 500 mm³ (Fig. 3a). HT1080 xenografts treated with DC101 showed delayed tumor growth, with tumors taking about 5-7 days longer to reach 1,000 mm³ in size. The levels of hypoxia in HT1080 tumors were not significantly affected by DC101 after controlling for tumor size (i.e., comparing similar-sized tumors in each group). The levels of nuclear HIF-1a correlated with the levels of intratumoral hypoxia, with levels increasing as tumors increased in size from 200 to 500 mm³ and then remaining stable (Fig. 3b). Of note, DC101 treatment did increase nuclear localization of HIF-1a in small tumors (200-300 mm³; Figs. 3c and 3d). This would suggest that there is no direct correlation between the levels of hypoxia as measured by Hypoxyprobe staining and the stabilization of HIF-1a protein, with the latter possibly being a more sensitive measure of hypoxia. Alternatively, DC101 may have effects on HIF-1a nuclear localization, which are independent of hypoxia levels.
To determine the role of HIF-1a in sarcoma cell proliferation and migration in vitro and tumor growth in vivo, we used shRNA knockdown of HIF-1a in HT1080 sarcoma cells that showed strong upregulation of HIF-1a in response to 0.5% hypoxia (Fig. 4a). This upregulation of HIF-1a was effectively blocked by shRNA knockdown of HIF-1a. shRNA knockdown in HT1080 cells inhibited proliferation under normoxic and hypoxic conditions (Fig. 4b) and reduced the migration of these cells under hypoxic conditions (Fig. 4c). Following HIF-1a knockdown in MS5907 sarcoma cell lines, we found no effect on proliferation; however, we found decreased migration under hypoxic conditions (Supporting Information Figs. S1A-S1C). HT1080 cells with stable knockdown of HIF-1a and control HT1080 cells were then grown as flank xenografts with or without treatment with DC101. The knockdown of HIF-1a or DC101 treatment inhibited tumor growth; however, the combination of HIF-1a knockdown and DC101 caused the greatest degree of growth inhibition (Fig. 4d).

Figure 4. (a) Western blot analysis of HIF-1a in HT1080 cells in 21% oxygen (normoxia) and 0.5% oxygen (hypoxia) following treatment with HIF-1a shRNA or scrambled (Scr) control shRNA. GAPDH blot serves as loading control. Proliferation (b) and migration (c) of HT1080 cells after transduction with HIF-1a shRNA or scrambled (Scr) shRNA. (d) Growth of HT1080 cells transduced with HIF-1a shRNA or scrambled (Scr) shRNA following subcutaneous injection in athymic nude mice. Some groups were treated with DC101; IgG was used as antibody treatment control. Metronomic doxorubicin (Dox) was given to one group. (e) Relative mRNA levels of CA9 in HT1080 tumor groups. Bars represent standard deviation. *p < 0.05 compared to control group; ns, not significant (p > 0.05); **p < 0.05 compared to Scr shRNA + IgG control group, HIF-1a shRNA + IgG group and Scr shRNA + DC101 group.
We assessed the levels of one HIF-1a target gene, CA9, in treated HT1080 tumors by qRT-PCR and found that HIF-1a knockdown did indeed repress CA9 levels (Fig. 4e).
Lee et al. 17 found after screening 3,120 drugs from the Johns Hopkins Drug Library that Dox is a potent inhibitor of HIF-1a, acting by blocking HIF-1a binding to DNA. Dox is the most commonly used chemotherapeutic agent for soft tissue sarcomas, 17 and thus, the use of this agent to block HIF-1a induction of target genes in sarcomas makes therapeutic sense. Dox blocked proliferation of all four sarcoma cell lines and two types of endothelial cells at IC50 concentrations of 0.005 to 0.1 µM, with endothelial cells generally being more sensitive to Dox than cancer cell lines (data not shown). As noted earlier for the four sarcoma cell lines, VEGF-A and CA9 were upregulated in hypoxia. For HT1080 cells, CA9 showed the most hypoxic upregulation of the four genes examined, and low-dose Dox potently blocked this hypoxic upregulation (Fig. 5a). Low-dose Dox was also able to decrease VEGF-A secretion from all four sarcoma cell lines following hypoxia (Fig. 5b). In HT1080 cells, we compared the ability of HIF-1a shRNA and Dox to block induction of VEGF under hypoxic conditions and found that HIF-1a shRNA and Dox blocked VEGF secretion equally (Supporting Information Fig. S2). Combining HIF-1a shRNA with Dox had no additive effect (Supporting Information Fig. S2). We next examined the effects of DC101 and/or metronomic Dox on HT1080 xenografts. DC101 or metronomic Dox alone inhibited tumor growth by 30-31%, and the combination of DC101 and metronomic Dox inhibited tumor growth by 67% (Fig. 5c). Tumors were harvested at the end of treatment and examined for the expression of CA9. As seen in our in vitro studies, HT1080 xenografts exposed to metronomic Dox had significantly lower expression of CA9 (Fig. 5d), and only combination therapy induced significant tumor necrosis (Fig. 5e).
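IC50 values like those reported here are obtained by fitting a dose-response curve to the MTT proliferation data. A sketch with a two-parameter logistic model; the concentrations and the assumed IC50 below are synthetic, illustrative values, not measurements from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, log10_ic50, hill_slope):
    """Fraction of MTT signal remaining at a given drug concentration
    (uM). Parameterizing the IC50 on a log scale keeps it positive
    throughout the fit."""
    return 1.0 / (1.0 + (conc / 10.0 ** log10_ic50) ** hill_slope)

# Noise-free synthetic data generated around an assumed IC50 of 0.02 uM
conc = np.array([0.001, 0.005, 0.01, 0.02, 0.05, 0.1, 0.5])
viability = hill(conc, np.log10(0.02), 1.5)

(log_ic50_fit, slope_fit), _ = curve_fit(hill, conc, viability, p0=[-2.0, 1.0])
ic50_fit = 10.0 ** log_ic50_fit
```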
We next explored the effects of DC101 along with HIF-1a inhibition (either with HIF-1a shRNA or metronomic Dox) on tumor vasculature. First, given that metronomic Dox likely has off-target effects beyond HIF-1a inhibition, we examined whether adding metronomic Dox to DC101 and HIF-1a knockdown increased HT1080 xenograft tumor growth delay and found that there was no significant additional effect (Fig. 4d). Second, we examined MVD and found that DC101 reduced MVD by 42%, HIF-1a shRNA or metronomic Dox by 25-27% and the combination of HIF-1a shRNA or Dox and DC101 by 71-72% (Fig. 6a). Overall apoptosis in tumors was increased by 2.3- to 4.5-fold by treatment with DC101 and/or HIF-1a inhibition (Fig. 6b), whereas endothelial cell-specific apoptosis was increased at least twofold only with the combination of DC101 and HIF-1a inhibition (Fig. 6c). Of note, the addition of Dox to HIF-1a knockdown did not add any further effect on tumor growth inhibition and also did not further decrease MVD or increase endothelial cell-specific apoptosis.
Given that the combined VEGF and HIF-1a inhibition appeared to have a synergistic effect on tumor vasculature, we further examined the effect of VEGF-A deprivation plus HIF-1a inhibition on endothelial cells in vitro. The removal of VEGF-A from the cell culture media combined with HIF-1a knockdown inhibited the proliferation of HUVECs under both normoxic and hypoxic conditions (Supporting Information Fig. S3A). Similar results were obtained when this experiment was performed with HDMECs (data not shown). We substituted HIF-1a genetic inhibition (i.e., HIF-1a shRNA) with pharmacologic inhibition using low-dose Dox in HDMECs and found a similar synergistic attenuation of proliferation when VEGF-A withdrawal was combined with Dox (Supporting Information Fig. S3B).
Discussion
Our study was initiated following the examination of correlative science studies from a Phase II clinical trial of bevacizumab and radiation therapy for sarcomas. In this clinical trial, the addition of VEGF-A inhibition to radiation significantly increased the proportion of tumors with a good response to radiation therapy to nearly 50%. Studies of tumor tissue obtained before treatment and during treatment allowed us to ask why the other 50% of tumors were resistant to therapy. In our study, analysis of gene expression microarrays suggested that high expression of HIF-1a and HIF-1a target genes contributed to resistance to the combination of bevacizumab and radiation. Thus, we examined the role of hypoxia and HIF-1a in sarcoma progression as well as the combination of VEGF-A and HIF-1a inhibition in sarcomas. We found that four different sarcoma cell lines upregulate HIF-1a and specific HIF-1a target genes under hypoxic conditions. Genetic deletion of HIF-1a using shRNA or pharmacologic blockade of HIF-1a binding to target DNA using low-dose Dox acts synergistically with VEGF inhibition to suppress the growth of sarcoma xenografts. Our analysis of treated tumors reveals that one mechanism for this effect is the induction of tumor endothelial cell apoptosis. These findings have significant implications for the future treatment of sarcomas with antiangiogenic therapies.
When antiangiogenic therapies were initially proposed for inhibiting solid tumors, it was thought that such therapies would be less susceptible to resistance given the target was genetically stable tumor endothelial cells as opposed to genetically unstable cancer cells. Now after several years of antiangiogenic therapies being used in patients with solid tumors, oncologists have found that antiangiogenic therapies generally result only in a transitory inhibition or delay in tumor growth with ultimate regrowth of tumors. Mechanisms of resistance to antiangiogenic therapies include upregulation of alternative proangiogenic signals, protection of the tumor vasculature either by recruiting proangiogenic inflammatory cells or by increasing protective pericyte coverage, accentuated invasiveness of tumor cells into local tissue to co-opt normal vasculature and increased metastatic seeding and tumor cell growth in lymph nodes and distant organs. 43 Hypoxia and HIF-1a are known to play important roles in human sarcomas. We have previously shown that the expression of HIF-1a and 25 other hypoxia-related genes in human sarcomas is highly upregulated compared to normal tissues, 19 and hypoxia in human sarcomas is associated with a higher risk of recurrence and decreased overall survival. 44,45 HIF-1a upregulates the expression of VEGF-A in sarcomas, 19 and circulating levels of VEGF-A are elevated on average tenfold in patients with sarcoma compared to controls. 20 In our study, we confirmed upregulation of HIF-1a under hypoxic conditions in four different sarcoma cell lines. The hypoxia response in terms of target genes was fairly similar between these sarcoma cell lines with uniform upregulation of VEGF-A and CA9.
The effect of VEGF inhibition on intratumoral hypoxia and HIF-1a activity may vary depending on the specific tumor type. The administration of anti-VEGF agents can result in reduced vessel irregularity, diameter and permeability and can transiently improve the delivery of oxygen. 27 However, sustained anti-VEGF therapy can ultimately lead to loss of tumor vessels and increased hypoxia. 28 In our study, we found that anti-VEGFR-2 therapy with DC101 did not significantly change intratumoral hypoxia when comparing similar-sized tumors; however, DC101 did appear to increase nuclear localization of HIF-1a in small tumors (200-300 mm³).
One mechanism by which VEGF inhibition and HIF-1a inhibition synergistically inhibit sarcoma progression is via the targeting of the tumor endothelium. Thus, VEGF inhibition likely only inhibits the vascular compartment of tumors and has little or no effect on the cancer cell compartment. HIF-1a, in contrast, is expressed by both endothelial cells and sarcoma cells and may have effects on both cell types. We analyzed these compartments separately both in vitro and in vivo and found that synergistic effects from combined VEGF inhibition and HIF-1a inhibition on proliferation and apoptosis were only found in the vascular compartment.
Lee et al. 17 screened more than 3,000 drugs from the Johns Hopkins Drug Library and found that Dox at low doses can block HIF-1a binding to DNA. In looking for a clinically applicable inhibitor of HIF-1a for sarcomas, Dox is a very good candidate, given that Dox is already the most commonly used chemotherapeutic drug for sarcomas. 46 However, the traditional means of delivering Dox, that is, at maximum tolerated doses, focuses on maximizing cancer cell cytotoxicity rather than maximizing HIF-1a blockade. The delivery of Dox at low, continuous doses (i.e., metronomic doses) maximizes HIF-1a blockade, 17 and this approach has been used in other patients with solid tumors. The administration of metronomic Dox to heavily pretreated patients with metastatic breast cancer resulted in a partial response in 18% of patients and stable disease in 3%. 47 We found in our study that metronomic Dox can block HIF-1a-mediated upregulation of target genes, and when combined with anti-VEGF therapy has a synergistic effect in blocking sarcoma tumor growth.
There are several limitations to our study. One potential criticism is that metronomic Dox is used as an inhibitor of HIF-1a rather than a more specific inhibitor of HIF-1a. Dox is a topoisomerase II inhibitor and a commonly used chemotherapeutic agent. Mechanistically, we confirmed the synergistic effects of HIF-1a and VEGF inhibition on tumor growth and on tumor endothelium by specifically knocking down HIF-1a using shRNA. There are several HIF-1a inhibitors that are in various phases of clinical development; however, none are approved for clinical use. 48 Although Dox clearly has effects other than inhibiting HIF-1a, the primary advantage of using metronomic Dox in our study is that Dox is already clinically approved for the treatment of sarcomas and that findings from our study can be immediately translated into clinical trials combining metronomic Dox with VEGF inhibitors such as bevacizumab.
In conclusion, sarcomas are a heterogeneous group of solid tumors in which hypoxia and HIF-1a activity play varying roles. A strong HIF-1a transcriptional program may be a means of resistance for some sarcomas to anti-VEGF therapy. In addition, adding HIF-1a inhibition to VEGF inhibition augments destruction of the tumor vasculature in sarcomas, and this strategy should be investigated in clinical trials.
"year": 2012,
"sha1": "8b6f8d6a57e3e8eaa060a031b5832bec811a8763",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ijc.27666",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "60f73e8345d20a327a9b3051f6dad6b190f3cab2",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Influence of core thickness and artificial aging on the biaxial flexural strength of different all-ceramic materials: An in-vitro study
The purpose of this study was to investigate the flexural strength of all-ceramics with varying core thicknesses submitted to aging. Three all-ceramic systems, In-Ceram Alumina (IC), IPS e.max Press (EM) and Katana (K) (n=40 each), were selected. Each system contained two core groups based on core thickness as follows: IC/0.5, IC/0.8, EM/0.5, EM/0.8, K/0.5 and K/0.8 mm in thickness (n=20 each). Ten specimens from each group were subjected to aging, and all specimens were tested for strength in a testing machine either with or without being subjected to aging. The mean strength of K was higher (873.05 MPa) than that of IC (548.28 MPa) and EM (374.32 MPa) regardless of core thickness. Strength values increased with increasing core thickness for IC, EM and K regardless of aging. The results of this study indicate that strength was not significantly affected by aging, whereas different core thicknesses affected the strength of the all-ceramic materials tested (p<0.05).
INTRODUCTION
All-ceramics have been increasingly used in prosthetic dentistry to fabricate a wide variety of restorations 1) . The most important disadvantage of all-ceramic restorations is probably brittleness. This property is responsible for the strength behavior of all-ceramics. The limited capacity to undergo plastic deformation results in failure at the first sign of overloading 2,3) . Few of these materials hold adequate mechanical properties for clinical use as aesthetic all-ceramic single crowns and fixed prostheses when subjected to high stresses 4,5) . The most commonly used systems can be classified according to the laboratory procedure used to fabricate the core (pressable, slip casting, milling or sintering) 6,7) . The lithium disilicate reinforced glass ceramic IPS e.max Press (Ivoclar Vivadent, Schaan, Liechtenstein) is preferred due to the strength of the improved heat-pressed all-ceramic material, its superior biocompatibility and its good aesthetic features. However, the mechanical properties of high-performance alumina- and zirconia-based ceramics make them important as potential materials for all-ceramic restorations in high stress-bearing areas 8) . In-Ceram Alumina (Vita Zahnfabrik, Bad Sackingen, Germany), a glass-infiltrated ceramic core employing the slip-casting technique, has been used as a core material for crowns and short-span bridges since the early 1990s 9) . The ceramic substructure of this system is extremely porous and composed of aluminum oxide, which is then infiltrated with fused glass. The high content of alumina particles, combined with the low sintering shrinkage, enhances the mechanical properties of the material 10) .
The most recent core materials, yttrium oxide partially stabilized zirconia (Y-TZP) ceramics, were introduced as an alternative material for all-ceramic dental restorations; they are manufactured into blanks and milled to the desired dimensions using CAD/CAM (Computer Aided Design/Computer Aided Manufacturing) technology 11,12) . Their transformation toughening characteristic leads to the increased fracture strength of Y-TZP ceramics compared with other all-ceramics 13) .
Ceramic core-veneered fixed prostheses have been used for several applications (anterior and posterior restorations), as they combine the strength of ceramic cores with the esthetics of the veneering porcelains 29) . Although the mechanical characteristics of core materials have continuously been improved, it is uncertain whether a change in the thickness of the core material would necessarily result in different flexural strength values of core/veneering ceramic systems.
Standard test methods for determining the flexural strength of ceramic materials are either uniaxial or biaxial flexural tests, such as the piston-on-ring, piston-on-three-ball, ball-on-ring and ring-on-ring tests 30) . Biaxial flexural strength tests have several advantages over uniaxial tests 31-33) . Some studies have identified a number of factors affecting the mechanical properties of layered all-ceramic materials, such as temperature and moisture 11,26,28,34) . In the oral environment, all-ceramic materials are prone to aging, which can lead all-ceramic materials to change color, to lose bending strength, and to exhibit reduced fracture toughness. The artificial aging process simulates the effects of long-term exposure to environmental conditions through an artificial weathering process that involves light exposure, temperature and humidity 35) . Thus, it is possible that the mechanical characteristics of all-ceramics are altered when subsequently loading them in an aqueous environment.
Although the strength of all-ceramics is well documented, data on the effect of both varying core thicknesses and artificial aging on the biaxial flexural strength of different all-ceramics are limited 4,11,36) . Therefore, the purpose of this study is to investigate in vitro the biaxial flexural strength of double-layer all-ceramic systems with different core thicknesses which have been subjected to an artificial aging process.
MATERIALS AND METHODS
Three types of core-veneered ceramics were fabricated, including the In-Ceram Alumina glass-infiltrated porcelain (IC) with VM7 as the veneer (Vita Zahnfabrik), the IPS e.max Press (EM) pressable ceramic with IPS e.max Ceram as the veneer (Ivoclar-Vivadent), and the CAD/CAM zirconia core Katana (K) with CZR as the veneer (Noritake Dental, Nagoya, Japan). Forty disc-shaped, porcelain/ceramic bilayered specimens for each all-ceramic system with a 10 mm diameter were prepared according to ISO specification 6872 37) . Each all-ceramic group contained two core groups based on the core material thickness as follows: IC/0.5, IC/0.8, EM/0.5, EM/0.8, K/0.5 and K/0.8 mm in thickness (n=20 each). Veneer porcelain 1.0 mm thick was applied to all the ceramic specimens. The materials and the groups are shown in Table 1. For each of the core groups, ten specimens were randomly selected and subjected to artificial aging. All specimens in each group (control and aged) were tested for flexural strength.
Preparation of the specimens
An aluminum mold consisting of holes with different depths was fabricated for the preparation of the In-Ceram Alumina (IC) discs and duplicated with the investment (Vita In-Ceram Spezialgips, Bad Sackingen, Germany) with the help of a vinyl polysiloxane impression material (Express XT, 3M, St. Paul, MN, USA). Discs were prepared according to the manufacturer's instructions and removed from the mold with a diamond bur. Finishing procedures were performed, and the final thicknesses of the core discs were verified as 0.5 and 0.8 mm (±30 μm) with a digital caliper (Alpha-Tools Digital Caliper, CA, USA). The alumina core discs were cleaned in an ultrasonic cleaner with distilled water. To prepare the standard 1.0 mm thick veneer porcelain, each disc was placed in a silicon mold (10-mm diameter, 1.5- and 1.8-mm depth). Veneering porcelain (Vita VM 7, Vita Zahnfabrik) was applied on the disc surface with a brush and fired according to the manufacturer's instructions. The final dimensions for each disc were measured with a digital caliper.
Wax pattern discs for the IPS e.max Press (EM) specimens were fabricated using a stainless steel mold and invested in a phosphate-bonded investment cylinder, according to the manufacturer's instructions. Investment cylinders were heated in a furnace for one hour at 850°C. IPS e.max Press medium opacity ingots (MO1) were selected for fabricating the ceramic discs. The specimens were heat-pressed (Ivoclar EP 600 Combi, Ivoclar Vivadent) at 920°C for 25 min. After the investment had cooled, the specimens were divested with airborne particle abrasion using 50 μm glass beads. The residual investment material on the ceramic discs was cleaned by dipping the discs in invex liquid (Invex Liquid, Ivoclar Vivadent), which contains less than 1% hydrofluoric acid, for 10 min. The specimens were then left in distilled water for 30 min. The thickness of each specimen was controlled with a digital caliper and reduced until the desired ceramic core thickness was achieved (0.5 or 0.8 mm). The veneering process was completed following the IC method: veneering porcelain (IPS e.max Ceram, Ivoclar) was applied to the discs and fired in accordance with the manufacturer's recommendations. Katana (K) zirconium oxide discs with thicknesses of 0.5 or 0.8 mm were fabricated by milling pre-sintered KT13 zirconium blocks (94.4% ZrO2, 5.4% Y2O3) according to the manufacturer's instructions with the Dental Wings CAD/CAM system (DWOS, Montreal, Canada). The zirconium blocks were machined with 1.3 mm diameter diamond burs in the CAM unit (Yenamak D50, Yenadent, Istanbul, Turkey). All machined discs were designed 21% larger than the desired size to compensate for sintering shrinkage. A digital caliper was used to control the thicknesses. After the milling process, the disc-shaped specimens were sintered at 1,400°C for two hours. CZR (Noritake Dental) was applied to the core discs as the veneer porcelain.
A custom-made silicon index was again used to prepare zirconium oxide disc specimens with standard veneer thicknesses.
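The 21% oversizing of the milled blanks implies a fixed linear enlargement factor. A minimal Python sketch of the arithmetic follows; the 10 mm disc dimension is taken from the specimen geometry above, and the linear shrinkage value is derived from the 21% factor rather than quoted from the manufacturer.

```python
# Arithmetic behind the 21% oversizing of the milled zirconia blanks.
# The linear shrinkage fraction is derived from that factor, not quoted
# from the manufacturer's data sheet.
enlargement = 1.21                       # milled size = 1.21 x final size

def milled_dimension(final_mm, factor=enlargement):
    """Dimension to machine so the part sinters down to final_mm."""
    return final_mm * factor

shrinkage = 1 - 1 / enlargement          # linear fraction lost on sintering

print(round(milled_dimension(10.0), 2))  # a 10 mm disc is milled at 12.1 mm
print(round(shrinkage * 100, 1))         # ~17.4% linear shrinkage
```

The relation follows from milled_size × (1 − shrinkage) = final_size, so a 21% enlargement corresponds to roughly 17.4% linear sintering shrinkage.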
Aging process
Ten specimens for each core group were subjected to artificial aging procedure consisting of exposure to ultraviolet light and water spray in the weathering machine (Xenotest 150 S+, Atlas Electronic Devices, IL, USA) for 200 h. The back panel temperature varied between 70°C (light) and 38°C (dark), and humidity varied between 50% (light) and 95% (dark). The testing cycle consisted of 40 min of light only, 20 min of light with front water spray, 60 min of light only, and 60 min of dark with back water spray. The dry bulb temperature was 38°C (dark) and 47°C (light), and the water temperature was 50°C.
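For orientation, the cycle arithmetic implied by this protocol can be sketched as follows; the 300 h ≈ 1 year equivalence used at the end is the machine manufacturer's claim quoted in the Discussion, not a measured value.

```python
# Cycle arithmetic implied by the aging protocol described above.
cycle_min = 40 + 20 + 60 + 60        # light / light+spray / light / dark+spray
cycles = 200 * 60 / cycle_min        # full cycles completed in 200 h

print(cycle_min)                     # 180 min per full cycle
print(round(cycles, 1))              # ~66.7 cycles in the 200 h exposure

# Manufacturer's claim (quoted in the Discussion): 300 h ~ 1 year of service,
# so 200 h corresponds to roughly 8 months of simulated clinical service.
print(round(200 / 300 * 12))
```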
Biaxial flexural strength test
The flexural strength of the all-ceramic disc specimens was determined with the piston-on-three-ball test according to ISO 6872 37) , performed in a universal testing machine (Instron, Norwood, MA, USA) (Fig. 1). Disc specimens were supported on three stainless steel spheres (3.2 mm diameter) equally spaced on a circle with a diameter of 10 mm. A 0.5 mm thick plastic sheet was placed between the disc specimen and the 1.2 mm diameter center-loaded piston. The loading surface was the veneer porcelain, and the core surface of the specimens faced the bottom. The specimens were loaded in the testing machine with a 5 kN load cell at a crosshead speed of 0.5 mm/min until failure.
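The piston-on-three-ball strength is conventionally computed from the fracture load with the ISO 6872 / ASTM F394 closed-form expression. The sketch below applies that formula for a monolithic disc; the geometry matches this study (10 mm support circle, 1.2 mm piston, 10 mm discs), but the 1000 N load, 1.5 mm thickness, and Poisson's ratio of 0.25 are illustrative assumptions, and bilayered specimens strictly require a composite correction not shown here.

```python
import math

def biaxial_flexural_strength(P, d, r_support, r_piston, r_specimen, nu=0.25):
    """Piston-on-three-ball strength (MPa) per the ISO 6872 / ASTM F394
    closed-form expression for a monolithic disc.

    P          -- fracture load (N)
    d          -- disc thickness at the fracture origin (mm)
    r_support  -- radius of the support-ball circle (mm)
    r_piston   -- radius of the loaded (piston tip) area (mm)
    r_specimen -- disc radius (mm)
    nu         -- Poisson's ratio (0.25 is a common assumption for ceramics)
    """
    X = (1 + nu) * math.log((r_piston / r_specimen) ** 2) \
        + (1 - nu) / 2 * (r_piston / r_specimen) ** 2
    Y = (1 + nu) * (1 + math.log((r_support / r_specimen) ** 2)) \
        + (1 - nu) * (r_support / r_specimen) ** 2
    return -0.2387 * P * (X - Y) / d ** 2

# Geometry from this study: 10 mm support circle (radius 5 mm), 1.2 mm
# diameter piston (radius 0.6 mm), 10 mm diameter discs (radius 5 mm).
# The 1000 N load and 1.5 mm thickness are illustrative values only.
print(round(biaxial_flexural_strength(1000, 1.5, 5.0, 0.6, 5.0), 1))
```

Note that strength scales linearly with load and inversely with thickness squared, which is why the thickness at the fracture origin must be measured precisely.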
Statistical analysis
All statistical analyses and calculations were completed using the SPSS 15.0 (SPSS, Chicago, IL, USA) statistical program. Differences in the biaxial flexural strength of all-ceramic discs based on brand, core thickness or aging were analyzed. The Kolmogorov-Smirnov test was used to evaluate the distribution of the data. The Kruskal-Wallis multiple comparison test was used to analyze the effects of aging on the biaxial flexural strength of the all-ceramic systems. The Wilcoxon signed rank test was used to evaluate the effects of core thickness within the groups. To compare flexural strength between two groups, the Mann-Whitney U test with Bonferroni correction was performed. Statistical significance was determined at a p-value less than 0.05. Mean biaxial flexural strength values are presented in Table 3.
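The nonparametric workflow above can be sketched with SciPy equivalents of the SPSS procedures. The data below are synthetic stand-ins (random draws around the reported group means), not the study's measurements.

```python
# A minimal sketch of the nonparametric test workflow, using synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical strength samples (MPa) for three ceramic groups, n=10 each,
# centered on the means reported in the abstract.
ic = rng.normal(548, 40, 10)
em = rng.normal(374, 30, 10)
k = rng.normal(873, 60, 10)

# Normality check (Kolmogorov-Smirnov against a standard normal).
print(stats.kstest((ic - ic.mean()) / ic.std(ddof=1), "norm").pvalue)

# Kruskal-Wallis comparison across the three groups.
print(stats.kruskal(ic, em, k).pvalue)

# Pairwise Mann-Whitney U with Bonferroni correction (3 comparisons).
alpha = 0.05 / 3
for a, b, name in [(ic, em, "IC-EM"), (ic, k, "IC-K"), (em, k, "EM-K")]:
    print(name, stats.mannwhitneyu(a, b).pvalue < alpha)

# Wilcoxon signed-rank for a paired within-group comparison (illustrative
# pairing of 0.5 mm vs. 0.8 mm specimens).
print(stats.wilcoxon(ic, ic + rng.normal(60, 10, 10)).pvalue)
```

The Bonferroni-corrected threshold (0.05/3) mirrors dividing the familywise error rate across the three pairwise brand comparisons.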
RESULTS
Visual inspection of the specimens after the flexural test showed that the cracks were consistently asymmetric and the main fracture type was splitting of the specimen. Observation of the fracture surfaces showed no signs of major interfacial delamination between the core and veneer material. Some specimens showed minor delamination, about 1 to 2 mm from the crack margin, while the crack origin was located at the center of the specimen (Fig. 2). In both systems, delamination and splitting occurred at the same fracture load. The 0.8 mm core groups fractured into four or more pieces, while the other groups fractured into three or fewer. The K group specimens, in particular, fractured abruptly and catastrophically under the applied stress. For the EM and IC groups, there were no noticeable differences in the fracture types of the specimens.
DISCUSSION
Strength is an important mechanical characteristic that can assist in predicting the performance of brittle materials 31) . In the current study, the biaxial flexural strength of all-ceramic specimens with varying core thicknesses and, conversely, the same veneer ceramic thickness was evaluated in vitro. Specimens were fabricated according to the manufacturers' directions, and testing conditions were chosen carefully to imitate clinical conditions as much as possible. An understanding of actual clinical strength behaviors of all-ceramics is therefore absolutely necessary before results of in vitro strength testing can be considered to have clinical validity 9) . Considering the current flexural strength results, the fracture loads of clinical crowns for various all-ceramic materials, and the maximum occlusal load of 600 N in the clinical situation, it appears possible to guide appropriate choices when faced with various clinical conditions, even though the strength of a crown depends not only on thickness, but also on shape, size, test method, load direction and adhesive cements.
According to the results of the current study, the heat-pressed technique for lithium disilicate reinforced glass ceramic, the slip-casting technique and the CAD/CAM technique showed significantly different mean biaxial flexural strength values regardless of the core thickness and aging (p<0.05). For this reason, the fabrication technique may have an effect on the mechanical properties of the tested materials. The current results are in agreement with the findings of several studies, which reported that CAD/CAM fabricated zirconia has better mechanical characteristics, due to its microstructural nature, compared to slip-cast ceramics and lithium disilicate reinforced glass ceramics 4,31,38) . Other factors concerning all-ceramics, such as clinical use, prosthetic restoration design (laminate, single crown or posterior bridge) and laboratory processing techniques, should also be considered and can be further investigated in future studies.
The effect of specimen thickness is one of the most important factors in the determination of biaxial flexural strength 7) . Recent studies have reported a correlation between the thickness of the core/veneer ceramics and the flexural strength 7,22,24,39) . On the other hand, Thompson et al. 40) , in a fractographic study of clinically failed crowns, concluded that fracture initiation sites of dental ceramics are controlled primarily by the location and size of the critical flaw and not by specimen thickness. Therefore, it has been emphasized that beyond a specific core thickness for each ceramic material, an increase in thickness has little effect on the overall flexural strength of the material 4) . Lithium disilicate cores should be fabricated at a minimum thickness of 0.8 mm, and glass-infiltrated and yttrium-stabilized zirconia cores at 0.5 mm, according to the manufacturers' recommendations 41) . In the current study, specimen core thicknesses were 0.5 or 0.8 mm, and all the veneer thicknesses were 1.0 mm. Based on our findings, these variations in thickness significantly influenced the biaxial flexural strength values of the selected materials (p<0.05). Strength values increased with increasing core thickness for the IC, EM and K groups. The results also implied that the EM/0.8, IC/0.5, and K/0.5 core groups showed similar strength values both with and without aging (Table 3). In the present study, the same veneer thickness (1.0 mm) was fabricated for all specimens to minimize the influence of the veneering porcelain on the measured strength. Although veneering parameters were thereby eliminated from the study, veneering porcelain was applied to the disc specimens to imitate clinical crown conditions.
Based on a previous color study, Dikicier et al. 42) reported that all-ceramic color change was less influenced by increasing core thickness. The present study indicated that strength values increased with increasing core thickness. Consequently, comparing the color and strength properties of all-ceramics, core thickness is an important factor in determining where all-ceramic systems are used clinically. The results may guide prosthetic restoration design in situations where aesthetics are important or where occlusal forces are excessive.
Dental restorative materials must withstand widely varied conditions in the mouth, including temperature changes, continuous exposure to moisture, and mechanical use of the restoration. Light exposure and humidity changes can be simulated by the artificial aging process, which has been widely used for the testing of dental resin and ceramic materials 11,43,44) . The manufacturer of the weathering machine used in this study claimed that 300 h of artificial aging is equivalent to 1 year of clinical service. Beuer et al. 36) studied the strength of CAD/CAM fabricated all-ceramics with a similar aging procedure. They reported that no difference was found in the strength values with or without aging. On the other hand, Flinn et al. 45) revealed that aging for 200 h can cause significant transformation from the tetragonal to the monoclinic crystal structure, which results in a statistically significant decrease especially in the flexural strength of Y-TZP. In the current study, the all-ceramic systems were artificially aged for 200 h to evaluate strength changes. The aging process resulted in a slight decrease in the biaxial flexural strength of the IC, EM and K groups (Table 3). Nevertheless, the strength values of all specimens were not significantly influenced by aging (p>0.05). This result is consistent with a similar in-vitro study investigating the fracture strength of Y-TZP ceramics with aging 27) . Although the aging time likely to be experienced in vivo has not been determined, a provisional estimate of oral conditions was suggested. The 200 h used in the present study might simulate a relatively short period; therefore, the longer-term aging behavior of all-ceramics should be further studied.
Although the method of specimen fabrication is important, the strength test method employed will also affect the results 3) . The present study used the biaxial flexural test with the piston-on-three-ball method, which is a reliable technique for studying brittle materials and is standardized by ISO 7,31,32,37,46) .
In this in-vitro study disc-shaped specimens of two different core thicknesses were fabricated out of all-ceramic materials. Biaxial flexural strength was compared using an artificial aging process to simulate oral environmental conditions. It is important to emphasize that the aging process used in this study is only a first step toward predicting clinical performance. Further in vivo studies should be performed on the clinical evaluation of core thickness and flexural strength for better characterization of all-ceramics.
CONCLUSIONS
Within the limitations of the present study, the following conclusions were drawn: 1. According to the biaxial flexural test, the K group was significantly stronger than the other all-ceramic groups, and the EM group had the lowest strength values regardless of core thickness and aging. 2. Aging did not have a significant effect on the biaxial flexural strength of the selected all-ceramics. 3. The biaxial flexural strength of all-ceramics was affected by core thickness; the double-layered specimens showed a decrease in strength that could be attributed to the reduction of core thickness.
"year": 2017,
"sha1": "746e5c61b187e00f699bab46b9a88ba4758fa870",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/dmj/36/3/36_2016-157/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "45495b29c0f6a87eca38e2b9152befe0c5c992d3",
"s2fieldsofstudy": [
"Materials Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
An economic valuation of federal and private grazing land ecosystem services supported by beef cattle ranching in the United States
Abstract Beef cattle ranching and farming is a major agricultural industry in the United States that manages an estimated 147 million ha of private land and uses approximately 92% of forage authorized for grazing on federal rangelands. Rangelands, as working landscapes, sustain beef cattle ranching while providing habitat for wildlife, recreation, and open space amenities, as well as spiritual and cultural values that define a way of life. Historically, discussions regarding the economics of beef cattle ranching have focused primarily on the value of beef production but have more recently expanded to consider related ecosystem services. A systematic search of peer-reviewed literature published between 1998 and 2018 found 154 articles that considered ecosystem services from rangelands/grasslands. Of these, only two articles (1%) provided an in-depth economic valuation (monetary measure) of ecosystem services in the United States. To fill this knowledge gap, we primarily used publicly available data to conduct an economic valuation of major ecosystem services associated with beef cattle production in the United States at both the national and state levels. We find that over 186 million ha were actively grazed by beef cattle ranches and farms in the United States in 2017. We estimate the economic value of this land use to be $17.5 billion for wildlife recreation, $3.8 billion for forage production, and $3.2 billion for other ecosystem services related to the conservation of biodiversity—a combined total of $24.5 billion. Ecosystem services from federal rangelands in 16 western states accounted for 35% of the total value. Ecosystem services per beef cow and per kilogram of retail beef were estimated to be $1,043.35 and $2.74, respectively. 
More studies like these are needed to inform decision-makers at the industry, land management, and federal levels to ensure that the conservation, improvement, and restoration of these ecosystem services are considered in future management and research efforts.
INTRODUCTION
Beef cattle ranching and farming is a major agricultural industry in the United States. The 2017 Census of Agriculture reported that 641,500 beef cattle ranches and farms generated $34.7 billion of annual gross revenue and managed 146.7 million ha of private land (USDA, 2019). This industry utilized 68% (111.0 million ha) of land identified as private permanent rangeland/pastureland in the 2017 Census of Agriculture (USDA, 2019) and an estimated 92% of the forage authorized for grazing on federal rangelands in the United States (USFS, 2017;BLM, 2019). The rangelands that support beef cattle ranching are also thought to offer commodity, amenity, and spiritual values (Maczko et al., 2016), support a way of life (Gentner and Tanaka, 2002), and provide habitat for wildlife while contributing to recreation and open space amenities (Maczko and Hidinger, 2008).
Although discussions regarding the economics of beef cattle ranching have primarily focused on the value of beef production, this is one of a myriad of benefits that humans derive from rangelands. The Millennium Ecosystem Assessment identified four categories of ecosystem services (benefits from ecosystems to humans): 1) provisioning, for instance production of food and water; 2) regulating, which broadly describes the control of climate and disease; 3) supporting, such as nutrient cycles and crop pollination; and 4) cultural, including spiritual, cultural, and recreational benefits (MEA, 2005). Maczko and Hidinger (2008) used a traditional market, non-market tangible, and intangible classification system of ecosystem goods and services specifically for rangelands. Research has also indicated that ranchers care about more than provisioning and also manage for a number of other cultural, regulating, and supporting services (Gentner and Tanaka, 2002;Lind, 2015;Collins, 2019;York et al., 2019). Therefore, following an approach similar to Rashford et al. (2013) and Taylor et al. (2019), we use the term "beef cattle ranching-based ecosystems services" to refer to values beyond the category of provisioning services.
Studies quantifying the economic value of ecosystem services from rangelands are few. A search of peer-reviewed literature published between 1998 and 2018 found 154 articles that considered ecosystem services from grasslands and rangelands. Of these, only two articles provided an in-depth economic valuation (monetary measure) of these ecosystem services in the United States. Rashford et al. (2013) considered ecosystem service values associated with grazing lands for 17 U.S. states in the West, and USDA-NRCS (2010), another study, provided a benefit-cost analysis of the Grassland Reserve Program. More recently, Taylor et al. (2019) provided a valuation of beef cattle ranching-based ecosystems services as associated with private grazing lands but did not include federal grazing lands.
Despite the limited number of valuation studies, some of the literature suggests significant value associated with the flow of ecosystem services from pasture and rangelands to society through beef cattle ranching. Pogue et al. (2018) reviewed the literature and concluded that beef cattle ranching in Canada's prairie provinces had a positive influence on biodiversity, habitat maintenance, cultural heritage, and recreation/tourism. Havstad et al. (2007) identified, but did not quantify, existing and diminished or diminishing ecosystem goods and services from rangelands in the United States. Costanza et al. (2014) considered grassland ecosystem service values at a global scale. Fox et al. (2009) and Maczko et al. (2011) developed a qualitative framework to assess rangeland ecosystem goods and services for the purpose of identifying and weighing potential alternative income streams for ranchers.
The existence of federal (CRP, 2020; NRCS, 2020; SGI, 2020) and nonprofit (TNC, 2013) programs that support working rangelands is also evidence of the societal value in beef cattle ranching. These conservation programs help address existing concerns regarding past and future rangeland conversion to other land uses and declining rangeland health. Expanding crop production (Hongli et al., 2013; Haggerty et al., 2018; WWF, 2018), population growth (Brunson and Huntsinger, 2008; Brunson et al., 2016; Farley et al., 2017; Reeves et al., 2018), and cheatgrass (Bromus tectorum L.) invasion (Brooks et al., 2004; Chambers et al., 2014; Pellant, 2018) are just a few examples of the land use and management challenges involved in conserving the flow of ecosystem services from rangelands. The losses or diminishment of these ecosystem services may be irreversible or difficult to recover (Salles, 2011; Bestelmeyer et al., 2015; Pellant et al., 2018; WWF, 2018).
The purpose of this study is to address informational gaps that exist regarding the sustainability of beef cattle production through conducting a formal and extensive valuation of major U.S. beef cattle ranching-based ecosystem services at the state and national level. Although the value of ecosystem services is difficult to quantify (Torell et al., 2014c; Brown and MacLeod, 2018), such valuation can provide vital information in land management decision-analysis and in assessing the cost to society from changes in land use. This valuation is also useful for identifying alternative income sources for ranchers and to provide information for the development of ecosystem services markets (Maczko et al., 2011). Building on the methods used in Taylor et al. (2019) and Rashford et al. (2013), this study estimates the economic value of ecosystem services from both private and federal lands for three major ecosystem services associated with beef cattle production: 1) wildlife-related recreation, 2) forage production, and 3) other ecosystem services.
Materials and Methods
As in Rashford et al. (2013) and Taylor et al. (2019), this study used publicly available data to estimate the economic value of major benefits from beef cattle ranching-related ecosystem services in the lower 48 U.S. states (Alaska and Hawaii were not included in this analysis due to data limitations). Following Costanza et al. (2014), it is assumed ecosystem services are constant across space, an approach thought to be appropriate for assessing land use change scenarios over larger areas. Dollar amounts are indexed to the year of the most recent Census of Agriculture (2017) using the Consumer Price Index.
The per hectare value of three categories of ecosystem services was estimated: 1) wildlife-related recreation, 2) forage production, and 3) other ecosystem services. The aggregate of the three categories is also presented on a per hectare basis. Private land and federal land values were found separately. Total ecosystem services value on private rangelands was found by multiplying the aggregate value per hectare by the number of hectares of private rangeland and pasture under beef cattle production in each area as reported by the 2017 Census of Agriculture under the North American Industry Classification System (NAICS) code 112111 for "Beef Cattle Ranching and Farming" (USDA, 2019). Similarly, the aggregate per hectare value for federal land was multiplied by the estimated number of hectares grazed by cattle. Details about this estimation process can be found in the Materials and Methods section under "Forage Production." These total ecosystem service values estimated separately for private and federal land were then summed to obtain the combined ecosystem services value associated with beef cattle ranching from both private and federal lands in each geographic area. To calculate ecosystem services value per beef cow, this summed federal and private value was divided by the number of beef cows reported in the 2017 Census of Agriculture (USDA, 2019) under NAICS code 112111. To calculate the ecosystem services value per kilogram of beef, the value per beef cow was then divided by the number of kilograms of retail beef per beef cow from the Livestock Marketing Information Center (LMIC, 2018). Detailed descriptions of the per hectare value estimation methods for each of the three types of ecosystem service categories follow.
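The aggregation just described reduces to a short chain of arithmetic. The sketch below (in Python, with made-up placeholder figures rather than Census of Agriculture data, and with hypothetical function and parameter names) illustrates the assumed order of operations:

```python
# Hedged sketch of the aggregation steps described above.
# All numeric inputs are made-up placeholders, not Census of Agriculture data.

def ecosystem_value_summary(per_ha_private, ha_private,
                            per_ha_federal, ha_federal,
                            beef_cows, kg_beef_per_cow):
    """Combine private and federal per-hectare values into a total,
    then express that total per beef cow and per kilogram of retail beef."""
    total_private = per_ha_private * ha_private      # private rangeland/pasture
    total_federal = per_ha_federal * ha_federal      # federal grazed area
    total = total_private + total_federal            # summed, as in the text
    per_cow = total / beef_cows                      # divided by beef cow count
    per_kg = per_cow / kg_beef_per_cow               # divided by retail kg/cow
    return total, per_cow, per_kg

# Illustrative call with placeholder numbers:
total, per_cow, per_kg = ecosystem_value_summary(
    per_ha_private=30.0, ha_private=1_000_000,
    per_ha_federal=10.0, ha_federal=500_000,
    beef_cows=40_000, kg_beef_per_cow=250.0)
```

As in the text, the private and federal components are formed additively before being normalised per cow and per kilogram.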
Recreation
Wildlife recreation values were found by combining U.S. Fish and Wildlife Service (USFWS) estimates of the number of recreation days (hunting, freshwater fishing, excluding Great Lakes fishing, and wildlife watching) per year (USFWS, 2014), with USFWS estimates of net economic values for wildlife-related recreation per day (USFWS, 2016). Per hectare values were calculated by dividing the total wildlife recreation value in each state by the number of hectares in non-metro and nonurban land (Headwater Economics, 2018).
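A minimal sketch of this calculation, assuming illustrative placeholder day counts and per-day values rather than actual USFWS figures (the function name and activity labels are hypothetical):

```python
# Sketch of the wildlife recreation valuation described above.
# Day counts and per-day values are placeholders, not USFWS estimates.

def recreation_value_per_hectare(days_by_activity, value_per_day,
                                 nonmetro_hectares):
    """Sum (recreation days x net economic value per day) over activities,
    then spread the total over non-metro/non-urban land area."""
    total_value = sum(days_by_activity[a] * value_per_day[a]
                      for a in days_by_activity)
    return total_value / nonmetro_hectares

per_ha = recreation_value_per_hectare(
    days_by_activity={"hunting": 100_000, "fishing": 200_000,
                      "wildlife_watching": 300_000},
    value_per_day={"hunting": 60.0, "fishing": 50.0,
                   "wildlife_watching": 40.0},
    nonmetro_hectares=2_000_000)
```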
Forage Production
Rangeland/pasture forage is an input to livestock production, the value of which depends upon its contribution to the final market value of livestock (Rashford et al., 2013). In a perfectly competitive market, grazers would be willing to pay the amount that this forage contributes to the value of the final good (kilograms of beef in this case). This study therefore approximates forage production values on private lands by using United States Department of Agriculture National Agricultural Statistic Service (USDA NASS) pasture rental rate data (NASS, 2017).
Estimating the value of forage per hectare on federal land required information about the spatial location (state level) and quantity of federal forage as well as the dollar value per Animal Unit Month (AUM). The number of grazing hectares utilized by beef cattle on federal land was found by employing several data sources. First, grazing allotment boundaries were determined from publicly available geographical information system (GIS) data (BLM, 2020; USFS, 2020). Second, vegetated area in these active grazing allotments was found from Landscape Fire and Resource Management Planning Tools (LANDFIRE at https://www.landfire.gov/) Existing Vegetation Type (EVT) (Comer et al., 2003;Rollins, 2009). Land cover classes such as water, urban, agricultural, pasture, and barren were excluded from this data set to represent only natural vegetation (excluding pasture classes if any occurred in the allotment). Third, only federal lands were included in the land cover classes as identified using the Protected Areas Database of the United States (PADUS) (CBI, 2021). Only lands managed by the BLM and USFS were retained for analysis from this spatially explicit database. The grazing allotments and PADUS data that are natively offered in vector format were converted to raster data format at 30-m spatial resolution to match the extent and pixel size of the EVT data. A spatial subset was created by spatially intersecting this data with active grazing allotments data. Finally, the state area found through the GIS analysis was multiplied by the percentage of AUMs grazed by cattle in each state (Table 1) to provide an estimate of federal area specifically grazed by cattle rather than other forms of livestock. Due to data limitations, only BLM and FS owned land was considered for this study. Grazing also occurs on other federally owned lands such as USFWS land, but the majority of cattle grazing on federal land in the United States are on BLM and FS land.
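The final allocation step above (multiplying the GIS-derived allotment area by the share of AUMs attributable to cattle) can be sketched as follows; the hectare figure and AUM share are invented placeholders, not values from Table 1:

```python
# Sketch of the final allocation step: the GIS-derived vegetated allotment
# area is scaled by each state's share of AUMs grazed by cattle.
# Both inputs below are invented placeholders.

def cattle_grazed_area(vegetated_allotment_ha, cattle_aum_share):
    """Hectares of federal allotment area attributed to cattle grazing."""
    if not 0.0 <= cattle_aum_share <= 1.0:
        raise ValueError("cattle_aum_share must be a fraction in [0, 1]")
    return vegetated_allotment_ha * cattle_aum_share

ha_cattle = cattle_grazed_area(vegetated_allotment_ha=3_000_000,
                               cattle_aum_share=0.85)
```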
Valuing forage from federal grazing lands is not a straightforward process, though many researchers have attempted to find this value in the past outside the concept of ecosystem services (Bartlett et al., 1993, 2002; Van Tassell et al., 1997; Torell et al., 2003; Rimbey and Torell, 2011; Vincent, 2019). Federal forage value can be thought of as a non-market good because the federal grazing fee is set by the federal government rather than resulting from a competitive market (Quigley and Tanaka, 1988). Ranchers with public land permits or leases currently pay an annual fee per AUM to graze, which is determined by the Public Rangelands Improvement Act (PRIA) fee formula. The PRIA fee formula includes the Beef Cattle Price Index and the Prices Paid Index (Vincent, 2019). This fee has been argued to be purposefully low in order to account for ranchers' "ability to pay" rather than to capture the fair market value of forage or to recoup agencies' expenditures (GAO, 2005).
Because of the complex history and nature of assigning values to forage on federal lands, several methods were considered by the authors. The methods reviewed included 1) modeling the value of an AUM lost using linear programming (LP) models of representative cow-calf public land ranches in Idaho, Oregon, Nevada, and Wyoming developed from 2017 enterprise budgets (Hilken et al., 2018), 2) the recommendations from the study by Bartlett et al. (1993), and 3) 2017 USDA NASS survey indications of monthly lease rates for private, nonirrigated grazing land for the 16 states governed by the grazing fee (NASS, 2019). LP modeling resulted in an estimated average value of $24.00/AUM compared with $22.60/AUM as estimated from the NASS private lease rates. The range suggested by the study by Bartlett et al. (1993) was updated to 2017 using the Forage Value Index (per recommendation by that study), which gave an estimated range of $16.27/AUM to $27.12/AUM. While the LP and the NASS private lease rate estimates were within the recommended range from Bartlett et al. (1993), this study used the NASS private lease rate for each U.S. state (NASS, 2019), as it was the most readily available and geographically specific option. The value ($/AUM) was then multiplied by the number of AUMs billed (in the case of BLM) or authorized (in the case of USFS) as given in publicly available annual reports (BLM, 2019; USFS, 2017) to get a total dollar value per state. That total value was then divided by the area grazed by cattle in each U.S. state to arrive at a U.S. state-specific per hectare value for forage on federal lands.
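The per hectare forage valuation on federal land described above amounts to a rate-times-quantity calculation divided by area. A hedged sketch with placeholder inputs (the lease rate, AUM count, and grazed area below are not the study's state-level values):

```python
# Sketch of the federal forage valuation arithmetic described above.
# The lease rate, AUM count, and grazed area are placeholder numbers.

def federal_forage_value_per_ha(lease_rate_per_aum, aums_billed,
                                grazed_hectares):
    """State-level forage value per hectare on federal land:
    (lease rate in $/AUM x AUMs billed or authorized) / area grazed."""
    total_value = lease_rate_per_aum * aums_billed
    return total_value / grazed_hectares

per_ha = federal_forage_value_per_ha(lease_rate_per_aum=22.60,
                                     aums_billed=1_000_000,
                                     grazed_hectares=4_000_000)
```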
Other Ecosystem Services
The value of other ecosystem services was estimated using CRP Grasslands annual rental payments as a proxy for nonspecified services (FSA, 2018). In this voluntary, federal government program, operators are allowed to graze while receiving financial payments and optional cost-share assistance to maintain animal and plant diversity. In practice, only private lands are eligible for these rental payments, but in this study this value is considered to be applicable to both federal and private grasslands as the best available, geographically specific, monetary estimate of ecosystem services other than recreation and forage, for example, biodiversity.
Beef Cattle Ranching- and Farming-Based Ecosystem Services: United States
The estimated economic value of ecosystem services from beef cattle ranches and farms was $17.5 billion for wildlife recreation, $3.8 billion for forage production, and $3.2 billion for other ecosystem services. The combined total of these estimates was $24.5 billion, of which 35% originated from federal rangelands and 65% from private rangelands and pasture. This value also represents $1,043.35 of ecosystem services per beef cow and $2.74 of ecosystem services per kilogram of retail beef. Additional details about these results, including ecosystem service values for each U.S. state can be found in Maher et al. (2020). Figure 1 shows the per kilogram estimated ecosystem services value for the United States in 2017 for each ecosystem service category considered. In line with Rashford et al. (2013) and Taylor et al. (2019), this study finds that policy or management analysis that overlooks ecosystem service flows from wildlife recreation and other ecosystem services will considerably underestimate the benefits humans derive from ecosystem services supported by cattle ranches and farms. Rashford et al. (2013) advocated for the inclusion of ecosystem services values from cattle production above that of just forage production in benefit-cost analysis and public policy considerations.
These total values are 65% higher than recent estimates by Taylor et al. (2019). There are two reasons for this difference. The first reason is that Taylor et al. (2019) did not consider ecosystem services from federal grazing lands, which highlights the importance of including them in such valuations. The second reason is that private rangelands and pastures as reported in the 2012 Census of Agriculture were 6% lower than those reported in the 2017 Census of Agriculture. The disparity in production levels may be attributable to economic factors and widespread drought in 2012, indicating that these values as calculated can fluctuate depending on such conditions.
Figure 2 illustrates the dollar value of ecosystem services per hectare for each U.S. state as an area-weighted average of federal and private rangeland. Eastern states tend to have the highest ecosystem service values per hectare because they have relatively large population bases, relatively small land areas, and no federal land. As a result, the number of recreation days per unit land area tended to be relatively higher, which increased the ecosystem service values per hectare. Recreation days per hectare explained 90% of the variation in the calculated value of recreation per hectare. Nationally, the range of the number of recreation days per unit area was wide: Connecticut had 11.8 recreation days per hectare, whereas Nevada had 0.2 recreation days per hectare. The other ecosystem service category of value and forage production values were also higher in eastern states.
The Distribution of Ecosystem Services in the United States
Although there is a high value per hectare in some of the eastern states, this study found that total ecosystem services values (value per hectare multiplied by total hectares in that state) provided the best representation of the geographic distribution of these ecosystem services across the United States. Figure 3 depicts the distribution of beef cattle ranching rangeland/pasture hectares across the country and for each state as a percent of the total number in the continental United States; for example, 15.3% of U.S. hectares used for cattle ranching are in Texas. The majority of cattle ranch and farm rangeland/pasture hectares are located in the western half of the United States. The 17 U.S. states from the Great Plains Region and westward contain nearly 95% of the rangeland/pastures used for beef cattle grazing in the United States. Figure 4 provides the total ecosystem service values for each state. Approximately 52% of the states considered in this study had an ecosystem services value from beef cattle ranches and farms of over $100 million. States with estimated values of $500 million or more were primarily from the Great Plains westward.
The top 10 U.S. states in terms of total value from ecosystem services are shown in Figure 5. Seven of the top 10 states are part of the Great Plains region. Three states (California, Oregon, and Utah) in the top 10 had nearly 50% or more of their value in ecosystem services coming from federal rangelands. Total ecosystem service values were generally higher in the western states than in the east, despite lower per hectare values. The variation in total value is driven mostly by variation in hectares. The total number of rangeland/pasture hectares in the state was not the only factor, however; per hectare values also matter. For example, Wyoming had a higher percentage (8.4%) of total calculated beef cattle ranching rangeland/pasture hectares than California (4.2%; Figure 3), yet the estimated per hectare values of ecosystem services on private land and federal land in Wyoming were both less than half that found for California. As a result, Wyoming had a lower total value of ecosystem services as compared to California ($782 million vs. $873 million) even though the state had more rangeland/pasture hectares. Texas stands apart from the other states in the United States, with more than 28.3 million (15.3%) rangeland/pasture hectares used for beef cattle ranching and all of the estimated ecosystem service value in the state coming from private land. This state produced over one fifth of the total U.S. ecosystem service values (as summed over each individual state) from beef cattle ranches and farms. These ranches and farms in Texas produced nearly 4 million head of beef cows (17% of industry production) in 2017 (USDA, 2019), making it the largest cattle ranching state in the nation, by far.
Opportunities and Challenges
There are several opportunities and challenges associated with the valuation and incorporation of cattle ranching-based ecosystem services into existing decision-frameworks. One challenge is the temporal fluctuation in values. The values reported in this study correspond to a point in time, but the total and per cow ecosystem services values vary from year to year as rangeland/pasture hectares utilized and beef cow numbers fluctuate. The ecosystem services value per beef cow declined for most states when comparing Census of Agriculture data from 2017 with 2012. Changes ranged from a decrease of 30% to an increase of 29%. Thirty-four states saw a decline in value of 5% or more (Figure 6). The decline in ecosystem services value per beef cow is the result of more beef cows in production in 2017 when compared with 2012. The temporal scope is therefore an important consideration that can be limited by available data. The values from federal land can also fluctuate over time, though these differences are not examined here due to data limitations; the GIS data used in this analysis to determine federal land area are dated to 2017, and the authors found no way to draw comparable information for 2012 specifically.
It should also be noted that this study is not meant to be a net accounting of the value to society from cattle production. There are additional economic values associated with cattle ranching that have not been incorporated here. For example, the land, buildings, machinery, and equipment associated with this industry were estimated to be worth $655.4 billion in 2017 (up 25% from the 2012 Census), and the industry employed over 2.1 million workers (up 10% from the 2012 Census), including operators, hired labor, and family labor. Also, there are ecosystem service costs associated with beef cattle ranching (Pogue et al., 2018; Rotz et al., 2019). A net accounting requires further analysis. At the management level, evaluating trade-offs and synergies between different ecosystem services (e.g., the relationship between changes in forage provisioning services and erosion control) across space and time is an important next step, possibly requiring experiment-based mechanistic studies that have been argued to be lacking in number (Zhao et al., 2020).
This study provides lower bound estimates of the value of all the ecosystem services provided by land used in beef cattle production for two reasons. First, these estimates may not capture important ecosystem services that are more difficult to quantify and value, although some of these may be captured by using the CRP Grasslands payments as an approximation of other ecosystem services. Some of the ecosystem services that may be omitted or undervalued here include the supply of water, being part of alternative energy production such as wind or solar (Brunson et al., 2016), sustaining biodiversity (Havstad et al., 2007), sequestering carbon (Havstad et al., 2007; Rashford et al., 2013), and providing cultural benefits, including protection of a way of life (Gentner and Tanaka, 2002; Lind, 2015; Collins, 2019). Second, due to data limitations, beef production from industry classifications other than NAICS 112111 was not included. In 2017, other types of agricultural operations produced 32% of agricultural beef cattle in the United States but had most of their operation in other types of livestock or production activities. For example, the sheep and goat farming industry classification reported 4.6 million hectares in rangeland/pasture in 2017. Although this industry produced 98,000 beef cows, it also showed a total inventory of 3.4 million sheep and lambs and 1.6 million goats. Assigning the hectares from rangelands/pastures to these different livestock types is not possible from the data in the Census alone and would require additional assumptions.
The estimated dollar values per hectare provided by this study can be applied in impact analysis or for planning purposes with caution. Ecosystem services values per unit area for each U.S. state can be found in Maher et al. (2020). Important considerations for such applications include 1) possible finer scale variation in value at the project-level than in our state-level estimates, 2) understanding impact functions that may be nonlinear and discontinuous, and 3) the potential for synergistic relationships between private and federal ecosystem services changes. The latter two may be easily overlooked because they are unique to ranching. Finer scale variation in per area values refers to the idea that ecosystem services vary across space. The actual variability in the values estimated may be greater when considering finer spatial resolutions than what is represented by the state-level averages reported here.
Nonlinear and potentially discontinuous impact functions mean that each additional hectare of rangeland/pasture transferred out of ranching and into other land uses does not have a constant effect. The impacts per unit area on operators and/or ecosystem services may increase at an increasing rate and at some points may experience large discontinuous jumps in impact. For example, ranchers may go out of business even when only a part of their forage base is affected (Maher et al., 2013; Torell et al., 2014a; Runge et al., 2019). This may make an operator more likely to convert private rangelands or pastures to cropping, or sell them, possibly for development. Land use change in one area could also affect the set of ecosystem services from nearby lands, for example, through declines in the availability of goods and services needed for ranching in an area, disruption of wildlife habitat corridors, increases in stormwater run-off, and/or effects on other economic and environmental synergies tied to the existing land use. In summary, there is potential for the project impact to go beyond the boundaries of the project itself.
Ecosystem service values from private and public land could be affected by one another as well, resulting in synergistic effects. However, the estimates presented here are calculated additively. Private and federal grazing use are interrelated because cattle that graze on federal land also graze on private land for part of the year. The result of these synergistic effects would depend on the situation. A decline in the availability of grazing on federal lands could affect demand for private land, which would be reflected in higher rental rates. In the West, however, private land has been in short supply or unaffordable, leaving few alternatives to federal land forage. Therefore, transportation costs and other factors that affect profit margins can make it more practical to reduce herd sizes or get out of ranching altogether (Torell et al., 2014a). A number of studies have explored the possible unintended consequences of declines in federal grazing land availability (e.g., grazing restrictions) and subsequent declines in beef production (Torell et al., 2014a, b; Runge et al., 2019; Lewin et al., 2019). Such changes in federal land availability could have unforeseen environmental consequences by making development investment opportunities on private land more attractive to private landowners (Runge et al., 2019). Synergistic relationships between private and federal ecosystem service values are an important area for future research.
Policy, Management, and Planning Applications
Conserving rangeland ecosystem services through preserving working rangelands has been the focus of conservationists in the United States for more than two decades (Maestas et al., 2002; Havstad et al., 2007; Brunson and Huntsinger, 2008; Maczko et al., 2011) and remains a societal concern, as evidenced in several recent studies (Hongli et al., 2013; Allred et al., 2015; Lark et al., 2015; Haggerty et al., 2018; WWF, 2018; Runge et al., 2019). Helping ranchers stay afloat as competition from other land uses increases can be supported through managing incentives (Havstad et al., 2007; Bryan et al., 2013; Farley et al., 2017) and working groups, such as the Sustainable Rangelands Roundtable (Maczko et al., 2016). Among other support mechanisms, this group developed tools that can weigh the benefits and costs of generating income from less traditional ecosystem services (Maczko et al., 2011; Maczko et al., 2016). Establishing sound methods to incorporate the value of these less traditional ecosystem services into policy and planning is also critical.
The main production unit from beef cattle ranching is the quantity of beef cows produced, and therefore sales value may seem like an obvious choice for estimating the societal value of this production. However, this measurement may undervalue the societal contribution of cattle production in certain areas. Figure 7 shows the distribution of the estimated ecosystem services value on a per cow basis. Figure 8 provides the states in the top 10 of value per beef cow. Hectares per beef cow in the 48 contiguous U.S. states ranged from 84.6 in Nevada to 0.7 in Maryland. Utah and Arizona (both in the top five of rangeland/pasture area per beef cow) were found to have the highest value of ecosystem services per beef cow (Figures 7 and 8). However, both of these states were ranked lower than other states in sales value per beef cow (44th and 37th, respectively). Sales value per beef cow was found by dividing the value from cattle and calves sales in each state by the total number of beef cows produced by beef cattle ranches and farms (NASS, 2017). In 2017, the sales value per beef cow in Utah was $1,007 (vs. $2,674 in ecosystem services) and $757 (vs. $2,367 in ecosystem services) per beef cow in Arizona.
Another example of the importance of considering values other than direct income generation in policy and planning can be seen by comparing the geographic distribution of total cattle ranching-based ecosystem services (Figure 9) versus the distribution of cattle/calves sales (Table 2) using NASS-defined economic regions. In 2017, the Plains (north and south), Mountain, and Pacific regions provided 83% of the total national value of ecosystem services from cattle ranching and 69% of the total value of cattle/calves sales. Comparing regions in the West, the combined sales value in the S. and N. Plains ($13.3 billion) is almost twice that of the Pacific and Mountain regions ($6.9 billion), yet the combined ecosystem services value is approximately 7% more in the Pacific and Mountain regions than in the Plains regions. In addition, the calculated percentage of ecosystem service value from federal rangeland in the Pacific and Mountain regions is 58% and 56%, respectively. This suggests that public land management and policy can greatly influence the value of ecosystem services in these areas.
Future research could consider how different options in land use and management affect the suite of ecosystem services from rangelands. This is especially important on federal grazing lands, where there is a significant research gap in the understanding of land management options, land use change, and their impact on ecosystem services (Torell et al., 2014c). In some areas, cattle ranches and farms may rely heavily on federal land management for their operation, and ecosystem services in these areas may be affected disproportionately by federal policy and land management decisions as compared to other areas. For example, the states of Utah and Oklahoma (Figure 10) were found to have similar total values in cattle ranching-based ecosystem services, yet 85% of this value in the state of Utah was from federal grazing lands whereas less than 0.5% of this value was from federal land in Oklahoma.
This study provided a systematic look into the potential for valuing ecosystem services associated with beef cattle ranching and the rangelands and pastures that support them. While concern for ecosystem services has increased since the Millennium Ecosystem Assessment (MEA, 2005), it is difficult to ensure their consideration and protection in policy and management. More recently, there has been a drive to incorporate ecosystem services into decision-making at the federal agency level (Donovan et al., 2015; NESP, 2016; Deal et al., 2017; Olander et al., 2018). Studies like the one presented here are needed to inform decision-makers and may help to conserve ecosystem service flows from rangelands and pastures moving into the future.
ACKNOWLEDGMENTS
The study was funded by the Beef Checkoff.
Talent identification and location: A configurational approach to talent pools

Purpose: Talent management (TM) has become a strategic priority for companies seeking to identify employees with outstanding performances and the potential to hold strategic positions in the future. In fact, talent is considered an intangible capital that adds value to the organisation. However, there are only a handful of studies in the literature that address the process of identifying talent in organisations for its subsequent development. Thus, the purpose of this paper is to reach a better understanding of the process of identifying and locating talent, while proposing a configurational approach as a theoretical framework for grouping talented individuals into different configurations or talent pools to initiate talent development in firms.
Design/methodology: Case study methodology research based on four companies that have implemented TM programmes in Spain.
Findings: The research questions formulated here and the case studies shed light on the process of identifying talent and on the criteria for grouping it in order to facilitate its future development. Our results highlight the following. First, talent means people with certain characteristics. Second, companies focus more on developing the talent identified than on considering the innate nature of that talent. Finally, talent can be found throughout an organisation, in both management and non-management positions. In turn, we conclude with the relevant theoretical contribution of the configurational approach to explain that a company's future competitive advantage is based on the different talent pools existing in its organisation that group talent for its differential management.
Practical implications: Our results imply major recommendations for companies on how to identify talent and group it into talent pools in order to implement a process of differentiated management involving a range of temporal pathways.
Originality/value: The identification and location of talent, as well as grouping it into talent pools, is an essential prior process for proposing the talent architecture that is so much in demand in the literature.
Introduction
The importance of talent and its management is highlighted both for companies and for the academic field. There are numerous reports (Deloitte, 2015; BCG, 2018) contending that Talent Management (TM) in an organisational context is currently a priority issue for companies, as it can be a source of competitive advantages in their dynamic and competitive environments, providing strategic opportunities and creating value (Lewis & Heckman, 2006; Collings & Mellahi, 2009; Farndale, Scullion & Sparrow, 2010; Schuler, Jackson & Tarique, 2011; Meyers & van Woerkom, 2014; Makram, Sparrow & Greasley, 2017; Shulga & Busser, 2019; Sparrow, 2019). In the academic field, the relevance of talent and its management is also highlighted in literature reviews through bibliometric analyses carried out by Iles, Preece and Chuai (2010b), Gallardo-Gallardo, Nijs, Dries and Gallo (2015), Gallardo-Gallardo and Thunnissen (2016) and McDonnell, Collings, Mellahi and Schuler (2017), which provide increasing evidence of the popularity of the research topic.
Further progress in the field of TM calls for studies on the following: (i) clarifying several aspects related to TM that are still imprecise (Thunnissen & Van Arensbergen, 2015; Gallardo-Gallardo & Thunnissen, 2016; Sparrow, 2019; Shulga & Busser, 2019), as despite abundant prior research on the definition of talent (Gallardo-Gallardo et al., 2013) and its operationalization (Nijs, Gallardo-Gallardo, Dries & Sels, 2014), there are still few studies on talent identification and location; (ii) the application of theoretical approaches such as the configurational one, which, while arguably increasing the fragmentation of TM research (Sparrow, 2019), would also help to explain the process of identifying talent and grouping it into talent pools; prior studies have already reported the existence of talent groupings (Björkman & Smale, 2010; Mäkelä, Björkman & Ehrnrooth, 2010), albeit without a theoretical grounding that explains the way of classifying these talented employees (Thunnissen & Gallardo-Gallardo, 2019); and (iii) new empirical evidence within a Spanish context, where prior research has indeed already been conducted (Vivas-López, Peris-Ortiz & Rueda-Armengot, 2011; Valverde, Scullion & Ryan, 2013; Vivas-López, 2014; Maqueira, Bruque & Uhrin, 2019), although there are only a handful of practical studies on how firms address the talent identification and grouping process.
Considering these antecedents, the objective of the study is to identify and locate talent in organisations and to propose a configurational approach as the theoretical framework for grouping it into different talent pools, so that a differentiated talent management process can be applied to each one of them. The identification of talent involves answering four research questions which, according to the literature (Collings & Mellahi, 2009; Iles, Chuai & Preece, 2010a; Gallardo-Gallardo et al., 2013; Meyers et al., 2013; Ross, 2013; Nijs et al., 2014), allow us to reflect upon how talent is identified and where it is located. Subsequently, based on a case study methodology, this paper analyses four cases of companies operating in Spain that implement TM, deriving theoretical propositions from our findings. The results conclude that talent is located in three pool configurations that constitute the bases for the development of the TM architecture that is being demanded in the literature (Ganz, 2006; Garavan, Carbery & Rock, 2012; Sparrow & Makram, 2015).
Talent identification and location
Certain studies in the literature have made a greater effort to shed light on conceptualising talent as an essential prior step to effective TM (Tansley, 2011; Gallardo-Gallardo et al., 2013; Meyers et al., 2013; Ross, 2013). However, there is still ambiguity over identifying talent in organisations. Along these lines, Nilsson and Ellström (2012) suggest the need to clarify certain aspects relating to the identification and location of talent because, as affirmed by Nijs et al. (2014:180), "Organizations report great difficulty in measuring talent accurately, reflecting the lack of theoretical foundations for talent-identification in the HRM literature." We have therefore included four questions, each posing apparently opposing alternatives, to be answered through the literature review (Collings & Mellahi, 2009; Iles et al., 2010a; Gallardo-Gallardo et al., 2013; Meyers et al., 2013; Ross, 2013; Nijs et al., 2014). The first two aim to explain how to identify talent and the other two where to locate it. How and where are key issues prior to the implementation of TM.
Talent: People or characteristics of people?
Two different approaches have been used to conceptualise talent, an object one and a subject one, which according to Gallardo-Gallardo et al. (2013) co-exist in the literature but are somewhat contradictory. According to the object approach, talent is defined as the characteristics of people. This approach is supported by authors who consider talent to be exceptional characteristics of people, among which certain studies include capacity, knowledge, ability, potential, skills, and performance (Tansley, Turner, Carley, Harris, Sempik & Stewart, 2007; Cheese, Thomas & Craig, 2008; Chuai, Preece & Iles, 2008; Silzer & Dowell, 2010; Stahl et al., 2012; Gallardo-Gallardo et al., 2013). In light of the myriad definitions we find under this approach, it is particularly interesting to identify the components of talent. For Gallardo-Gallardo et al. (2013) and Nijs et al. (2014), although the possession of special capacity, ability or skills is necessary to have talent, it is nevertheless insufficient. These authors consider the presence of non-intellectual attributes relating to affectivity to be necessary, such as commitment, according to Gallardo-Gallardo et al. (2013), and interest and motivation, according to Nijs et al. (2014). The latter consider the affective component of talent to be the result of adding the motivation and interest that make people work with "passion".
According to the subject approach, supported by Gallardo-Gallardo et al. (2013), talent is considered to be people; in other words, employees with special abilities and capacity that are reflected in high levels of performance and potential. Under this approach, Lewis and Heckman (2006:141) refer to "talent as a euphemism of people". Tansley et al. (2007:8), cited in Gallardo-Gallardo et al. (2013:295), define talent as "those individuals who can make a difference to organizational performance, either through their immediate contribution or in the longer-term by demonstrating the highest levels of potential".
Talent: Innate or developed?
Some definitions of talent, such as that by Silzer and Dowell (2010:14), which refers to "an individual's skills and abilities (talents) and what the person is capable of doing or contributing to the organization", open the debate as to whether talent is innate or the result of a learning process that enables its creation and development. As part of the debate, Meyers et al. (2013) propose a continuum in which they consider three possible situations. The first considers talent to be totally innate, meaning there are people with the same training who always perform better than others because they possess certain unique and profound characteristics that cannot be learned (Gallardo-Gallardo et al., 2013).
The second considers that talent is partly innate and partly developed. The authors that support this approach believe that innate talent is a necessary but insufficient condition to attain a high performance, and assume that there is a component of talent that is acquired.
The third, in which talent is defined as the result of a learning process, concludes that anyone can be a prodigy. In this situation, talent is seen as the result of a deliberate practice, effort, training, development and learning process based on experience, meaning that anyone can be talented (Collings & Mellahi, 2013; Gallardo-Gallardo et al., 2013; Meyers et al., 2013).
Talent: People or positions?
This question aims to determine whether talent involves those people with a high performance and potential, who can make a significant contribution to the organisation's future performance, without being linked to a specific position. Iles et al. (2010a) propose two approaches to TM: one exclusively based on people, and the other exclusively based on their position or job. The first approach involves a narrow view of TM based on the management of a limited group of people, a talent pool, with greater achievement and the capacity to make a significant difference in the organisation's present and future performance. Under this consideration, talent is not related to the position held by an employee. On the contrary, the approach based on the position considers talent to reside in the key positions within the organisation; in other words, in positions with a strategic value, whereby only employees that hold such positions can be classified as talent (Huselid, Beatty & Becker, 2005). Collings and Mellahi (2009) also support this approach, and define key positions as those that can potentially affect the future of the company and have strategic value with respect to the company's competitive advantage, noting that they do not necessarily have to be top positions and may be at an operational level.
Talent: Only the elite or throughout the organisation?
One of the key issues in TM and its definition is to locate talent. There are two approaches used in the literature: an inclusive and an exclusive one. Under the inclusive approach, everyone in the organisation has talent; any employee can be considered a strategic asset capable of generating value and achieving a competitive advantage, and should therefore be given the opportunity to demonstrate and develop it (Iles et al., 2010a; Stahl et al., 2012; Gallardo-Gallardo et al., 2013). By contrast, the exclusive approach is based on a segmentation or differentiation of the workforce, under which not everyone in the organisation can be considered talent. Talent resides only in a certain elite group within the organisation, which comprises the talent pool (Boudreau & Ramstad, 2005; Collings & Mellahi, 2009; Iles et al., 2010a; Gallardo-Gallardo et al., 2013). According to the exclusive approach, talent resides in employees with a high performance and potential. In turn, these employees must contribute significantly to achieving organisational objectives by means of an above-average performance (Silzer & Dowell, 2010; Stahl et al., 2012; Meyers et al., 2013). However, this is not sufficient, and must include other characteristics, such as experience, creativity, leadership, and attitude (Tansley, 2011; Dries, Van Acker & Verbruggen, 2012), which make up potential. Potential is defined as the capacity to progress and learn more quickly, and results in the ability to adjust to the company's future needs. For Tansley (2011), potential is related to an individual's ability to progress towards more senior roles and leadership positions, which she specifically defines as "someone with the ability, engagement and aspiration to rise to and succeed in more senior, more critical positions" (p. 272).
According to Silzer and Church (2009), potential is rarely used in relation to current work performance, but is typically used to suggest that an individual has the qualities to effectively perform and contribute in broader or different roles in the organisation at some point in the future.
Based on these four questions, in the following section we use a case study methodology to clarify how to identify, locate, and group talent for its subsequent development.
Case study as the research methodology
The methodology used to conduct this empirical research was the case study method (Yin, 1994). The reasons that justify the relevance and choice of this method are twofold. Firstly, the consideration of the research issues in terms of how and why, given that the current corporate context gives rise to the need to analyse why TM should be studied and how talent is located and grouped, which requires its prior identification. Secondly, according to Eisenhardt (1989), the case study methodology is recommendable for issues that are new, especially if the intention is to progress theoretically, as in the case of TM, which is defined as a discipline in its adolescence (Thunnissen et al., 2013a) and still growing (Collings et al., 2015; Gallardo et al., 2015).
The research can be classified as follows: firstly, it is explanatory by nature, as it seeks to find empirical evidence for the theoretical development of the debate based on the research issues raised and according to the conceptual framework obtained from a literature review, by deducing and defining a series of propositions within the new concept of talent pools as differentiated configurations. Secondly, with regard to the sample, this case study specifically involves four companies. The choice of a single case was not recommendable here, as it is not valid for generalisations and would be limited to a descriptive study of the organisation in question, with greater bias in the conclusions. We increased external validity and reduced bias by carrying out a pilot case study, and decided to replicate the research process in three other organisations, according to the recommendations of Eisenhardt (1989) that four is a suitable number. Thirdly, the criterion for using a case study method is to generate theory in the absence of a sound theoretical framework for TM research (Lewis & Heckman, 2006;Gallardo-Gallardo et al., 2013;Thunnissen, Boselie & Fruytier, 2013b;Al Ariss et al., 2014). According to this last criterion, our research is structured as a case study of a holistic nature (Yin, 1994), in which the unit of analysis is represented by the companies that have implemented a TM plan, and the level of analysis is determined in relation to human resources management strategy.
Sample and data collection
We used theoretical sampling to identify the selection of cases. To facilitate the data collection process, four companies were selected from different industries that have implemented TM in Spain: Hospitality (Case A), Telecommunications (Case B), Aerospace (Case C), and Infrastructures and services (Case D). We have conducted a pilot study in Case A, and decided to replicate the research process in another three cases until we reached information saturation. Regarding Case A, the general manager facilitated access to two key persons in TM implementation: the corporate HR director and a specialist in the field of TM.
In order to replicate the investigation in the other three cases, the corporate HR directors were contacted, inviting them to participate in the research. All the companies agreed to participate at an initial meeting in which they were briefed on the project. At the same time, they identified the key people that could outline contextual issues within their organisations and advise us regarding further data collection. Specifically, five more people were interviewed: the HR manager and a TM specialist in Case B, the TM manager in Case C, and the corporate HR director and the talent development manager of a business unit in Case D. More than one informant was therefore interviewed in three of the four cases, so we reduced the bias in the answers obtained. In Case C, the importance of the person interviewed rendered it unnecessary to include anyone else in the investigation. Data collection involved the analysis of numerous internal documents, some of which were very valuable because they are TM-specific, but also strategy documents, archival data in the form of annual reports, internal company magazine articles, and websites, as well as external documents, such as specialised publications, reports by outside organisations, and articles published in the media, in some cases by the key informants interviewed.
An interview protocol was subsequently developed, and an interview template was designed to obtain insights involving questions on a number of issues, including the implementation of TM, talent identification and location, and the practices developed in each one of them for different groups of employees. Fourteen face-to-face semi-structured and in-depth qualitative interviews were held with the seven key people mentioned above, some of whom were interviewed more than once. During the interviews, respondents were encouraged to describe and share information about their experiences both in relation to company strategies and their involvement in TM processes. The interviews, which on average lasted 90 minutes, were recorded for subsequent transcription, translation, and analysis.
Finally, two questionnaires were designed. The first was an eight-part questionnaire covering the following aspects: identification of the company and the respondent, general information regarding the implementation of TM, talent definition and identification, and talent development. This questionnaire was made up of 16 open questions, five dichotomous questions, 10 categorical questions with response options, and three closed questions of between 6 and 12 items each, measured on a five-point Likert scale. The second, based on the information obtained in the interviews and on the Chami-Malaeb and Garavan (2013) scale, was used to determine the TM practices applied in each talent pool. This questionnaire was emailed to 15 people considered specialists in TM. As in the work by Chami-Malaeb and Garavan (2013), analyses were performed to rule out possible multicollinearity, and a factor analysis was then conducted to ensure that each group of practices was applied for each talent pool and for different objectives.
Data analysis
The data analysis involved the free software VOSviewer (version 1.6.5). This provides easy-to-use, software-assisted qualitative data analysis focused on the visualisation of bibliometric networks, although it is also used for qualitative content analysis. The tool creates a map based on text data, specifically a term co-occurrence map. The advantage of using qualitative content analysis software is that it allows for transparency, speed of data processing, and a reduction in the amount of data required for analysis and objective interpretation. Term maps can be created directly from a text corpus, so the interviews were transcribed, translated into English, revised to correct possible errors, and saved in plain-text format. Subsequently, for all the interviews in each one of the cases, the data analysis allowed us to identify clusters of keywords that we could relate to some of the research questions.
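The term co-occurrence mapping that VOSviewer performs on a text corpus can be illustrated with a minimal sketch. The interview snippets and term list below are hypothetical stand-ins, and VOSviewer itself applies far more sophisticated term extraction and normalisation than this simple word-splitting:

```python
from itertools import combinations
from collections import Counter

# Toy corpus: each string stands for one transcribed interview (hypothetical content).
interviews = [
    "talent means people with potential and performance",
    "we develop talent through training and coaching",
    "potential and performance identify talent in people",
]

# Terms of interest (VOSviewer extracts these automatically from the corpus).
terms = {"talent", "people", "potential", "performance", "training", "coaching"}

def cooccurrence(docs, vocabulary):
    """Count how often each pair of terms appears in the same document."""
    pairs = Counter()
    for doc in docs:
        present = sorted(vocabulary & set(doc.split()))
        pairs.update(combinations(present, 2))
    return pairs

links = cooccurrence(interviews, terms)

# Pairs co-occurring in at least two interviews form the strongest links,
# which a term-mapping tool would draw as the thickest edges and cluster together.
strong = {pair for pair, n in links.items() if n >= 2}
print(strong)
```

Clusters in the resulting map then correspond to groups of terms that repeatedly appear together across interviews, which is how keyword clusters were related back to the research questions.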
Findings
The purpose of this work is to identify and locate talent in organisations and propose a theoretical approach to the grouping of talent for its subsequent differentiated development. The findings and ensuing discussion are arranged into two sections: the identification and location of talent, and the proposed configurational approach for its grouping.
Talent identification and location
The findings related to talent identification and location are articulated around the four questions raised in section 2.
As regards the issue of whether talent involves people or their personal characteristics, we have found evidence in the four cases analysed to show that these two approaches are not mutually exclusive, but instead complement each other in order to identify and locate talent. In the four cases studied, this means that talent is located in certain individuals, with the most recurrent terms being "people" in Case A, "individuals" in Case B, and "future leaders" in Case C. In the opinion of the companies subject to the study, it is pointless to identify talent if it is not personified in the individuals shown to possess it. Furthermore, our findings show that talented individuals have certain shared personal characteristics that are defined in the cases as: high levels of performance within the company (Cases A, B, C, and D) or outside the company (Case C), values that are coherent with those of the company, commitment, and the desire to grow (Case A), ability to learn quickly (Case B), mobility (Cases A and C), skills (Case D), and potential (Cases A and C).
The definitions of talent we have found in the four cases all confirm the importance of the aforementioned characteristics for talent identification. Thus, in an internal document kept by Case A (Talent Management. Boost your potential), we found talent defined as "an individual that possesses three characteristics: proven higher performance, a profile that is in line with the ethics and values of the group and the desire to develop personally and professionally within such group". Case B, also in an internal document (Development Conference Guideline), defined talent as "the future leaders and therefore the employees that are potentially capable of holding strategic management positions with key functions". In a specific internal TM document (Future Leaders), Case C defined talent as "an employee with the potential to take on a leadership position with the group in the future". Finally, the key respondents in Case D (corporate HR director and talent development manager), when interviewed, defined talent as "an individual with the capacity to learn faster and the ability to successfully apply such knowledge to new situations".
The presence of these characteristics in employees is often measured using certain talent identification practices over and above traditional performance assessment measures that also cater for the appraisal of potential, such as 360º feedback (Cases B and D) and the assessment centre (Cases A and C).
As to whether talent is innate or developed, our results showed that those responsible for talent assume there is an innate part of talent; what truly matters for companies is its development. Accordingly, the talent managers in Case B affirm: "We do not analyse how an employee has acquired talent, what interests us is how they develop their current talent and future potential. Our goal is to find the best tools for developing talent". The HR manager in Case C adds: "We are not interested in whether there is an innate part of talent. We are interested in that part that can be developed". In all the cases analysed, talent is identified by its ability to develop the characteristics identified in the preceding section, and these people are located in part by their scope for personal and professional development (Case A), the development of potential (Cases B and C), and future learning (Case D). The results therefore reveal the importance in all four cases of the scope for development, growth and learning among those employees identified as talent.
In this respect, the Director of the HR Corporate Development Department in Case D stated in an article published in the company's blog that "talent is both the capacity to learn faster, as well as the ability to successfully apply what is learned to new situations. In short, it is basically the greater capacity to successfully adapt to change; not to be prepared for a particular scenario, but rather to be prepared to profit from any possible scenario".
The development practices identified in the cases explain how important it is that these employees identified as talent should grow through coaching or training programmes or via mobility either within the company or internationally, and specifically that the development of talent is achieved with tools such as executive coaching (Cases A, B and C), events of visibility and exposure (Case A), assignation to international projects (Case D) and premium training, especially for top management positions (Cases A and C). For mid-management, mentoring (Cases A and C), rotation (Case D) and training in skills (Cases A, B and C) are particularly suitable. Finally, for non-executive positions, group mentoring, rotation and technical training constitute key tools for developing talent.
As regards talent: people or positions, regardless of the fact that in certain cases, such as Case C, TM is being developed exclusively for executive positions for budgetary reasons, in all our cases TM focused on the talented employee. In the opinion of the HR Manager in Case A: "belonging to the talent pool does not depend on the position held by an individual, but rather the person him or herself". Similar terms were used by the Head of Training and Development in Case A: "people do not belong to the talent pool because of the position they hold, but because of their performance and potential". In Case B, the process of identifying talent is based on an assignment of people, irrespective of the position they hold. In Case C, the identification of talent is the result of an analysis of the people that show potential and performance. Finally, in Case D, the Head of HR Development stated that "TM is a specific model for a group of people that have been highlighted". According to the HR Manager, "the identification of the members of the talent pool is based on a process of people being assigned by any company employee. Subsequently, the people identified are evaluated at a roundtable discussion, to effectively determine whether or not they can be considered as talent".
In Case D, talent is considered at a corporate level as people holding operating positions, as well as mid-management and top executive positions. It should be pointed out that at a business unit level, Case D has focused on implementing TM only for employees that hold management positions, as in Cases A and C.
As regards talent: only the elite or throughout the organisation, consistent with the prior results, the findings show that talented employees are identified according to certain characteristics, regardless of the position they hold and whether or not it is an executive position. In this sense, the Head of HRM in Case D stated that: "In our sector, there is a highly sought-after profile of individuals: topographers that have still not held management positions are vital for achieving competitive advantages. It is essential for our company to develop this talent". In turn, the HR Director in Case A affirms: "Talent at our company may be found anywhere in the organisation. Our challenge is to identify it in order to develop it and ensure these individuals hold key positions in the future". In the same blog article quoted above, the Director of the HR Corporate Development Department in Case D, after defining talent as the capacity to learn faster and to adapt to change, added: "And so indeed, it may be found anywhere in the organisation".
Configurational approach to grouping talent into Talent Pools for its subsequent development
Once talent has been identified, it needs to be located and grouped to continue the development process.
The different timelines in which individuals can develop their talent is a criterion that allows talent to be grouped. In our cases, we have found similarities in the way of grouping talent, and in the four cases we have identified three kinds of talent pools (Figure 1). Figure 1 shows that Talent pool 1 is comprised of top managers who would develop and hold future strategic management positions most quickly. In Case A, they are called "top talent", in Case B "successors", in Case C "ready now", and in Case D "top management". Talent pool 2, whose members develop over a longer period of time, could come to form part of Talent pool 1 through a process of TM and hold strategic management positions in the future. In the cases analysed, this talent pool was called "top leaders" in Case A, "high potentials" in Case B, "significant growth within 1-3 years" in Case C, and "mid-management" in Case D. Talent pool 3 is comprised of people with talent that do not hold management positions with the company, although they do hold key positions, meaning that the identification of talent is essential for the company to have its talent located for future development, to avoid it leaving, and to guarantee the succession of management positions.
The TM objective in this talent pool would be to locate this talent, as such persons do not hold management positions, and the talent could therefore be spread throughout the organisation. In the cases subject to analysis, this talent pool was called "high potential" in Case A, "raw diamonds" in Case B, "significant growth within 3-5 years" in Case C, and "technicians, clerks and topographers" in Case D.
Discussion
The discussion is articulated around talent identification and location, on the one hand, and its grouping, on the other.
How to identify and locate talent
As regards the process of identifying and locating talent, we may reach four conclusions. First, we can conclude that identifying talent requires locating those individuals with certain characteristics, such as knowledge and skills, as well as certain attitudes, such as commitment and leadership, learning ability, and motivation, that confirm their performance and potential. Therefore, the dilemma as to whether talent is the skills or the person is resolved when the companies define what they consider as talent within the context of their strategies, and subsequently identify the employees that meet the required conditions.
According to the evidence, all these definitions reveal that the companies analysed consider talent to be the individuals, people or employees that possess a series of features that set them apart or make them different from other employees, meaning that both approaches to talent (Gallardo-Gallardo et al., 2013), the object approach (talent as personal characteristics) and the subject approach (talent as certain people), are appropriate for its identification.
To identify the presence of these features in people, it is necessary to address two dimensions of talent: one refers to the nature of these characteristics in talented individuals, and the other involves their temporal dimension. Within the former, we may distinguish between intellectual attributes, which include capacity, skills and abilities, and affective attributes, which include values, commitment, attitude, and motivation; this is in line with the two components of talent, ability and affective, reported by Nijs et al. (2014). With regard to the latter, there is a need to differentiate between talent's current and future dimensions. On the one hand, talent has a current dimension, high performance, which is measured by the actual contribution an individual makes, and has made in the past, in terms of performance; this constitutes an indicator of an employee's future performance, meaning that their past experience is vital (Garavan et al., 2012). This allows differentiating talent, which "can be operationalized as performing better than others or performing consistently at one's personal best" (Nijs et al., 2014: 182). On the other hand, talent also has a future dimension, potential, which is measured by an individual's future capacity to adapt to the company's strategic needs, to learn and to progress, and which materialises in higher levels of performance in the future. Talent's potential includes an employee's commitment and attitude in relation to growing rapidly and progressing within the company, and has a multiplying effect on future performance, in line with authors such as Chuai et al. (2008). Therefore, based on our evidence and the work of Silzer and Church (2009), Tansley (2011) and Dries et al. (2012), talent's differentiating features are as follows: besides knowledge, skills and high performance, the capacity to grow, learn, advance, progress and develop quickly in order to improve, face new challenges, apply what has been learned, influence the company (ambition), and be flexible in light of the company's future needs (commitment).
Second, once talent has been identified and located, the main task is to focus on developing the part of talent that can be acquired and developed, rather than on analysing which part is innate. Our results are more in line with the second approach of Meyers et al. (2013), which holds that talent has an innate part and another part that is susceptible to development. As posited by Silzer and Dowell (2010), we found that the companies analysed here took a pragmatic approach to talent, without differentiating between its innate and developed components.
The companies' remit is to focus on the component of talent that can be developed by means of personal growth based on relations, working experience, and training (Garavan et al., 2012). The practices for developing the talent identified in the cases here are some of those defined by Garavan et al. (2012), and they differ according to the talent configuration or groups being considered.
Third, it is not the position held in a specific job that determines inclusion in a talent pool, but rather the presence of certain characteristics in specific individuals. This means that talented individuals may not be in management posts and can be identified irrespective of the position they hold, supporting the approach based on people rather than on their positions, as maintained by Collings and Mellahi (2009).
Finally, we may conclude that although TM focuses on a differentiated, elite group of talented individuals, adopting what scholars refer to as an exclusive talent approach (Iles et al., 2010a; Gallardo-Gallardo et al., 2013), employees with the aforementioned characteristics can be found anywhere in the organisation.
Based on the above, we establish the following proposition for identifying talent (Figure 2): Proposition 1. Talent identification is based on identifying those individuals, whether or not they hold a management position, who show high levels of current performance and future potential as a result of a combination of intellectual attributes (capacity, knowledge, skills and abilities) and affective ones (commitment, attitude, and motivation) that can be developed in order to guarantee strategic company positions in the future.
How to group talent. Talent pools as a configuration
As a result of the identification of talent derived from the cases analysed, we can conclude that the configurational approach (Doty, Glick & Huber, 1993;Meyer, Tsui & Hinings, 1993;Delery & Doty, 1996) constitutes the best theoretical framework for understanding the grouping of talent in organisations.
Our cases show that talent can be considered as integrated into different configurations, complying with the conditions of creation, differentiation and equifinality.
With respect to the creation of configurations, different types can be found in organisations, formed by either exogenous or endogenous forces. In the former case, coercive, regulatory and mimetic pressures appear to have resulted in isomorphism across the four companies analysed with respect to the definition and identification of the same talent configurations. In the latter case, endogenous forces may trigger a cognitive process that creates structures; in our cases, the existence of people with talent leads to a differentiation of the workforce into different configurations.
The employees considered as talent in each of the configurations identified, or talent pools, share a common feature: they have a proven high performance and future potential, albeit without a consistent profile with respect to capacity, knowledge or experience, or with respect to the level of responsibility of their hierarchical functions.
Secondly, in relation to differentiation, the configurational approach defines organisational configurations as "multidimensional constellations of different conceptual features that commonly appear together" (Meyer et al., 1993: 1175). As suggested previously by certain authors, in our cases we identified three different configurations of talent, or talent pools. Similarly, Björkman and Smale (2010) identified three groups of talent: a pool for senior managers, one for intermediate managers, and another for people at the start of their career. Mäkelä et al. (2010), in turn, distinguished between senior positions, top potentials, and potentials.
According to the configurational approach, different TM tools or practices are developed in each configuration to better achieve objectives (horizontal adjustment) for more efficient TM and to achieve the strategic objectives established by the talent strategy (vertical adjustment).
Thirdly, the principle of equifinality is present in the configurations, according to which a system can reach the same end result with different initial conditions and in a variety of ways (Doty et al., 1993). Accordingly, the three ideal talent pool configurations identified cater for the proposed objective of guaranteeing future strategic positions, although with different employees, different practices, and at different times. The fact that some talented employees hold management positions means they have a shorter path to reach strategic positions within the company than others that do not, given that although possessing talent implies a high likelihood of future promotion, the progress towards key or strategic management levels in the company's future will take place gradually over time.
For these reasons, we identified three talent configurations, which we call Talent pools 1, 2 and 3, and which must be managed differently (Figure 2). First, Talent pool 1 comprises talented employees that hold executive management positions. Second, Talent pool 2 comprises talented employees that hold mid-management positions. Third, Talent pool 3 comprises talented employees that do not hold management positions.
The first two configurations (C1 and C2) represent people with talent that hold management positions (executive and mid-management) at the company. They are highly valuable to the organisation and its future strategy, and they have to be involved in TM. Investment in them is required because they are of extraordinary value for the company's future competitive advantage. The TM objective for both configurations is to develop these people for strategic positions in the future.
Finally, the third configuration (C3) is comprised of people with talent that do not hold management positions within the company (Talent pool 3), although they do hold key positions, meaning that the identification of talent is essential for the company to locate it for future development, to avoid losing it, and to guarantee the succession of management positions. The TM objective in this case would be to locate this talent, as such persons do not hold management positions and their talent could therefore be spread throughout the organisation.
Based on the above, we propose the following (Figure 3).
Conclusions
Although talent is deemed to be intangible capital that adds value to organisations (Alonso & Garcia-Muiña, 2014), there are few studies in the literature that address the process of identifying and locating talent. Establishing the components of talent for its identification and location underpins this paper. Our initial conclusion, accordingly, is that from a corporate perspective talent involves people, and that TM focuses solely on the part of talent that can be developed.
This means that nurturing talent development is vital for companies. We propose the configurational approach as a theoretical framework for grouping talent into different configurations or talent pools. Our second conclusion is that talent may be found anywhere in an organisation, in management positions or not, and that its grouping is crucial for its differentiated development, in terms of both the tools and the time required, depending on the talent pool involved.

Several contributions and implications can be derived for academics and practitioners. For academics, on the one hand, a theoretical framework is proposed (as called for by Thunnissen et al., 2013b): the configurational approach for grouping talent into different configurations or talent pools for the application of differentiated development policies. On the other hand, an empirical study is provided in a field dominated by theoretical analyses (Nijs et al., 2014), and in a Spanish context where there have been very few publications to date (Vivas-López et al., 2011; Valverde et al., 2013; Vivas-López, 2014; Maqueira et al., 2019). From the perspective of practitioners, this work contributes different configurations of talent pools for the design and implementation of a series of TM practices that differ for each configuration, thus allowing companies to develop talent at different points in time to achieve their future strategic objectives.
In this regard, companies can be more aware that talent has become a determining factor of competitiveness, and through efficient TM they can restructure the knowledge, experience and commitment of those employees that contribute the most to the company's future, and build competitive advantages that are sustainable in the long term.
Social reward network connectivity differs between autistic and neurotypical youth during social interaction
A core feature of autism is difficulties with social interaction. Atypical social motivation is proposed to underlie these difficulties. However, prior work testing this hypothesis has shown mixed support and has been limited in its ability to understand real-world social-interactive processes in autism. We attempted to address these limitations by scanning neurotypical and autistic youth (n = 86) during a text-based reciprocal social interaction that mimics a “live” chat and elicits social reward processes. We focused on task-evoked functional connectivity (FC) of regions responsible for motivational-reward and mentalizing processes within the broader social reward circuitry. We found that task-evoked FC between these regions was significantly modulated by social interaction and receipt of social-interactive reward. Compared to neurotypical peers, autistic youth showed significantly greater task-evoked connectivity of core regions in the mentalizing network (e.g., posterior superior temporal sulcus) and the amygdala, a key node in the reward network. Furthermore, across groups, the connectivity strength between these mentalizing and reward regions was negatively correlated with self-reported social motivation and social reward during the scanner task. Our results highlight an important role of FC within the broader social reward circuitry for social-interactive reward. Specifically, greater context-dependent FC (i.e., differences between social engagement and non-social engagement) may indicate an increased “neural effort” during social reward and relate to differences in social motivation within autistic and neurotypical populations.
Introduction
Humans are inherently social creatures, relying on social interactions to survive and thrive, interactions that can be highly rewarding (Krach et al., 2010). Difficulty with social interactions is a central challenge and core diagnostic criterion for multiple psychiatric conditions, including autism (AUT, American Psychiatric Association, 2022). One controversial hypothesis is that these social challenges are due to differences in social motivation and social reward processing (Chevallier, Kohls, Troiani, Brodkin, & Schultz, 2012, but see Jaswal & Akhtar, 2018) and that these difficulties with social interaction may also be associated with an atypical neural circuitry (Clements et al., 2018). Thus, determining whether and how the neural substrates of social reward differ in typical and atypical development is critical to understanding the mechanisms underlying challenges in social interaction. The motivational-reward network and the mentalizing network are both thought to support efficient social interactions (Schurz et al., 2014). These two networks also tend to co-activate during social interaction (Alkire et al., 2018; Redcay and Schilbach, 2019; Xiao et al., 2022).
Despite theoretical arguments that atypical social reward circuitry may contribute to atypical social interactions in autism, empirical support is mixed. A recent meta-analysis surveying activation studies found that only a little more than half of the studies investigating social reward processing in autism (15 out of 27) showed atypical behavioral and/or physiological responses (Bottini, 2018).
Several factors may give rise to conflicting findings on the neural substrates of social reward processing in autism. One such factor is the heterogeneity among autistic (AUT) individuals, including the level and display of social motivation (Wing, 1997). While autistic individuals often behave and express themselves in idiosyncratic ways, these behaviors do not necessarily indicate differences in motivation or desire for social connection (Jaswal and Akhtar, 2018).
Some atypical behaviors interpreted as social disinterest could have alternative explanations unrelated to social motivation, such as anxiety and self-regulation difficulties (Kapp et al., 2011).
Many autistic individuals also report high levels of loneliness and often long for friendship (Mazurek, 2014) -something that would be inconsistent with a "deficit" in social motivation.
Thus, it is important to consider variability in subjective experiences of differences in social motivation and reward.
Another reason for prior mixed findings may be that the experimental paradigms used to study social reward often use non-interactive contexts. For example, a photo of a stranger's smiling face is often used as a social reward. This lack of ecological validity is problematic, as recent studies have emphasized how participation in social interaction (as opposed to mere observation) alters the underlying cognitive and neural processing (Redcay and Schilbach, 2019; Schilbach et al., 2013). Therefore, researchers have increasingly advocated for assessing neural processes of social interaction by embedding the brain in a perceived live interactive setting, which may be critical to eliciting core social processing differences in autism (Rolison et al., 2015).

(A note on terminology: we acknowledge the different language preferences when referring to autism. Due to a reported preference for identity-first language among a majority of autistic individuals and caregivers, we use identity-first language in this article.)
Relatedly, a potential methodological limitation of past work lies in its analytic approach. While past neuroimaging studies predominantly focused on regional activation patterns to study social interaction and social reward, more recent studies have begun to investigate the brain's functional coupling using functional connectivity (FC), which is thought to reflect the brain's interregional communication (van den Heuvel and Hulshoff Pol, 2010). Such studies have produced mixed findings on the neural signature of autism, with some reporting weaker connectivity in AUT compared to NT ("hypoconnectivity" in autism), others reporting greater connectivity in AUT ("hyperconnectivity"), and some claiming a combination of both (Hull et al., 2017). A few recent studies have begun to investigate how FC changes across cognitive states in AUT (e.g., Sridhar et al., 2021), termed FC reconfiguration (for example, between task-evoked conditions and null-task-demand conditions), as low FC reconfiguration may indicate that the FC architecture adapts easily to task processing without significant 'neural effort' (Schultz & Cole, 2016). Moreover, while previous literature has highlighted the critical role of the motivational-reward network and the mentalizing network for social reward processing, the interplay between the two has rarely been systematically investigated.
To address these gaps, we examined how the brain's FC supports reward processing and social interaction, focusing on key brain regions within the social reward circuitry. Our experimental paradigm involved a "live" chat room, where neurotypical and autistic youth between middle childhood and early adolescence shared self-relevant information and received engaged responses from a peer or computer. This specific age range corresponds with considerable changes in youths' social competencies and interpersonal relationships (Lam et al., 2014), offering a valuable window to understand underlying neural circuitry (Merchant et al., 2022). We hypothesized that connectivity patterns of the mentalizing and the motivational-reward network would be differentially modulated by social-interactive context and that the AUT group would show different connectivity patterns from the NT group. We further examined whether these differences can be accounted for by the heterogeneity of subjective experiences in social motivation and reward.
Participants
Neurotypical participants were recruited using a database of families in the nearby metropolitan area and word of mouth. Autistic participants were recruited through local organizations, professional settings, listservs, and social media groups related to autism as well as word of mouth. Sixty-two autistic youth and ninety-nine neurotypical youth aged 7-14 years were recruited. The final sample (n = 86, see Appendix S1 for detailed inclusion criteria) included 43 autistic and 43 neurotypical youth, matched in age, gender, full-scale IQ, and in-scanner motion (Table 1). All procedures were approved by the Institutional Review Board of the University of Maryland, and parents and youth provided informed consent and assent.
Experimental design
Youth participated in a real-time social interactive experiment designed to probe social reward systems (Warnell et al., 2018). As shown in Fig. 1, the participant engaged in a text-based 'chat' with an age- and gender-matched peer (who was actually simulated) by answering yes or no questions about themselves (e.g., "I play soccer"), followed by an engaged or disengaged response from the peer. They also completed computer trials, in which they shared information with the computer (for details, see Appendix S2).

[bioRxiv preprint (not certified by peer review), this version posted June 7, 2023; https://doi.org/10.1101/2023.06.05.543807; CC-BY-NC-ND 4.0 International license]
[Figure 1 goes here]

Here, we focused on four types of reply events: Peer Engagement (PE), Peer Non-engagement (PN), Computer Engagement (CE), and Computer Non-engagement (CN), each with six trials per condition per run. The experiment was repeated over four runs, and each run lasted approximately 6.2 minutes. We included participants with three or more usable runs (n = 15 with 3 runs and n = 71 with 4 runs).
Post-scan interview
Immediately following the MRI scan, we conducted a verbal interview assessing how much the participants enjoyed interacting with the computer and the peer on a 5-point Likert scale. Two post-scan enjoyment scores were used in this study: a social motivation score (the difference between how much they wanted to see the answer from the peer vs. from the computer) and a social reward score (how much they liked chatting with the peer vs. the computer; see Appendix S3 for details). We also assessed the participants' belief in the live illusion by asking them if there was anything else they wanted to tell us. Seventeen participants who expressed disbelief were excluded from further analysis.
Image acquisition
The fMRI data were collected using a Siemens 3T scanner with a 32-channel head coil at the Maryland Neuroimaging Center (MAGNETOM Trio Tim System, Siemens Medical Solutions).
Regions of interest
Since our study primarily focuses on reward, we chose three a priori seed regions in the motivational-reward network: the bilateral amygdala, nucleus accumbens (NAcc), and ventral caudate (Fig. 2A). The amygdala was anatomically defined using the Harvard-Oxford subcortical structural probability atlas in FSL, while the NAcc and ventral caudate were defined as described in a previous FC-based study (Di Martino et al., 2011). These regions were chosen given their contribution to social reward processing in autistic (Kohls et al., 2013b) and neurotypical youth (Ernst et al., 2005).
Whole-brain context-modulated FC analysis
We evaluated the whole-brain context-modulated FC pattern using the subcortical ROIs as seeds with a beta-series connectivity approach. An overview of the analysis steps can be found in Fig. 2. First, a separate regressor was created for each reply event, which was then convolved with a canonical HRF to model the BOLD response. Second, a trial-specific activation (beta coefficient) was estimated for each voxel using the Least Squares-All (LS-A) approach (Mumford et al., 2012), generating trial-wise regressors to identify trial-specific activation. We also censored frames with FD greater than 1 mm. Third, the beta series were averaged within each ROI, and the averaged ROI series were correlated with the beta series of all voxels using Spearman correlation to compute the whole-brain context-modulated FC. The correlation coefficients were subsequently Fisher z-transformed.
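The beta-series steps above (trial-wise betas, ROI averaging, Spearman correlation, Fisher z-transform) can be sketched as follows. This is a minimal illustration: the array sizes and the simulated betas are hypothetical stand-ins for the LS-A estimates, not the authors' actual data.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): 24 trials of one condition, 50 voxels in the
# seed ROI, 200 target voxels. In the real pipeline the per-trial betas come
# from the Least Squares-All GLM fit; here we simply simulate them.
n_trials, n_seed_vox, n_target_vox = 24, 50, 200
seed_betas = rng.normal(size=(n_trials, n_seed_vox))
target_betas = rng.normal(size=(n_trials, n_target_vox))

# Average the beta series across seed-ROI voxels ...
seed_series = seed_betas.mean(axis=1)            # shape: (n_trials,)

# ... correlate it (Spearman) with every target voxel's beta series ...
rho = np.array([spearmanr(seed_series, target_betas[:, v])[0]
                for v in range(n_target_vox)])

# ... and Fisher z-transform so maps can be compared across participants.
fc_z = np.arctanh(rho)
```

The result is one z-valued connectivity map per seed and condition, which is what enters the group-level contrasts described next.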
General linear model analysis
To examine the effects of social context regardless of engagement, we examined the composite social context contrast, comparing peer (PE+PN) to computer (CE+CN) trials. To examine the effects of social reward specifically, we compared PE (peer engagement), the social-interactive reward condition in which participants receive an engaged, positive response from a peer, versus CE (computer engagement), in which participants receive a response from a computer. Additionally, we compared PE vs. PN (peer non-engagement), where the peer is away and does not give a response. For simplicity, we refer only to the PE vs. CE contrast in the remainder of the paper when discussing our social reward contrast, because we did not observe significant effects for PE vs. PN.
For each seed ROI, we conducted group-level analyses to identify voxels with significant main effects and group differences in the six whole-brain FC contrasts using the AFNI function 3dMVM, while controlling for age, gender, and the number of runs. To account for the multiple testing of three seeds and three contrasts, significant clusters were determined with a conservative cluster-wise false-positive rate of 0.0056 (0.05/9; minimum cluster size of 124 voxels, determined by 3dClustSim based on the average noise smoothness from the residual data) and a voxel-wise p-value of 0.001.
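The corrected rate quoted above is simply the family-wise alpha divided across the nine seed-by-contrast tests:

```python
# Bonferroni-style correction over 3 seeds x 3 contrasts, as described above.
n_seeds, n_contrasts = 3, 3
family_alpha = 0.05
per_test_alpha = family_alpha / (n_seeds * n_contrasts)
print(round(per_test_alpha, 4))  # 0.0056
```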
Sample characteristics and behavioral findings
[Table 1 goes here]

The characteristics of the matched samples are summarized in Table 1. A two-way analysis of variance was used to examine the effects of interaction partner (peer vs. computer) and group (NT vs. AUT) on self-reported social reward and social motivation. We found a significant effect of social context, such that both groups enjoyed interaction with peers more than with the computer (ps < 0.001), but no effect of group and no interactions were found.
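A minimal sketch of the 2 x 2 analysis described above, using statsmodels on simulated ratings (all numbers are hypothetical). Note that the real design treats partner as a within-subject (repeated) factor, which this simplified between-subjects sketch ignores.

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)

# Simulated 5-point enjoyment ratings: peers are rated higher than the
# computer in both groups, with no built-in group difference.
n = 43  # participants per group, matching the paper's sample
ratings = np.concatenate([
    rng.integers(3, 6, 2 * n),   # peer ratings (NT then AUT)
    rng.integers(1, 4, 2 * n),   # computer ratings (NT then AUT)
]).astype(float)
df = pd.DataFrame({
    "rating": ratings,
    "partner": ["peer"] * (2 * n) + ["computer"] * (2 * n),
    "group": (["NT"] * n + ["AUT"] * n) * 2,
})

# 2 (partner) x 2 (group) ANOVA with interaction term.
table = anova_lm(ols("rating ~ C(partner) * C(group)", data=df).fit(), typ=2)
print(table)
```

With these simulated effects, the partner main effect is strongly significant while the group and interaction terms are not, mirroring the reported pattern.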
Context-modulated FC of social reward and social interaction
We used the trial-specific beta coefficients of the three a priori seed regions. Below we report whole-brain context-modulated FC analyses for the social reward contrast (i.e., social engagement vs. non-social engagement: PE vs. CE) and the social context contrast (i.e., chatting with a peer vs. the computer: PE+PN vs. CE+CN). For the main effect of social context, we observed stronger connectivity between the left NAcc and the left inferior frontal gyrus (IFG) during social interaction (Fig. 3A). We also found significantly stronger FC in the AUT group between the amygdala seed and three regions, the bilateral posterior superior temporal sulcus (pSTS) and the right temporoparietal junction (TPJ), in the social reward contrast. All analyses were performed with a primary voxel-wise threshold of 0.001 and a cluster-wise threshold of 124 voxels, correcting for nine comparisons (3 seeds × 3 contrasts). No other significant effects survived cluster-wise correction. A summary of all significant clusters can be found in Table S1 in the Supporting Information.
Neural correlates of interaction enjoyment
To explore the mechanisms underlying the significant group differences in context-modulated FC, we conducted post-hoc brain-behavior analyses relating the FC values of the three significant clusters in the social reward contrast to the post-scan social reward and social motivation scores. As shown in Fig. 4A, after controlling for diagnosis, stronger FC between the amygdala and the left pSTS was related to lower social motivation scores (i.e., the difference between how much participants wanted to see the answer from a peer vs. the computer) within the combined sample (t = -2.24, p < 0.05). Similarly, as shown in Fig. 4B, FC between the amygdala and the right pSTS was negatively correlated with social reward scores (i.e., the difference between how much participants liked the answer from a peer vs. the computer; t = -2.25, p < 0.05), after controlling for diagnosis.
Discussion
To understand whether social reward circuitry differs between autistic and neurotypical youth, we used a social-interactive paradigm to investigate task-induced FC changes during social reward processing in a sample of 86 youth between 7 and 14 years old. Middle childhood and early adolescence constitute a critical developmental period, as difficulties in social interaction during this period predict later mental health difficulties, poorer academic outcomes, and difficulties in later employment (Bornstein et al., 2010; Burt et al., 2008). Given earlier work highlighting the role of FC or co-activation between the mentalizing and reward networks during social interaction (Alkire et al., 2018; Assaf et al., 2013; Smith et al., 2014; Xiao et al., 2022), we hypothesized that connectivity profiles of regions associated with reward processing and mentalizing would be modulated by social context and that the AUT group would show different connectivity patterns in these regions. We found partial support for these hypotheses. Consistent with our hypotheses, we found greater connectivity within reward-relevant regions during social reward processing (i.e., receiving a positive response from a peer compared to a computer) in the full sample. We also found significant group differences in connectivity between key regions within the broader social reward circuitry (i.e., encompassing social-cognitive and reward-relevant regions of the amygdala and bilateral pSTS and right TPJ), and the amygdala-pSTS connectivity strength was related to individual differences in self-reported social motivation and reward across groups.
Social context relates to enhanced mentalizing and reward network connectivity
For the main effect of social context (PE+PN vs. CE+CN, i.e., interacting with a peer vs. computer), consistent with our hypothesis, we found increased connectivity between a core component of the reward system (NAcc) and a part of the mentalizing network (left IFG). Left IFG is an important region associated with mentalizing and empathy (Arioli et al., 2021) and has also been found to encode youth's interest levels in receiving feedback from peers (Guyer et al., 2012). Moreover, Pfeiffer and colleagues studied gaze-based interactions and found the activation in IFG and NAcc to be differentially modulated during interactions with a perceived human partner compared with a perceived computer (Pfeiffer et al., 2014). Therefore, a potential interpretation of this finding is that strengthened left IFG -NAcc connectivity is related to detecting and updating relevant social signals (particularly from interaction partners), with the NAcc encoding the reward prediction error (Schultz et al., 1997) and the IFG updating expectancies following social feedback (Kohls et al., 2013a;Nelson and Guyer, 2011). However, it is important to note that the IFG is a multi-functional region and this interpretation relies on a reverse inference (Poldrack, 2006).
Stronger amygdala-mentalizing connectivity during peer engagement in AUT
We observed greater connectivity in the AUT group between the amygdala and bilateral pSTS and nearby right TPJ in the mentalizing network, when participants received a positive response from a peer compared to a computer (i.e., social reward contrast [PE vs. CE]). The amygdala is important for social cognition and social motivation (Chevallier et al., 2012; Kohls et al., 2013b), and the TPJ and pSTS are among key regions for social cognition and social interaction (Redcay et al., 2010). Our findings are consistent with prior neuroimaging studies demonstrating greater connectivity in autism (Chien et al., 2015; Dajani and Uddin, 2016; Fishman et al., 2014; Jasmin et al., 2019; Redcay et al., 2013; Supekar et al., 2013; Uddin et al., 2013; You et al., 2013), especially between subcortical and cortical regions (Cerliani et al., 2015; Ilioska et al., 2022).
More specifically, connectivity between pSTS/TPJ and the amygdala has often been highlighted in previous neuroimaging studies of reward-related processing in autism (Dichter, 2012). For example, Abrams and colleagues found weaker connectivity between human-voice-selective pSTS and the reward circuit, including the amygdala, in autistic children during rest (Abrams et al., 2013), indicating a potential role of amygdala-pSTS connectivity in reward and human voice processing. In contrast, stronger amygdala-STS connectivity was found in autism during spontaneous attention to eye gaze in emotional faces during a cognitive control task (Murphy et al., 2012), indicating the role of pSTS-amygdala connectivity in social information processing.
Moreover, structural connectivity strength between the amygdala and pSTS was positively correlated with autistic traits (Iidaka et al., 2012).
Linking neural data to behavioral data may help us gain a more mechanistic understanding of the functional significance of the connectivity differences between groups (Picci et al., 2016). Interestingly, these group-level connectivity differences were present despite no group differences in behavioral self-report of either social motivation or social reward. Neural measures might prove to be more sensitive to identifying group differences than behavioral measures of social motivation/reward, due to potential confounds of bias in self-report (Van de Mortel, 2008) or observer expectations of social behavior (Jaswal and Akhtar, 2018). Importantly, we did find that behavioral reports tracked individual differences in connectivity, and this relation was similar in both groups. Specifically, connectivity between the amygdala and left and right pSTS was negatively correlated with self-reported social motivation and social reward scores. Thus, greater connectivity changes between conditions (i.e., greater FC reconfiguration) might reflect reduced social motivation or reduced sensitivity to social reward.
We offer two possible interpretations for these observations: first, that greater FC reconfiguration indexes a domain-general difficulty switching between high-demanding and low-demanding conditions, and, second, that it may relate to social anxiety.
Greater FC reconfiguration indexes greater neural effort?
First, greater FC reconfiguration between regions within the social reward circuitry may signal greater neural effort associated with social interaction (Schultz & Cole, 2016). Speaking to this possibility, a previous study found that high-demand social interaction elicited greater connectivity between social brain regions in autistic compared to neurotypical youth in a live social interaction paradigm (Jasmin et al., 2019). When contrasting the condition with high social demand (conversation) with that of low social demand (repetition), they similarly found stronger task-evoked connectivity in social processing regions (e.g., STS) in autistic participants, and the increases in connectivity were positively related to the level of social impairment. Although these findings speak to differences in FC modulation in autism for social tasks, greater FC reconfiguration (i.e., task vs. control FC updates) has also been observed in autism during nonsocial cognitive tasks. For example, when contrasting a sustained attention task with rest, You et al. (2013) found that autistic children had increased distal connectivity between frontal, temporal, and parietal regions compared to neurotypical children, and this increased connectivity was associated with inattention problems in everyday life. In addition, Barttfeld et al. (2012) reported more pronounced connectivity changes in autistic adults than in neurotypical adults across three cognitive states with varying attention demands (i.e., rest, interoceptive, and exteroceptive attentional states).
Lexical processing induces similarly greater FC reconfiguration in autistic adolescents as compared with neurotypical peers (Sridhar et al., 2021). Beyond autism, Schultz & Cole (2016) found that neurotypical individuals with better task performance had smaller task-evoked FC reconfiguration when switching from task to rest, suggesting that better-performing individuals 'pre-configure' their FC at baseline (i.e., rest) to be more efficiently updated for various processing demands. Taken together, greater FC reconfiguration may indicate that social interaction requires greater neural effort and thus is mentally taxing for individuals who find social interaction difficult, regardless of diagnostic status.
Strengthened amygdala connectivity reflects social anxiety?
A second potential explanation for stronger connectivity between the amygdala and mentalizing regions relates to social anxiety. The amygdala is a multi-functional brain region. While here we chose the amygdala as a seed given its central role in reward processing and social motivation, the amygdala is also known for its involvement in regulating emotions such as anxiety (Davidson, 2002). Moreover, anxiety disorders often co-occur with autism, as evidenced by a recent meta-analysis showing about 40% of autistic youths have received a diagnosis of at least one anxiety disorder (van Steensel et al., 2011), while atypical amygdala connectivity is also common in these anxiety disorders (Sylvester et al., 2012). For example, in adults with social anxiety disorder, Pannekoek and colleagues found heightened resting-state connectivity between the right amygdala and the left middle temporal gyrus overlapping with our left pSTS cluster (Pannekoek et al., 2013). Thus, it is possible that for autistic youth, especially those with social anxiety or a history of peer rejection, the social stimulus could be more anxiety-inducing, causing heightened amygdala connectivity. To test this possibility, we reran the analysis after excluding the eight autistic youth with anxiety disorders. We found similar significant group differences in right pSTS and TPJ as well as significant brain-behavior relationships with interaction enjoyment, which does not offer direct support for this possibility.
Furthermore, our post-hoc exploratory analysis using the parent-reported social phobia scale of the Screen for Child Anxiety Related Emotional Disorders (Birmaher et al., 1997) failed to find any significant relationship with connectivity between the amygdala and mentalizing regions within a subset of participants (n = 56). However, given the limited sample size, the case-control study design, and the prevalence of subclinical anxiety in youth (Vasa et al., 2013), we encourage further studies with larger samples of participants with anxiety (with and without autism) and investigation of whether social anxiety traits are driving the relation we found between mentalizing-reward network connectivity and social reward and motivation.
Limitations and future directions
The current study focused on individual differences in processing interactive social reward, including self-report measures of social motivation and reward. However, the differences in social abilities for autistic youth may also stem from other factors, such as theory of mind, executive function, or sensory processing differences. Moreover, the interactive chat task used in the current study is text-based, and text-based communication may alleviate some of the difficulties autistic people experience in face-to-face contexts (Benford and Standen, 2009). The task also is highly structured, which significantly reduces uncertainty (Boulter et al., 2014). Thus, our highly structured, text-based interaction may have diminished some potential group differences. Additionally, given the exploratory nature of our post-hoc brain-behavioral analysis, we did not perform any multiple comparisons correction, and future work is needed to validate our findings. Lastly, although we controlled for age in our analysis, age may add more variance, as previous studies have suggested that autistic youth from different age cohorts may show different connectivity patterns.
Conclusion
In sum, our study demonstrated increased integration between regions associated with reward and mentalizing networks during social reward processing in a live reciprocal social interaction.
The greater task-based modulation of connectivity (i.e., greater reconfiguration) within the broader social reward circuitry may contribute to group and individual differences in how social-interactive reward is processed.
Whole-brain context-modulated FC analysis. A new set of regressors was created to estimate trial-specific activation (beta coefficients). The context-modulated FC was estimated by correlating beta coefficients between the seed and voxels across the whole brain.
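A minimal sketch of this beta-series correlation step (the array names, shapes, and the use of NumPy are our own assumptions for illustration, not details taken from the study): trial-wise beta estimates for the seed region are correlated with those of every voxel.

```python
import numpy as np

def beta_series_fc(seed_betas, voxel_betas):
    """Context-modulated FC sketch (beta-series correlation): Pearson r
    between the trial-wise beta series of a seed and every voxel.
    seed_betas: shape (n_trials,); voxel_betas: shape (n_trials, n_voxels).
    Assumes every series has nonzero variance."""
    s = (seed_betas - seed_betas.mean()) / seed_betas.std()
    v = (voxel_betas - voxel_betas.mean(axis=0)) / voxel_betas.std(axis=0)
    return (s[:, None] * v).mean(axis=0)  # one Pearson r per voxel

rng = np.random.default_rng(0)
seed = rng.normal(size=40)            # e.g., betas from 40 trials of one condition
voxels = rng.normal(size=(40, 5))     # 5 hypothetical voxels
r = beta_series_fc(seed, voxels)
print(r.shape)  # (5,)
```

Comparing the resulting r maps between conditions (e.g., peer vs. computer trials) then yields the context-modulated FC contrast described above.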
S1. Inclusion criteria
Youth were excluded if they were born premature (<34 weeks), non-native English speakers, had a history of concussion or head trauma, or had a full-scale IQ < 80 as assessed by the Kaufman Brief Intelligence Test, Second Edition (Kaufman, 2004). Parents of neurotypical youth reported no history of neurological or psychiatric disorders or first-degree relatives with autism or schizophrenia. Autistic youth were not excluded if parents reported a common co-occurring mental health condition, including attention-deficit/hyperactivity disorder (n = 19), obsessive-compulsive disorder (n = 1), or anxiety (n = 8). Autistic youth were eligible to participate only if they had a prior clinical diagnosis of autism which was then confirmed by our research team using the Autism Diagnostic Observation Schedule, 2nd edition (ADOS-2, Lord et al., 2012) via a licensed clinical psychologist or clinical psychology graduate student who was research-reliable in ADOS administration and coding. Of youth who completed the MRI scan, the following inclusion criteria were used: believed they were chatting with a real peer partner (see section 2.3 Post-scan interview), adequate task performance (i.e., responding to at least 2/3 of trials), and three or more usable runs (i.e., mean framewise displacement (FD) < 0.5 mm and maximum FD < 5.2 mm, corresponding to the diagonal length of a 3 mm isotropic cube).
The final sample (n = 86) overlaps with the sample used in McNaughton et al. (2023).
S2. Experiment
Participants were first informed whether the recipient was a peer (peer trial) or a computer (computer trial), both of which were equally likely. Then, the participants initiated an interaction by answering a Yes/No question about their likes and hobbies (e.g., "I like soccer").
After a jittered 2-6 sec (mean 3.5 sec) fixation period, participants received a response in the two-second reply phase. The responses consisted of engagement (e.g., "Me too!", indicating the peer agreed with the participant, or "Matched!", indicating that the computer randomly generated the same answer as the child), non-engagement (e.g., "I'm away" or "Disconnected"), and disagreement (e.g., "That's not what I picked").
For peer trials, the participants were told that the peer would sometimes be unable to respond because the peer was playing another game. For these disengaged trials, an away message was displayed as the peer response. Moreover, participants believed that the peer always saw their answer, and the peer would either respond if they were able to or not respond if they had been assigned to play another game. For computer trials, participants believed that the computer would randomly pick an answer following participants' answering the Yes/No question.
Moreover, participants were informed that the computer would sometimes lose the connection and be unable to generate an answer, resulting in disengaged trials. The order and timing of trials were predetermined, and we used four sets of stimuli to avoid consistently pairing the questions with reply types (e.g., the "I play soccer" trials did not always receive a "Me too" response).
S3. Post-scan enjoyment questionnaire
A post-scan questionnaire was completed by participants using a 1-5 Likert scale (1 = not at all, 5 = a lot). The items used in the analysis are listed below.
How much did you want to see his/her answer to your question?
How much did you want to see if the computer matched your answer?
How much did you like chatting with ______?
How much did you like it when you were just answering the computer?
S4. Preprocessing
A standardized preprocessing pipeline, fMRIPrep v1.4.1, was used to preprocess the imaging data (Esteban et al., 2019). The skull-stripped BOLD images underwent motion correction, slice timing correction, and susceptibility distortion correction, and were lastly resampled to MNI space. Automatic removal of motion artifacts using independent component analysis was performed on the preprocessed BOLD images after removal of non-steady-state volumes and spatial smoothing with an isotropic Gaussian kernel of 6 mm FWHM (full-width half-maximum). Lastly, the BOLD images were intensity normalized to have a mean intensity of 1,000, and a binary group mask at the threshold of 0.9 probability was applied. Table S1. Clusters with significant main effect and group differences with MNI coordinates of their center of mass (voxel-wise p-value = 0.001, cluster-wise threshold = 124 voxels).
| 2023-06-12T13:09:22.283Z | 2023-06-07T00:00:00.000 | {
"year": 2023,
"sha1": "5c344f2ae7602ce0c20556aa765456e22ad54cb9",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "7733a971209c8ee0d1e79e6621885b0778f42e04",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
20935799 | pes2o/s2orc | v3-fos-license | Zero Pearson Coefficient for Strongly Correlated Growing Trees
We obtained Pearson's coefficient of strongly correlated recursive networks growing by preferential attachment of every new vertex by $m$ edges. We found that the Pearson coefficient is exactly zero in the infinite network limit for the recursive trees ($m=1$). If the number of connections of new vertices exceeds one ($m>1$), then the Pearson coefficient in the infinite networks equals zero only when the degree distribution exponent $\gamma$ does not exceed 4. We calculated the Pearson coefficient for finite networks and observed a slow, power-law like approach to an infinite network limit. Our findings indicate that Pearson's coefficient strongly depends on size and details of networks, which makes this characteristic virtually useless for quantitative comparison of different networks.
The Pearson coefficient r is used as an integral characteristic of structural correlations in a network. Pearson's coefficient characterizes pairwise correlations between degrees of the nearest neighboring vertices in networks. Some observable quantities in correlated networks (e.g., the size of a giant connected component near the point of its emergence) are directly expressed in terms of this coefficient [1,2]. The Pearson coefficient is the normalized correlation function of the degrees of the nearest neighbour vertices [3,4,5]. The coefficient is the ratio

$$r = \frac{\langle jk \rangle_e - \langle k \rangle_e^2}{\langle k^2 \rangle_e - \langle k \rangle_e^2}, \qquad (1)$$

where $\langle jk \rangle_e$ is the average product of the degrees $j$ and $k$ of the end vertices of an edge, $\langle k \rangle_e = \langle k^2 \rangle / \langle k \rangle$ is the average degree of an end vertex of an edge, $\langle k^2 \rangle_e = \langle k^3 \rangle / \langle k \rangle$ is the average square of the degree of an end vertex of an edge [6], and the denominator $\langle k^2 \rangle_e - \langle k \rangle_e^2$ is for normalization. Here $\langle \ldots \rangle$ and $\langle \ldots \rangle_e$ denote averaging over vertices and edges, respectively [see Eqs. (16), (18), and (20) below]. Pearson's coefficient can be positive (in average, assortative mixing of the nearest neighbors' degrees) or negative (disassortative mixing) and takes values in the range from −1 to 1.
The Pearson coefficient r is a convolution of the joint distribution of nearest neighbor degrees, $P(j,k) \equiv e_{jk}$. This joint distribution is the probability that the ends of a randomly chosen edge have degrees $j$ and $k$, $\sum_{jk} e_{jk} = 1$. Being an integral characteristic of degree-degree correlations, the Pearson coefficient misses details of these correlations, compared to $e_{jk}$ [7,8,9]. Despite this fact, Pearson's coefficient is widely used for characterization and comparison of real-world networks [4]. Note that the compared real networks have different sizes. In this paper we show that since r is a markedly size dependent quantity, Pearson's coefficient may be used for comparison of networks only with a very critical attitude. We calculate Pearson's coefficient for the simplest growing complex networks, namely recursive random networks with preferential attachment of new vertices. We describe the size dependence of r in these networks, Fig. 1. Remarkably, for all infinite recursive trees of this kind, we find that the Pearson coefficient is exactly zero at any value of the degree distribution exponent γ, although all these networks are strongly correlated. The statement that Pearson's coefficient is zero in the range γ > 4 is essentially non-trivial. The point is that zero value of the Pearson coefficient in the range γ ≤ 4, where the third moment of the degree distribution diverges, is clear. Indeed, in this region, the denominator in definition (1) diverges in the infinite network limit [10]. This is not the case at γ > 4, and for zero r, the numerator in Eq. (1) must be zero. In this range, zero value of Pearson's coefficient means a surprising exact mutual compensation of different degree-degree correlations in these trees. There are two opposing kinds of degree-degree correlations in complex networks: assortative and disassortative.
Here assortativity is a tendency of high degree vertices to have high degree neighbors and low-degree vertices to have low degree neighbors. In contrast, disassortative mixing means neighborhood of vertices with contrasting (low and high) degrees. These tendencies may be opposing in different ranges of degrees, see discussion in Sec. III, Fig. 4. That is, assortative and disassortative mixing may coexist. In particular, this is the case for random recursive trees. Our results show that in these growing networks, the different kinds of correlations completely compensate each other in the infinite network limit.
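Definition (1) can be made concrete with a short sketch (ours; the function and variable names are not from the paper): given the edge list of an undirected graph, r is computed by averaging over edge ends. Applied to a star, it reproduces the value r = −1 discussed for the star counter-example in Sec. IV.

```python
from collections import Counter

def pearson_degree_coefficient(edges):
    """Pearson degree-degree coefficient, Eq. (1):
    r = (<jk>_e - <k>_e^2) / (<k^2>_e - <k>_e^2),
    with averages taken over the ends of edges."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # Each undirected edge contributes both ordered pairs (j, k) and (k, j),
    # making the edge-end averages symmetric in the two ends.
    ends = [(deg[u], deg[v]) for u, v in edges] + [(deg[v], deg[u]) for u, v in edges]
    n = len(ends)
    jk_e = sum(j * k for j, k in ends) / n
    k_e = sum(j for j, _ in ends) / n
    k2_e = sum(j * j for j, _ in ends) / n
    return (jk_e - k_e ** 2) / (k2_e - k_e ** 2)

# A star on 5 vertices: hub 0 connected to 4 leaves.
star = [(0, i) for i in range(1, 5)]
print(pearson_degree_coefficient(star))  # -1.0
```

The denominator vanishes only for regular graphs, where all end degrees coincide and r is undefined.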
I. MAIN RESULTS
We study recursive networks in which each new vertex is attached to m existing ones chosen with probability proportional to a linear function of vertex degree, $k + A \equiv k + am$. This rule generates scale-free correlated networks with a degree distribution exponent $\gamma = 3 + A/m = 3 + a$.
In the infinite network limit $t \to \infty$, where t is the number of vertices in a network, Pearson's coefficient approaches a limiting value $r_\infty$. For γ > 4, we find an explicit expression for $r_\infty$ which shows that for m = 1 (i.e., for random recursive trees), Pearson's coefficient $r_\infty = 0$ for any value of γ.
One can also see that $r_\infty(\gamma = 4) = 0$ for any m; an explicit expression also follows for uniformly random attachment, i.e., γ → ∞. Figure 1, obtained by simulation of the networks of $10^3$, $10^4$, $10^5$, and $10^6$ vertices (the number of runs for each point was between 50 and 500 for γ < 3 and between $5 \times 10^3$ and $10^4$ for γ > 3), demonstrates the size dependence of Pearson's coefficient in these networks. This figure shows that even for large networks, the deviations from the limiting infinite network values are significant. We find the asymptotic size dependences of $\delta r(t) = r(t) - r_\infty$, where γ is the exponent of the degree distribution. Here we show the signs of the asymptotes but ignore their factors. The positive sign of the asymptotes at m > 1, 3 < γ < 4 and γ > 4 means that r(t) approaches the infinite size limit $r_\infty$ from above. In this situation (m > 1, γ > 3), Pearson's coefficient varies with size non-monotonously: it first increases and then diminishes to $r_\infty$ (see the second panel of Fig. 1). Introducing the exponent z via $\delta r(t) \propto t^{-z}$ for large network sizes t, we arrive at the dependences z(γ) shown in Fig. 2. Note that z < 1, so the infinite network limit is approached slowly. The relaxation to the infinite network values $r_\infty$ is especially slow if exponent γ is close to 2 (at any m) or to 4 (only if m > 1). In the specific case of m > 1, γ = 4, we obtain a logarithmic relaxation. We measured the dependence of exponent z on γ in the simulated networks. The result, shown in Fig. 3, demonstrates an agreement with the above analytical predictions.
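The growth rule and the finite-size measurement of r(t) can be sketched as follows for m = 1 (a minimal, self-contained illustration with a naive O(t) attachment step; the names are ours, and the paper's simulations are of course far larger):

```python
import random

def grow_recursive_tree(t, a=0.0, seed=None):
    """Grow a random recursive tree of t vertices: each new vertex attaches
    to one existing vertex (m = 1) chosen with probability proportional to
    degree + a, so the degree-distribution exponent is gamma = 3 + a."""
    rng = random.Random(seed)
    edges = [(0, 1)]
    deg = [1, 1]
    for new in range(2, t):
        target = rng.choices(range(new), weights=[d + a for d in deg])[0]
        edges.append((target, new))
        deg[target] += 1
        deg.append(1)
    return edges, deg

def pearson_r(edges, deg):
    """Eq. (1): r = (<jk>_e - <k>_e^2) / (<k^2>_e - <k>_e^2), edge-end averages."""
    ends = [(deg[u], deg[v]) for u, v in edges] + [(deg[v], deg[u]) for u, v in edges]
    n = len(ends)
    jk = sum(j * k for j, k in ends) / n
    k1 = sum(j for j, _ in ends) / n
    k2 = sum(j * j for j, _ in ends) / n
    return (jk - k1 * k1) / (k2 - k1 * k1)

for t in (1000, 3000):
    edges, deg = grow_recursive_tree(t, a=1.0, seed=7)  # gamma = 4 trees
    print(t, round(pearson_r(edges, deg), 3))  # finite-size r(t)
```

Averaging such measurements over many runs and sizes is what produces curves like those in Figs. 1-3; single runs at these small sizes are dominated by finite-size fluctuations.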
II. PEARSON COEFFICIENT OF INFINITE NETWORKS
Let us derive the Pearson coefficient (5) of the infinite recursive networks. The derivation is based on rate equations for the average number $N_k(t)$ of vertices of degree k in a recursive network at time t and the average number $E_{jk}(t)$ of edges connecting vertices of degrees j and k. For vertices, $p_k(t) \equiv P(k,t) = N_k(t)/t$ is the degree distribution of the network of size t.
For edges, $e_{jk}(t) = E_{jk}(t)/(2mt)$ is the degree-degree distribution for edges. Using the standard rate or master equation approaches [11,12,13] to this kind of networks (specifically, to the recursive networks growing due to the preferential attachment mechanism), we write rate equations for $N_k(t)$ and $E_{jk}(t)$ (note that the mean vertex degree in these networks is $\langle k \rangle = 2m$); the equations for the case k = m are written out separately.
Here we used the fact that the probability to attach a new vertex to a vertex i of degree $k_i$ in this model is proportional to $k_i + am$. Assuming a stationary regime in the limit t → ∞ [$N_k(t) \simeq t p_k$, $E_{jk}(t) \simeq 2mt e_{jk}$], we reduce these equations to the stationary forms (14) and, for example, $2m(k + m + 2 + 2ma + a)\,e_{mk} = (k - 1 + ma)\,p_{k-1}$. To obtain the Pearson coefficient, see definition (1), we must find $\langle k^2 \rangle$, $\langle k^3 \rangle$, and $\langle jk \rangle_e$, where $\langle jk \rangle_e = \sum_{jk} jk\, e_{jk}$.
Multiplying both sides of the second equation of the system (14) by $k^2$ and $k^3$ and summing over k, and taking into account a relation that follows from the first equation of the system (14), we obtain $\langle k^2 \rangle$ and $\langle k^3 \rangle$, respectively.
To get $\langle jk \rangle_e$, we multiply both sides of the second equation of the system (15) by jk and sum them over j and k. We also take into account a general equality, which gives $\langle k \rangle_e = \frac{1}{2a}\,(2 + 5ma + a + 2m)$.
III. SIZE DEPENDENCE
Equations (11) and (12) allow one to derive the full size dependence of the Pearson coefficient. Instead of these cumbersome straightforward calculations, we obtain the asymptotic behavior of r(t) in an easier way, using known results for the asymptotics of degree-degree correlations in these networks [11,14,15,16]. The derivation is based on expressing the Pearson coefficient (25) in terms of the average vertex degree $k_{nn}(k)$ of the nearest neighbors of the vertices of degree k; this expression directly follows from definition (1). The leading asymptotics of $k_{nn}(k)$ can be obtained by using the known exact asymptotics of $n_{kl}$ for these recursive networks [11]. Here $n_{kl}$ is the probability that a descendant vertex of degree k is connected to an ascendant vertex of degree l. This quantity satisfies the following relations: $\sum_l n_{kl} = p_k$, $\sum_l (n_{kl} + n_{lk}) = k p_k$, $\sum_l l\,(n_{kl} + n_{lk}) = k p_k\, k_{nn}(k)$.
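For a concrete reading of $k_{nn}(k)$, here is a small sketch (ours, not from the paper) that estimates it directly from an edge list:

```python
from collections import defaultdict

def mean_neighbor_degree(edges):
    """k_nn(k): average degree of the nearest neighbors of degree-k vertices,
    averaged over all vertices of degree k (sketch)."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    deg = {v: len(ns) for v, ns in adj.items()}
    sums = defaultdict(float)
    counts = defaultdict(int)
    for v, ns in adj.items():
        k = deg[v]
        sums[k] += sum(deg[w] for w in ns) / k  # mean neighbor degree of v
        counts[k] += 1
    return {k: sums[k] / counts[k] for k in sums}

# Path on 4 vertices: the end vertices (degree 1) each have one degree-2
# neighbor; the middle vertices (degree 2) have neighbors of degrees 1 and 2.
print(mean_neighbor_degree([(0, 1), (1, 2), (2, 3)]))  # {1: 2.0, 2: 1.5}
```

A decreasing $k_{nn}(k)$ signals disassortative mixing, an increasing one assortative mixing; the paper's point is that in these trees the two tendencies balance exactly in r.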
At m = 1, the exact asymptotics of $n_{kl}$ are given in Ref. [11]. For γ > 3 and any m, $k_{nn}(k,t)$ approaches a stationary limit as the network grows. Figure 4 demonstrates this relaxation for two values of the degree distribution exponent γ, namely, for γ = 3.1053 and γ = 4 in the case of m = 1. We can approximate the deviation from the infinite network limit in terms of the time dependent cutoff $k_{cut} = k_{cut}(t)$ of the degree distribution. In these networks, $k_{cut}(t) \sim t^{1/(\gamma-1)}$. Substituting Eq. (27) into Eq. (28) results in the asymptotics (29), valid at large degrees $k < k_{cut}$ for the networks with γ > 3.
Here $c_1$ and $c_2$ are constants depending on γ, which we do not calculate. Similarly, for 2 < γ < 3, where $k_{nn}(k,t)$ diverges as t → ∞ at any m, we can write the asymptotic estimate $k_{nn}(k,t)\,k p_k \sim \int_m^{k_{cut}} dl\; l\,(n_{kl} + n_{lk})$ (30). Substituting Eq. (27) into Eq. (30) we obtain the asymptotics (31), valid at large degrees $k < k_{cut}$ for the networks with 2 < γ < 3.
Here $c_3$ and $c_4$ are some constants [17].
We substitute the asymptotics (29) and (31) into expression (25) for the Pearson coefficient and take into account the leading terms in the numerator and the denominator. The combinations of these leading terms are different in different regions of γ and m, since the quantities $k_{nn}(k,t)$, $\langle k^2 \rangle$, and $\langle k^3 \rangle$ in expression (25) change their asymptotic behavior at two special points, namely, γ = 3 and γ = 4. In particular, at these points (γ = 3 and 4) the second and, respectively, the third moments of the degree distribution become divergent. For example, if 2 < γ < 3 at any m, relation (25) with the substituted asymptotics (29) takes a form with constants $c'_1$, $c'_2$, and $c'_3$; this leads to the result (33), where $c''_1$, $c''_2$, and $c''_3$ are constants. In a similar way, we derive the other asymptotics listed in Eqs. (5) and (6).
Interestingly, both terms in Eqs. (29) and (31) give contributions of the same order of magnitude to r(t), see the numerator of Eq. (33). Note that we suppose that for m > 1 the form of the asymptotics of $k_{nn}(k,t)$ is the same as in Eqs. (29) and (31). To verify our assumption, we inspected the corresponding results for $k_{nn}(k,t)$ in Ref. [15] and found that the asymptotic behavior should be similar at different m (apart from numerical coefficients) if exponent γ is fixed. In addition, we checked that results (5) can also be derived by using the asymptotics of $k_{nn}(k,t)$ from Ref. [15]. This confirms our conclusions about the asymptotics of r(t).
IV. DISCUSSION AND CONCLUSIONS
Our result, namely zero Pearson coefficient of random recursive trees at any γ > 2, naturally leads to the following questions. What is the class of trees that have zero Pearson coefficient? Is Pearson's coefficient zero for any infinite recursive tree? At present, we cannot answer the first question. As for the second question, the answer is negative. Indeed, as a counter-example, we present the simplest infinite recursive tree with a non-zero Pearson coefficient. This is a star, which is a tree of t > 2 vertices, including t − 1 leaves and the hub of degree t − 1. For a star of any size, clearly, r = −1.
Thus we have studied the Pearson coefficient in strongly correlated growing networks. They form a representative class of networks with strong structural correlations including pairwise correlations between degrees of the nearest-neighbor vertices. Despite these correlations, we have found that in a wide range of infinite correlated networks the Pearson coefficient approaches zero. For any infinite random recursive tree whose growth is driven by arbitrary linear preferential attachment of new vertices, we observed zero Pearson coefficient. These networks include random recursive trees with rapidly decaying and even exponential degree distributions, where the third moment of the degree distribution is finite. So here zero value of Pearson's coefficient demands zero correlation function in the numerator of definition (1). This surprising equality to zero indicates an exact mutual compensation of assortative and disassortative degree-degree correlations. In this respect, recursive trees are a very special case of random recursive networks [18].
We have investigated the size dependence of Pearson's coefficient in the growing networks. We have found that the size effect is significant even for very large networks. We have shown that a growing network during its evolution may demonstrate essential and even non-monotonous variation of Pearson's coefficient. Due to this marked size dependence, it is hardly feasible to use this integral characteristic of correlations for quantitative comparison of different real-world networks. Instead, for this purpose, one has to use more informative characteristics, for example $k_{nn}(k)$. | 2009-11-22T20:14:37.000Z | 2009-11-22T00:00:00.000 | {
Modern Christianity, Part of the Cultural Wars. The Challenge of a Visual Culture
The article focuses on the changing landscape of modern Christianity. It does so by analysing the use of the notion of image in early Christianity and in later eras. It appears that Christianity created an ambiguous concept of the notion of image, reducing it in the end to a void image. This development caused a separation between theology and culture. It is a separation which eventually led to a Christianity that is hostile to modern culture and seeks only to reinforce its own identity.
The Still-Changing Landscape of Religion
Looking at the situation of religion in Europe, the actual landscape is rapidly changing and difficult to picture in a concise way.1 Different perspectives remain contradictory and are not really compatible. These different views are fiercely debated, yet they do not result in a real consensus on the nature of the landscape we are living in. One of the most controversial issues remains the narrative of secularization, which has long been contested in many ways. On the one hand, institutionalized Christianity is still decreasing and deconfessionalization is an ongoing process that is there to stay.2 In this sense, disenchantment, though understood in a different way than was the case with Max Weber,3 is indeed and will remain the keyword.4 On the other hand, religion seems to transform itself and can no longer be identified with the structures we were accustomed to. Moreover, religion has made a kind of comeback,5 yet alia et aliter; we cannot speak of a return of religion.6 This contradictory phenomenon gave rise to different interpretations, ranging from authors sharing the view of Marcel Gauchet7 to Taylor,8 including the still highly important debates between Löwith and Blumenberg.9 The main point remains the question of whether our secularized era indeed represents a new epoch in which religion no longer plays a decisive role, or whether our time is still a long-term period in which these traditional religious structures survive, though of course bearing new names.10 Is it, so to say, a matter of supersessionism, or is it indeed the 'departure of religion' as a characterization of the disenchantment of the world?11 The question is still open but, meanwhile, developments continue and we observe many a phenomenon in which religion, be it in its traditional form, be it in a more modern form that has adopted the rules of modern politics,12 plays an important role. There is a new battle over values, often considered to be Christian.13 There is also a battle over identity, a battle about ethics, a battle about sexuality. In short, we are confronted with cultural wars and, in all these issues, religion is indeed paramount. Governments, while of course respecting the separation between church and state, try to find new ways of dealing with religion.14 The same goes for the churches, which are fiercely trying to establish new relations with society. However, these new relations are far from clear-cut. Churches are deeply divided on matters such as sexuality, gender, and the relation with Islam, and in many churches the influence of a hardening orthodoxy is felt. To sum up, the religious landscape is indeed far from easy to understand and interpret.
1 One of the fascinating features developing in the U.S. is the overwhelming growth of Independent Christian Groups. Cp. (Christerson and Flory 2017).
2 Cp. (van Rooden 2010, pp. 121-34).
3 Cp. François A. Isambert (1986, pp. 83-103), who writes: 'It follows from this that the enchantment, the bewitchment of the world needed to be broken by this realization. Entzauberung changes sign. It is a loss, certainly, but a loss of illusion. The great religions have, each in its own way, disenchanted the world, and that is why the option for each of them remains legitimate, on the condition that the choice be perfectly informed and that the adherent know in particular that the fundamental act he will be led to perform, and which will give meaning to his life, is the sacrifice of the intellect. But to the one who makes the opposite choice is offered the meta-ethics of intellectual courage and of the spirit of responsibility, which can animate the synthesis of science and politics' (p. 100). This may be a kind of binary we no longer think acceptable, yet it clearly shows the way 'disenchantment' has changed since the times of Weber.
However, looking at the roots of these phenomena, at their history and their birth, most scholars seem to agree that the period of nominalism is an important one and that the Reformation has played a decisive role in the dynamics we still call 'secularization.' In this article, I therefore intend to focus on a particular aspect of religion, which is the relation between religion and identity, leading to a specific interpretation of the notion of image. I will discuss, too, how the theology of image still contributes to the changing landscape of religion. I shall limit myself to the Christian context, remaining thus within the framework of the main studies on secularization.
Christian Identity as a Conviction
When Christianity came into existence, it developed a distinctiveness that only gradually turned into an 'identity.'15 Though Harnack stated that there was a kind of original and essential belief that created the Christian identity from its earliest days, modern research has long since abandoned this thesis. Harnack's vision was an interpretation of early Christianity according to which certain characteristics created Christianity's essential identity, namely the idea of the coming of the kingdom of God,16 the notion of ethics based on love and, finally, the notion of an eternal soul that learned that we are all children of God the Father.17 Subsequently, this original Christianity would have lost its soul once it started to merge with Hellenism, in particular Greek philosophy. This influence spoiled a good deal of the original Christianity, and it was only centuries later that the Reformation was able to redress the situation and return to the essentials of Christianity. However, the notion of such essentials implied the notion of orthodoxy as a normative framework, and this gave birth to the idea that, chronologically speaking, there was initially orthodoxy and that heterodoxy produced itself only afterwards, taking the form of heresy. This idea was challenged by W. Bauer,18 who held that heterodoxy was the starting point of the early church, and that it was this diversity that prompted the church to develop orthodoxy by proclaiming the canon, the creeds and the structure of the church. Though Bauer's thesis was contested, it had a great influence and made it clear that the essentialist view was no longer acceptable. What became evident was that early Christianity long remained characterized by a plurality of currents.19 Yet, these general traits do not represent our main interest.
7 Regarding Gauchet, see (Cloots [2015] 2016).
8 For an overview of the history of the narrative of secularization, see (Vanheeswijck 2009, pp. 3-26). See also (McKenzie 2017, pp. 3-28). An alternative is proposed by (Steinvorth 2017). See also (P. Harrison 2017; Pollack 2014). A particularly interesting view is to be found in (Donegani 2015). The same goes also for the position of Vattimo; cp. (Harris 2017). Hence, authors such as Ian Hunter maintain that secularization versus religion are concepts used by rival cultural-political factions which in a way can be seen as the continuation of older cultural wars. Cp. (Hunter 2015, 2017). This is a vision which we also come across in Talal Asad, who defines secularization as a concept that is far from neutral but as a governing structure. Cp. (Asad 2003). An overview of the debate between Asad and Casanova can be found in Sorin Gog, After-life, Politics and Religious Governmentalisation: A Critique of the Post-Structuralist Theory, in (Schüler 2016, pp. 127-44).
9 Cp. (Fincke 1985, pp. 127-52). Regarding Blumenberg, cp. (Kirke 2019). See also (Kervégan 2007, pp. 107-17).
10 Cp. Enda McCaffrey (2009), who contends that the French State has replaced the Church and is in fact 'the creation of a civic theocracy.'
11 A beautiful book defending this point of view is of course Gregory (2012).
12 A good example of this new form is the French movement La manif pour tous, which can be identified with the larger part of traditional French Catholic currents. Cp. (Béraud and Portier 2015).
13 A clear example of this battle was the debate on a reference to the Christian tradition in the preamble of the European Constitution. Since then, the term 'Judeo-Christianity' has gained a certain importance, in particular in right-wing politics. Cp. (Topolski 2016, pp. 267-84; Teixidor 2006).
14 The French President Macron recently stated expressly that he wanted to reestablish the relation with the churches. Moreover, he has also proposed that the law on the separation between church and state, of 1905, be revised: lecture of 9 April 2018, at the Collège des Bernardins, Paris, 2018.
15 One of the best introductions to the question of early Christian identity is Lieu (2002; 2017, pp. 294-308). See also Christoph Markschies, who considers early Christianity as one of the products on the spiritual marketplace of Antiquity. Cp. . For an approach taking a closer look at the Jewish side of the story of the development of Christian identity, see (Mimouni 2010).
What is striking, in fact, is that Harnack, when he speaks about the fact that the soul has an eternal life and that it has the Father as its origin, does not refer to the notion of image. He nowhere refers to Genesis, where humans are said to be made in the image of God and hence can be said to be His children. What he writes is: 'Whoever may say "my Father" to the Being that governs heaven and earth is thereby raised above heaven and earth, and himself has a worth that is higher than the fabric of the world.'20 He focuses on the New Testament and, in his essentialist view, the notion of man as the image of God played no role. He does not consider it to be part of the core identity of a Christian. What belonged to the Christian identity, on the contrary, were convictions and doctrines, not something that was given by nature, a quality bestowed by God. Therefore, what we can observe in this essentialist approach is that it defines the right opinions and convictions as the purest base of one's identity. Indeed, Harnack saw in 19th-century German Protestantism something akin to the original Christian identity as it was before it became 'spoiled' by Hellenism.
Christian Identity as Anthropology
Hence, important aspects, such as the anthropology developed in early Christianity, were not brought to the fore and were left aside. However, the notion of man being created in the image of God was of paramount importance. This notion was already present in the letters of Saint Paul, as George H. van Kooten has amply demonstrated,21 but it only became a highly relevant aspect once the doctrine of the Trinity had been fully developed. The Cappadocian Fathers and Greek Orthodoxy22 insisted on Christian anthropology as a doctrine holding that mankind is created in the image and likeness of God, a concept that led to the idea of deification.23 Theologians such as Clement24 and Chrysostom25 were convinced that man, even after the Fall, is still in possession of a free will and the ability to reach out to deification. Based on the fact that humans are created by God in His image, there had to be a possibility to become like God and to grasp, in some way, the divine reality. Of course, this could only be a partial grasping, and hence the notion of participation became important. Nevertheless, this anthropology was an optimistic one. It was influenced by Plato, Neoplatonism, and the ideas of Origen, and it pictured the relation between God and humans as a relation that was not between equals, but that certainly was far from describing an essential incompatibility between God and humans. Anthropology became, thanks to the influence of Hellenism, a concept that allowed humans to look at the divine light, which made Christianity a religion that was not defined by certain convictions and creeds but by an anthropology. This insistence on anthropology within the framework of the doctrine of the Trinity continued in the West, and Augustine in particular made this doctrine of humans as the image of God one of the hallmarks of his theology. Yet, it is also with Augustine that the notion of the image becomes characterized by a real ambiguity.
His theology marks a shift in which the optimism of the Eastern Fathers disappears, and his prolonged controversy with the Pelagians can also be considered as a battle over a certain anthropology. It is to this ambiguity, which Augustine introduced, that we shall now turn. From there on, the ideas of the nominalists will loom. Their ideas will subsequently determine some of the characteristics of the transformation Christian religion undergoes in the present era. Surprisingly, we will see that the intuitions of Harnack are about to return. Perhaps not his ideas as such on what the essence of Christianity was, but the mere fact that Christian religion becomes, once again, defined in terms of an identity having specific characteristics. It is the return of essentialism in the form of an identity policy that is at odds with the idea of Christianity expressing a certain anthropology common to all humans. This is no longer the case. Is it not true that once one takes a closer look at modern Christianity, one will be struck by the fact that orthodoxy is increasingly prevailing, and that it is this orthodoxy in particular that has changed the idea of a rich and fulfilling anthropology into the notion of humans as void images? Images, furthermore, that are no longer capable of reflecting a divine presence, and that have lost their relation with the original of which they are an image. So, let us see what Augustine tells us about the image.
Augustine and His Ambiguity in an Earlier Phase
Augustine dwells widely on the notion of image, and he does so in particular within the framework of the Trinitarian doctrine. However, he started his main work on the Trinity only in 405, when he had already studied the notion of image in detail in his earlier works. What were his main ideas on image before he started with the De Trinitate? Already in his early works, to begin with the Soliloquia,26 Augustine conceives the image as the imprint of the divine on the human being. Man has been created, in biblical terms, in the image of God, which means that humans bear this image in their being; it is part of them in the most substantial way. In their groundbreaking research, Gerald Boersma on the one hand, and Laela Zwollo on the other, have pointed out that Augustine mainly follows the Plotinian scheme of the image. In this philosophy, the image is a reality which of course is different from the highest being as well as inferior to it, whilst nevertheless remaining irresistibly attracted to this highest being.27 In Plotinus' view, then, there is a moment when the image will strive for a unity, a reunion, with the original. This endeavour is indeed the desire for a reunion, a return to the origin. Boersma subsequently explains that, though the image strives for a reunion, a conversion so to say, and though it is capable of doing so in the opinion of Plotinus, Augustine does not share this optimistic view. On the contrary, in his view the image is not capable of realizing this return by its own forces. It needs the help of divine grace, and it is this aspect in particular which comes down to stating that a conversion to God is only possible thanks to God. Hence, it is: to God by God. It cannot be: to God thanks to the fact that humans are made in His image. Therefore, it is here that we have the parting of the ways. The image is no longer the way to the original; it is the vestige of an original that can no longer be attained.
The question, however, is why Augustine in fact turned to a different idea of the capacities of the image than the vision he held in an earlier phase. Probably because he was strongly aware of the fact that an image is not only something that is inferior to the highest being, but also something that is partially a false image.28 It is a similitude, and a similitude always represents something true and something false. The reason for this defective character is the fact that an image is characterized by having a certain resemblance and likeness to the original. Yet, likeness implies that the image is partially true, and partially not. Being partially false, an image is definitely not able to overcome its own defective nature by its own capacities; it can only do so thanks to a conversion. Yet, a conversion implied for Augustine the help and assistance of the highest being Himself: God. To conclude, an image represents a certain similitude and hence it has no other possibility than being true and false at the same time. It shows simultaneously a certain presence and an absence,29 which made Augustine think that the image was not capable of uniting itself with the highest being on the strength of its own forces and qualities. Yet, this argument only concerns the ontological aspect, which we come across in the Soliloquia. Included in this ontological approach is the perspective that images mostly concern the empirical world, which is the contrary of the spiritual world we encounter in the divine. Images mostly represent a literal and visible reality which, in itself, is perhaps not physical or visible. This leads to an ontological inferiority of the image, which will become the first step towards a certain ambiguity of the image. This physical and visible aspect of images is mentioned by Augustine when he speaks about the images one sees in one's dreams and which concern a very concrete reality.30 He also refers to it with regard to the human memory, on which he dwells in Book X of the Confessiones.31 Memories, of course, are mostly images of things we have once lived and seen and which we therefore have to consider as physical realities.32 Consequently, these images represent an ontological inferiority. However, what is really different and new compared to Plotinus is the fact that, to Augustine, images not only represent an ontological inferiority, but also a moral one. Already in De Vera Religione, as was shown by Gerald Boersma, Augustine insists on this moral inferiority of the image, and he continues to do so in the Confessiones. He rather bluntly states that humans transform the glory of God into the glory of the corruptible image of God.33 The only possibility, then, to overcome this double inferiority is through the grace of God. Humans have to be reborn thanks to this divine grace.34
Augustinian Ambiguity in a Later Phase
So, these are the basic lines of his argumentation, which Augustine had already developed before starting the De Trinitate. He bases himself on the Plotinian idea of the image, but he breaks away from this Neoplatonic concept by insisting on the fact that the image is not capable of returning to God by its own forces. It can only be unified with God if grace intervenes, and it is only by grace that God can be attained. Secondly, the inferiority of the image is defined in an ontological and a moral way. Once again, all this comes down to the idea that an image always has an ambivalent character. It reflects a reality and, in this respect, it certainly has its positive qualities. Being a reflection, it is part of the reality it reflects, and it cannot be separated from it. At the same time, however, there is this ontological and moral inferiority which also belongs to the image and that reveals its negative qualities.
28 Cp. Sol. II,10,18: unde vera pictura esset, si falsus equus non esset? unde in speculo vera hominis imago, si non falsus homo? This example clearly shows that indeed the image must be false compared to the original, though even the opposite may be true: the picture of a horse can be true, still it is not a real horse.
29 Cp. (Bochet 2009, pp. 249-69).
It is a contrast we come across in the concept of humans that are created in the image of God without being the image of God. Were they the image of God, there would exist the possibility of an equality between the image and its divine origin, as in the case of the Father and the Son. Indeed, the Father and the Son do share a substantial equality, which is inconceivable in the case of the relation between humans and God. Following this trail, Augustine will continue to develop his ideas on the image in the De Trinitate. The twofold character of the image remains strongly present, yet there is still another element added: the element of self-knowledge. Self-knowledge teaches us that we cannot know ourselves without knowing God, and this awareness starts with the idea that we are created in the image of God. If we have been created in the image of God, we certainly have to understand this to mean we have been created in the image of the Trinity. How, then, does this trinitarian approach further our understanding of the image? Two examples seem to be the most important to Augustine, namely love and memory.
One who loves will realize that love always includes someone else. Even if one loves oneself, there remains this triad of the lover, the beloved and the love by which they are bound together. Yet, such an appraisal is only possible if we understand the nature of the Trinity and its dynamics. Understanding the nature of the Trinity, however, also implies that we have to acknowledge that being created in the image of God means being created in the image of the Trinity. So, the example of love allows us to attain some sort of self-knowledge, but this self-knowledge cannot be separated from the knowledge of the divine. The same goes for our memory. If I remember something, it cannot be something other than my own memories.35 Then again, there is the one who remembers and there is also his memory, as well as the relation between memory and the one who remembers. Moreover, if I have a closer look at these memories, they represent not only my memories; they also represent the memory of the divine. This leads to the conclusion that the image has not only an ontological and moral status, but that it also functions as a knowledge tool. Both aspects, the ontological and the moral, stress the incarnational aspect of this doctrine. It allows us to know God thanks to the fact that we can know ourselves; but this knowing ourselves comes down to realizing that there is a trinitarian image in the soul. Self-knowledge and knowledge of God cannot be disentangled. Nevertheless, even when I have come to know God, and in that sense have returned to God, there still will be this ontological inferiority. The same goes for the moral aspect. Augustine repeatedly insists in the De Trinitate on the fact that the soul, after the Fall, is morally corrupted and that this is due to its own cupidity.36 This cupidity engenders the fact that the soul identifies itself with physical images by a mere fascination and perverted love for these physical images.37 So, what we have in the end is the same picture as in the early works of Augustine. The image has a twofold character: it relates us substantially to God, but at the same time it breaks down this substantial relation. Moreover, this substantial inferiority cannot be overcome, at least not by humans themselves; there is a substantial need for grace. Put differently, the image suffers from defects that can only be traced back to the Fall and which, up to a certain extent, reduce it to a void image. It can no longer accomplish its role as the inner reference to that from which it originates, and it suffers from a serious ambiguity. On the one hand, it is the proof of the divine presence in human existence, the ultimate reference to the highest One. On the other hand, it is the proof of the impossibility, due to the Fall, of being present with and in the divine. It has become a void image unless divine grace restores and reforms it, which is quite a tragic conclusion, given that the image was meant to be the reference to the original and hence had a very positive meaning. How was this tradition then handed over to the Middle Ages, in particular to nominalism, and how did the nominalists interpret this ambiguity?38
35 De Trin. IX,2,2; X,3,5.
36 De Trin. IX,5,7: Multa enim per cupiditatem prauam, tanquam sui sit oblita, sic agit.
37 De Trin. X,6,8: Errat autem mens, cum si istis imaginibus tanto amore conjugit.
Ockham and the Lost Referential Character of the Image
We will deal with this topic by focusing exclusively on William of Ockham,39 Ockham being the sharpest theologian of his time, up to a point where things become poignant. He delves into this matter in his Ord. dist. II, q 10, which is dedicated to the question of imprints and images. Yet, before entering into this matter, we should realize that Ockham distinguished between two kinds of knowledge: intuitive and abstract. The latter, however, cannot be reached without first having had some intuitive knowledge. You cannot recognize a tree as a tree if you have never seen a tree, and if you want to obtain abstract knowledge of something new, you will have to memorize what you have seen before in order to grasp this abstract knowledge. Someone who has never seen Hercules will not recognize a statue of this hero.40 Yet, even this abstract knowledge has to be preceded by intuitive knowledge. This being said, a question remains: is there any difference between imago and vestigium? Indeed, there is. Vestiges are traces that are always caused by that of which they are a vestige, whereas this does not apply to images.41 We can think of an image of Hercules that has not been produced by Hercules himself, but by a sculptor. This does not apply to vestiges; the imprint of a hoof cannot be made by anything other than the hoof itself. This differentiation seems to be a positive point of departure, with images possibly originating from something other than that of which they are an image. This makes the notion of image a far more complex one than the notion of vestige. Image can have a wider sense and does not refer to the original in the same way as vestige does. It is precisely this broader sense that raises some questions. Is it true, e.g., that humans possess a trustworthy image of God? Do they refer to the highest being? Or can we humans create these images ourselves?
At first glance, Ockham seemingly adopts an optimistic tone of voice: there must be something that God and humans have in common, if indeed these humans have been created in the image of God. The incarnational aspect seems to be preserved. Yet, the expression 'in common' does not really fit this relation. Ockham returns to the old Augustinian idea of a similitude. Of course, there must be something that applies to God and to humans, otherwise the latter would not be called the image of God. So indeed, there must be a similitude between humans and God.42 At this point, however, the analogy between Augustine and Ockham ends. Whereas Augustine is worried about the aspects of true and false and says that one has to look at a false Hector in the theatre in order to know the real one, Ockham is much more occupied by the fact that all these similitudes remain concepts. A concept can be true; it can even be univocally true when applied to God and to humans; but it is not the reality we are looking for. Speaking about the ultimate divine reality, we have to admit that we cannot know who God is: nihil est commune univocum deo et cuicumque naturae.43 There is a kind of scepticism regarding our reason that obliges him to conclude that reality itself, time and again, concerns only the individual reality. Hence, when we try to relate one reality to another, we can only refer to concepts, realizing that we cannot grasp the reality behind them.44 Therefore, speaking about similitudes, we can only stick to the level of concepts. Moreover, even these concepts will not satisfy us entirely. For example, speaking about God, we cannot find a reason why His unity gave birth to a Trinity.45 It is only because Scripture tells us to believe so.46 So, if we stay within the framework of the similitude, what then can be the characteristics God and humans have in common? In Ockham's view, these characteristics do indeed refer to God's qualities but, at the same time, they do not allow us to know God's essence.
38 Cp. (Boulnois 2009, pp. 271-92).
39 See, as an introduction to the complex thought of Ockham, (Normore 2012, 25p.).
40 Ord. I, dist 3, q. 9: Per experientiam enim patet quod si aliquis nullam penitus habet cognitionem de Hercule, si videat statuam Herculis non plus cogitabit de Hercule quam de Sorte.
41 Ibid.: Sed differunt vestigium et imago quia de ratione vestigii est quod sit causatum ab illo cuius est vestigium . . . de ratione autem imaginis non est quod sit causata ab illo cuius est imago, sicut imago Herculis sufficit quod causetur ab alio quam ab Hercule. Q 9: imago autem quia non est necessario causata ab illo cuius est imago.
42 Ibid.: illa creatura maxime proprie dicetur imago dei quae habet aliquid deo simillimum.
43 Ibid.
44 Cp. Ord., dist 2, q. 9: Dico quod conceptus univocus uno modo potest intelligi distingui contra conceptum proprium, alio modo potest intelligi distingui contra conceptum denominativum. Ockham here clearly speaks about concepts!
Think of notions such as beauty, goodness, and truth, or of qualities such as mercy, justice, and wisdom. 47 They certainly belong to God, but at the same time they cannot be identified with God's essence. It is even more complicated than this matter of concepts. As long as our arguments follow the lines of a similitude, it has to be true that this similitude only concerns the accidental traits of both the image and the original. A statue of Hercules can have the same proportions as Hercules, the same haircut, even the same colour. So there is a similitude, but all these characteristics do not allow us to identify the statue with Hercules himself. Hence, we have to conclude that, substantially speaking, an image and the original must be different. They do not share the same substance; they only share accidental common features. Therefore, what really makes them independent beings is indeed the fact they do not share their substance; there is only the similitude, based on accidental common features. Yet, in God there are no accidents. The solution then must be, once again, that we are only speaking about a concept of God; we cannot speak about His substance, about His being as something other than all other beings, though we cannot speak of Him in anything other than the terms of 'being.' Moreover, as we have already mentioned, Ockham states that all knowledge must be based on intuitive knowledge. This means that knowledge is based on certain memory. That is the function of the image: to lead towards the memory of that of which it is an image. 48 This memory must have an empirical base. Yet, of God there cannot be such a memory. What we are 'remembering' is only the range of qualities that can apply to God and humans, such as beauty. 
49 Hence, we are stuck in the realm of concepts where we look for any kind of words that univocally can be applied to x and y, but even though such concepts can be found, they still are not capable of revealing to us the very individual substance of each thing. We can speak of similitudes, of the accidents things can have in common, but we cannot define each individual substance by using words that can also be applied to other individual things. The incarnational aspect therefore becomes a limited one. What does this imply for the image? One of Ockham's relevant statements is that the image is not necessarily made by that of which it is an image. 50 However, when we speak about the image of God according to which humans have been created, this image can be the equivalent of a vestige: created by that of which it is a vestige. Apparently, there is room to argue that the image has preserved a referential character. That certainly is correct, but two important aspects definitely undermine this referential character. Firstly, we must keep in mind that this referential character only concerns the accidental aspect of a particular being. Secondly, there is no memory of God on which we can base our eventual knowledge. God cannot be compared to the other examples provided by Ockham. The image of Hercules only procures knowledge if we have seen Hercules in reality; otherwise, there cannot be any recognition. The same goes for his example of a vestige. We recognize the imprint of the hoof 45 Cp. (Friedmann 2010). 46 Ord., dist 2, q, 1: Ideo propter istam rationem dico quod sapientia divina omnibus modis est eadem essentiae divinae quibus essentia divina est eadem essentiae divinae, et sic de bonitate et iustitia; nec est ibi penitus aliqua distinction ex natura rei vel etiam non-identitas. 
Cuius ratio est, quia quamvis talis distinctio vel non-identitas formalis posset poni aeque faciliter inter essentiam divinam et sapientiam divinam sicut inter essentiam et relationem, quia tamen est difficillima ad ponendum ubicumque, nec credo eam esse faciliorem ad tenendum quam trinitatem personarum cum unitate essentiae, ideo non debet poni nisi evidenter sequitur ex creditis traditis in Scriptura Sacra vel determinatione Ecclesiae, propter cuius auctoritatem debet omnis ratio captivari. 47 Ord., dist. 3, q. 9. 48 Ibid.: ducere in recordationem illius cuius est imago. See also q. 9. 49 Ibid., q. 9: Q 10: illa, inquam, substantia creata haberet accidentia eiusdem rationis cum accidentibus dei, et ita vere esset imago ducens in recordationem dei sicut modo statua ducit in recordationem Herculis. 50 Ibid., q. 9: De ratione autem imaginis non est quod sit causata ab illo cuius est imago, sicut imago Herculis sufficit quod causetur ab alio quam ab Hercule.
of a cow only because we have already seen cows and hoofs. 51 All this does not apply to God. The epistemological and ontological aspects of the image that Augustine still tried to preserve have been lost. The same goes for language as an image. It touches only the accidental aspects of particular beings and it cannot be applied to God and humans. The conclusion must be that, where Augustine maintains that the image of Hector can bridge the epistemological gap between the different realities of God and man, Ockham strongly believes that the image of Hector is no longer capable of giving us access to the divine reality. The image does exist as a trinitarian image, sure, but it remains a concept and has a limited incarnational value. What is left is not the capacities of our mind, not our argumentation; no, these have only a limited reach. What is really left, on the contrary, is belief; belief that is based either on the Scripture or on the authority of the Church, and that teaches us that the Trinity is an existing reality in which essence and existence cannot be separated.
The Reformation: Iconoclasm
This was the epistemological situation at the start of the Reformation. The Reformation had to flee into the shelters of fideism, not knowing in fact how to solve the philosophical problems nominalism had confronted theology with. The biggest question was whether any system of reference could still be saved, repaired, or newly invented. How did humans relate to the divine? Unfortunately, the question was not solved. There was a kind of twofold development. On the one hand, the Reformation insisted once more on the role of grace, which came to be understood as a moment of extreme contingency in Late Medieval theology. Humans were not only unable to know God, now that the image of the Trinity was no longer a step towards knowledge of the divine; there was even more: divine grace had indeed become completely unpredictable, because God would otherwise be bound to human knowledge and thus lose His sovereignty. What was left was a theology of grace whereby God became a sovereign who was acting completely arbitrarily. Grace was no longer a means of a theological epistemology, but it became part of a theological anthropology, in which the overwhelming power of the original sin became the point of departure. Men were sinners and the only possibility of salvation was divine grace, though no one could count on it. The notion that humans were still reflecting the image of their Creator was no longer of great relevance. As the Reformation was neither able nor willing to solve the philosophical problems nominalism had dared to define, it refused to develop a philosophy of image and refused to design a kind of theological aesthetics. 52 That was the first move. On the other hand, it narrowed down the question of the image in the philosophical sense of the word to the topic of images in the pictorial sense, be it pictures or statues.
It favoured iconoclasm as a realistic and destructive counterpart of its refusal to develop a theology of image (though in a later era, even in Protestant countries, the arts flourished). 53 It turned itself against the arts, but it did not confront itself with the question of the image in the field of the arts and the field of philosophy. It did not consider the relation between a pictorial image and the reality, nor did it study the question of the similitude between humans as the images of God and the divine reality. 54 On the contrary, it condemned images as betrayals of the reality to which they pretended to refer. Where Augustine and Ockham still spoke of Hector and Hercules, this was no longer possible in the Reformation. Such examples were taken from a pagan culture; they were taken from the theatre, and from myths that had nothing to do with the 51 Ibid., q. 9: Et ita est de vestigio, quod si aliquis videat vstigium bovis recordabitur de bove habitualiter cognito, sed si numquam prius habuisset aliquam notitiam de bove non plus recordaretur de bove quam de asino. 52 Cp. Pettegree (2005), in particular the chapter 'The visual Image,' pp. 102-27, who shows that there are some exceptions, especially the role of Cranach. Luther did try to develop a new kind of pictorial template but failed in his endeavors. The same goes for Joseph Leo Koerner, The Reformation of the Image, Chicago 2003, who shows in a masterful way how Cranach modified the iconic scheme of the catholic tradition, thus creating a new template, but also destroying older ways of representing the divine. See also Viladesau (2008), in particular chp. II, The Protestant Reformation in the Church and the Arts, online, as well as (Finney 1999). 53 Cp. (Finney 1999). 54 Though it has to be admitted that there was already in the late medieval age a tendency to a different form of self-understanding and therefore also to a different form of representation and self-representation. 
See, for a detailed analysis, (Herbert 2017).
Scriptures, and therefore their existence had to be ignored. In a way, it represented the false church as opposed to the true church-a theme that was of paramount importance. 55 Iconoclasm was, in the end, the answer of the Reformation both on the level of arts and on the level of theology 56 : any theology of image was abandoned. The only visual aspects that were still accepted were the visual elements of the Last Supper and Baptism. They were considered to be signs that represented something of a seal, proving the veracity of the One who had pressed it into humans, but this sacramental theology did not lead to any theological aesthetics. The world of faith therefore became completely disconnected from the arts, 57 but even more so from the philosophical problems that went along with the notion of image, which from now on belonged exclusively to the arts. It was the arts that were to develop a philosophy of aesthetics. As Brad Gregory 58 has already concluded, the Reformation wanted to reform but it ended up in destructive tendencies. The Counter-Reformation, in its turn, used the arts as means of propaganda and valued the arts in a positive way. However, it could only accept the arts if they were willing to illustrate the message of the church, which, shortly after the Tridentinum, started its fight with rationalism as an independent capacity to analyse the physical reality apart from faith. This situation was to last until the end of the 19th century and found its last expression in the Syllabus Errorum of 1864. However, the Catholic tradition of the Counter-Reformation, though fully incorporated in the prevailing culture it was substantially involved in shaping, did not develop a new doctrine of the epistemological value of images, nor did it develop a new anthropology that viewed humans in another perspective than the one that had been left by Augustine and nominalism. 
The result, therefore, of the Reformation and the Counter-Reformation was that the notion of image became a void one. It had long functioned as a bridge between humans and the inconceivable reality of the divine, but now it had lost this referential character.
Modern Times: The Separation of a Visual Culture and Religion
Which brings us to modern times. There is nothing new in saying that modern culture is, for the greatest part, a visual culture. 59 One is constantly reminded of the fact that what really matters is one's image. Image indeed has become prevalent in our culture and its presence in many forms seems to be everywhere. In our daily life, we are walking from screen to screen; we are constantly watching our mobile phones and we are trying to make ourselves as attractive as possible, thanks to the images we create on Facebook, Instagram, and so on. We want to be looked at, by others as well as by ourselves, and taking selfies has therefore become one of the strongest means of satisfying this need. Yet, though images are omnipresent and though they represent a dominating force in our lives, they no longer possess the referential character they once had. The modern image tells us that 'what you see is what you get' and there is nothing behind it. It creates its own reality and it no longer reflects another one. The best example of this phenomenon is perhaps the world of modern politics. A politician must present herself to his electors and therefore will take into account what these electors want to hear. Presenting herself and her ideas would not do and she will have to comply with the demands of her electors. What she will therefore give them is an image of herself, an image that will satisfy the often-irrational needs of her electors. The perfect example of this imagining one's own image is the actual President of the U.S., a man who is no less in love with his image than his electors. However, the image he created is the only reality that is left and it does not have any referential character. His image no longer refers to any value; it does not refer to any tradition within the story of democracy, 55 Cp. the drawings of Lucas Cranach the Younger, picturing the difference between the true and false religion. Cp. (The Staatliche Museen Zu Berlin et al. 2016). 
56 A striking example, focusing in particular on the St. Bavo in Haarlem (Netherlands) is given by Mia M. Mochizuki (2008).
See also (Michalski 1993). 57 Cp. Randall C. Zachman (2007), who distinguishes in Calvin the rejection of pictorial art as 'dead' images, that can be opposed to the living images that Calvin allows. 58 (Gregory 2012). 59 Cp. (Heywood and Sandywell 1999;Rampley 2005;Plate 2002). nor to any institution inherited from the structures laid down by Montesquieu. This example can be expanded to many others. Who would not be surprised to see 'the mother of all parliaments,' the parliament of the UK, abasing itself to a kind of ongoing cabaret (at least in the month I am writing this article, March 2019)? Therefore, if power is obtained not by arguments, not by strategy, but by images, then the conclusion has to be that the modern visual culture is no longer the exclusive domain of arts or fashion; on the contrary, it has become the principal vector of our existence. Without creating one's image, without being looked at, one can scarcely pretend to exist. Curiously, the image is, literally, no longer an image because it no longer reflects an original of which it is the representation. An image, officially, can only be the image of something else-the original that gave birth to the image. Nowadays, however, the image has become its own reality and there is no need to refer to an underlying reality that is at the same time more complex and more simple than the image can be. This underlying reality can be a story, it can be a belief, it can be a tradition and so on; but all these kinds of realities are nowadays considered to be completely outdated. One might even say that the few stories that are still left provoke a growing aversion. The story of Europe does not inspire anymore, but is considered to be harmful to national interests. The story of democracy seems to be attacked and torn apart by two opposite forces: on the one hand, the cry for a strong man, and on the other, the cry for a people's democracy without any system of representation. 
The story of tolerance and hospitality is swept away by the argument of the 'clash of civilizations.'
Christianity in a Confessional Shelter
All in all, the values of Christianity and the Enlightenment seem to be at an end. This is a phenomenon that can easily be described as the mere result of technological progress and mass-media, but it is perhaps more complicated. The Christian story of the image has certainly contributed to the creation of this void image that no longer has a referential character. 60 It has definitely damaged the anthropology that pictured humans as more than mere puppets, abandoned by their Creator. Images have been devaluated in the long history of theology. They lost their ontological status and were merely interpreted as the reflection of moral behaviour, and thus it is within the Christian tradition that they had already lost their referential character. By focusing on morality, on the ontological inferiority, Christianity abandoned the strands of a positive approach to human existence. It is important not to conceal the role of Christian theology in these developments. This matters even more when we think of linking this development to various shifts that have changed the religious landscape in recent times. The main shift is perhaps the increasing influence of orthodoxy and evangelicalism, which often go hand in hand. Christianity seems to adopt the same attitude as it did in the second half of the 19th century, when it turned itself against modernism, against modern scholarship on the Bible and on the history of Christianity. Issues such as homosexuality, the role and place of women, euthanasia, and abortion have all become watersheds between modern liberal society and traditional Christianity. 61 This traditional Christianity does not consider itself as traditional, however, but as authentic and as a tradition that has preserved the message of the Bible. It is, in that sense, a current that aligns itself with Harnack and his Hellenisierungsthese. There is, in this concept, an original message of Christianity that should not be attenuated by cultural influences. 
On the contrary, there are some essentials (at some time called fundamentals 62 ) which cannot be exposed to the risk of being weakened by modern 60 In that sense, it is absolutely correct to presuppose that Christianity and secularization are connected to one another. Cp. (Bourdin 2015, pp. 192-205). 61 Cp. (Zafirovski 2009). To make this perfectly clear, the following quote may suffice: 'For illustration, US religious and political "reborn" conservatives condemn, attack, and seek to destroy liberal-secular, implicitly Jeff ersonian, democracy in America and beyond for its imputed ungodliness, notably the "mortal sin" of promoting and protecting human liberty at the expense of supra-human causes. These causes involve the primacy of Deity, including Biblical revelation, truth and inerrancy, faith and piety, religiously determined strict moral virtues, and nationalistic patriotism' (p. 258). 62 Cp. (Schimmel 2008). liberalism. 63 It is, in a way, another result of secularism, as secularism obliged religion to become a domain apart, no longer sharing public space with public institutions such as law, education, politics, and so on. As Torkel Brekke has it: 'religion came to be seen as a thing that could be detached from culture.' 64 Therefore, the dialogue between culture and religion was transformed into an opposition, leading to a kind of essentialism. This essentialism, which is far removed from modern scholarship, is reinforced by a kind of naïve fideism that pretends that the Scriptures are a source which is directly accessible and directly applicable. There is no awareness of the fact that the Scriptures are the product of a certain culture, that text and context cannot be separated one from another and that they should be interpreted extremely carefully because our knowledge of the times the Scriptures were born is limited. 
Assuming, however, that one should leave aside this cultural aspect, even then it should be clear that the voice of the Lord is only the voice the author of one of the sacred Books considers to be the voice of the Lord, without us knowing what His voice really is. In that sense, Ockham was right. However, these basic insights seem not to be relevant for the many believers with an orthodox or evangelical background. They consciously want to oppose themselves against modern liberalism (though not against economic liberalism) and find in their belief a stronghold against these modern times. 65 It must be added here that these tendencies are not limited to the evangelical movement alone. They are also present in a highly academic current such as Radical Orthodoxy and one could indeed uphold that even Radical Orthodoxy ends up in fideism. 66 Radical Orthodoxy does not style itself in a less controversial way and it strongly insists on the nihilism our society is suffering from. Yet, by accusing modern society of nihilism and by clearly defining itself as the remedy against this nihilism, it contributes to widening the gap, once again, between Christian belief and modern society. 67 The consequences of approaches such as those mentioned above are far from innocent. Because of their basic tenets, because of their roots in fideism and essentialism, these tendencies transform themselves into an identity discourse. Modern orthodox Christians do have a strong sense of their identity, and this identity is not compatible with modern liberalism, modern views on sexuality, on ethics, and so on. President Trump was elected with the full support of the evangelicals, because he is critical of abortion. He is even well supported by white evangelical women who prioritize issues such as immigration, Christian identity, and ethics over their own gender identity. 68
Image as a Stumbling Block for Christianity
In the midst of these developments, the whole question about the nature of images really matters. Indeed, the query of what images are and to what kind of reality they refer has been abandoned in our times. In our visual culture, the reference of the image to an original seems to be an issue that completely lacks relevance. If my image satisfies me, it is not because it fits into any greater story, such as the story of Europe, of democracy, or of Christianity. It fits because it reflects a 'me' which I consider to be the real 'me.' How do I know this 'me' fits 'me'? Because it is a similitude: it resembles others, but at the same time it is purely individual. It resembles others sufficiently enough to allow me not to worry about not being like the others. At the same time, it allows me, up to a certain extent, to be 63 Cp. Brouwer et al. (1996), which distinguishes four characteristics of this new fundamentalism: anti-Catholic, anti-feminist, anti-Communist, anti-Islam. 64 (Brekke 2012, p. 64 Shakespeare (2000, pp. 163-77), who quotes "There is no such thing as a secular realm, a part of the world that can be elevated above God and explained and investigated apart from him. There is however a great difference between Radical Orthodoxy and evangelical Christianity. The letter opposes itself against our modern culture by rejecting, the former engages in a fierce debate and fortunately dares to use philosophical argumentation. Still, it is used in order to state that culture without Christianity is lost." from Philip Blond. 68 PEW Research Center, 18/3/2019: Evangelical approval of Trump remains high, but other religious groups are less supportive (www.pewresearch.org). different from the others. This similitude suffers from the same difficulty as the one Ockham already drew attention to. 
It remains a concept without touching on the reality behind, without even being able to state whether there is a reality behind, because such a reality would always be a very particular and individual reality, one that cannot be related to a multitude of reflections and vestiges of it. This all suggests that the relations between humans in our time are comparable to a similitude between images. This strongly affects the way religion is conceived in our times and the way it develops. Religion accepts the break between the image and a reality behind the image; it is even one of the main causes of this break, and it does not care about reframing the theological anthropology. It does not try to re-establish the link between humans and the divine on the basis of an ontological notion of image. As it has left the notion of the image to the outside world-the cultural one-it is obliged to seek its truth outside the culture. Therefore, and unavoidably, in order to remain present within the surrounding culture, it must increasingly seek its foundation in itself, and it will become, compared to the culture in which it functions, more and more orthodox. That will, of course, create an increasing tension with this surrounding world. Religion in that sense is one of the most important dynamics in the current cultural wars, but, though it is a real and important aspect of modern binaries and divisions, this does not imply that society at large will become more religious in the traditional sense. The tension, then, between society and religion can only be resolved by focusing on specific issues that result from the orthodox confessional point of view. To put it quite straightforwardly, it can only be resolved by creating new battlefields, such as sexuality and gender equality, among others. Putting it differently: though religion no longer cares about the nature of image, it does use the void images we are living with. 
As there is no longer any theological anthropology making clear what could be the relation between humans and the divine, these void images that are left behind can still be used to discriminate between humans. When no one upholds that man and woman are created equal and that they are both created in the image of God, and when no one upholds this on the basis of a sane referential character of the image, it becomes easy to state that there is a difference between man and woman, as between black and white, homosexual and heterosexual, and so on. The loss of the referential character of the image procures the possibility for these fideist currents to create harmful distinctions. A certain backslide within the domain of values is therefore not unlikely; on the contrary. This is the paradox we will have to live with. Religion in the sense of confessional belief becomes less present. Yet, at the same time, its influence is only increasing in the domain of politics and society. The notion of the image, which once was an epistemological notion capable of handing over some knowledge of the divine, has become a void image, and it had already become so within the theology of the Reformation. This void image is one the main sources of our visual culture, where radical conservatism becomes the hallmark of the cultural wars and where the biggest bones of contention are situated in the domain of morals, gender, sexuality, and so on. That is, for the larger part, due to the fact that Christianity failed to reframe the notion of image and to restore the ontological relation between humans and the divine.
Conclusions, Cultural Theology Based on a Sound Anthropology
Looking back, then, to the beginning of this article and the question of secularization, my view will be clear. There is no return of religion in the traditional confessional sense of the word. Yet, religion continues to mark our society. It has transformed itself into a cultural statement, whilst at the same time positioning itself outside the dominating culture. It hopes to conquer the world not by missionary activities, but by associating itself with political points of view. In that sense, religion has proven to be capable of transforming itself, though at the same time this mechanism of political and religious interests intertwining and intersecting is a well-known phenomenon. Yet, one should also want to raise the question as to whether there is any 'solution' to this failing anthropology. The question is a difficult one. In principle, both the Catholic and Protestant traditions have had the opportunity to develop a balanced anthropology, in which the emphasis could have been put on the fact that all humans have been created in the image of God. In the words of Schleiermacher: All that is human is holy, for all is divine. 69 The Protestant tradition, which had, admittedly, since its birth turned away from culture and arts, could have done so because of its pneumatological character. This character could have relativized the importance of the church and its confessional tradition. Unfortunately, the confessional aspect became stronger and stronger and the Protestant tradition lost its capacity to relativize its own institutional nature. It was no longer capable of seeing beyond the walls of the church. The same goes, in fact, for the Catholic tradition, which had already developed a strong anti-modernist tradition by the 19th century. Yet, the Catholic tradition originally insisted strongly on the incarnation and therefore could have been capable of seeing the reflection of the divine in all human beings. 
Of course it did so, but it narrowed down this acknowledgement to those people who belonged to the church. Put simply: in both traditions the confessional aspect dominated the original intuitions, and this was due to the fear that modernity and rationality would definitely harm the place and role of the church. To echo Schleiermacher once again, they venerated a Scripture that was no more than a mausoleum, a monument for the spirit that once was there but that no longer dwells in letters hard as stone. 70 This tendency deepened the gap between culture and society on the one hand and the church and belief on the other. It led, indeed, to a kind of fideism that was considered to be a stronghold in times of increasing antagonisms and that could only make its voice heard by creating a nexus with conservative political stances. At least, this is the case in the early 21st century. Earlier, in the 1990s, Protestant churches in particular did the contrary and associated themselves with progressive views. The mechanism, however, has remained the same: confession losing influence and trying to preserve it by cherry-picking at the table of politicians. Yet, if Christianity were to redefine its position and if it were willing to contribute to creating a counterweight to the visual culture in which no image refers anymore to a reality behind it, it should develop a new concept of what makes humans human, based on a sane doctrine of the image of God. That would imply and demand a full commitment to society and culture instead of emphasising opposition. It would also demand the awareness that religion is nothing more than one of the human cultural expressions, not a domain apart. What Christianity needs, therefore, is the kind of spirit Schleiermacher showed in his On Religion: Speeches to its Cultured Despisers, in which he shed off all confessionalism. 
His wording is unsurpassed and merits rereading in our times: You wish always to stand on your own feet and go your own way, and this worthy intent should not scare you from religion. Religion is no slavery, no captivity, least of all for your reason. You must belong to yourselves. Indeed, this is an indispensable condition of having any part in religion. 71
On Araujo's Theorem for flows
Araujo proved in his thesis [3] that a $C^1$ generic surface diffeomorphism has either infinitely many sinks (i.e. attracting periodic orbits) or finitely many hyperbolic attractors with full Lebesgue measure basin. The goal of this paper is to extend this result to $C^1$ vector fields on compact connected boundaryless manifolds $M$ of dimension 3 (three-dimensional flows for short). More precisely, we shall prove that a $C^1$ generic three-dimensional flow without singularities has either infinitely many sinks or finitely many hyperbolic attractors with full Lebesgue measure basin.
Introduction
Araujo proved in his thesis [3] that a C^1 generic surface diffeomorphism has either infinitely many sinks (i.e. attracting periodic orbits) or finitely many hyperbolic attractors with full Lebesgue measure basin. The goal of this paper is to extend this result to C^1 vector fields on compact connected boundaryless manifolds M of dimension 3 (three-dimensional flows for short). More precisely, we shall prove that a C^1 generic three-dimensional flow without singularities has either infinitely many sinks or finitely many hyperbolic attractors with full Lebesgue measure basin. Notice that this result implies Araujo's [3] by the standard suspension procedure. We stress that the only available proofs of Araujo's Theorem [3] are the original one [3] and the third author's dissertation [24] under the guidance of the first author (both in Portuguese). A proof of a weaker result, but with the full Lebesgue measure condition replaced by openness and denseness, was sketched in the draft note [22]. Let us state our result in a precise way.
The space of three-dimensional flows equipped with the C^1 topology will be denoted by X^1(M). The flow of X ∈ X^1(M) is denoted by X_t, t ∈ R. By a singularity we mean a point x where X vanishes, i.e., X(x) = 0. A subset of X^1(M) is residual if it is a countable intersection of open and dense subsets. We say that a C^1 generic three-dimensional flow satisfies a certain property P if there is a residual subset R of X^1(M) such that P holds for every element of R. The closure operation is denoted by Cl(·).
Given X ∈ X^1(M) we denote by O_X(x) = {X_t(x) : t ∈ R} the orbit of a point x. By an orbit of X we mean a set O = O_X(x) for some point x. An orbit is periodic if it is closed as a subset of M. The corresponding points are called periodic points. Clearly x is a periodic point if and only if there is a minimal t_x > 0 satisfying X_{t_x}(x) = x (we use the notation t_{x,X} to indicate dependence on X). Clearly if x is periodic, then DX_{t_x}(x) : T_xM → T_xM is a linear automorphism having 1 as an eigenvalue with eigenvector X(x). The remaining eigenvalues (i.e. those not corresponding to X(x)) will be referred to as the eigenvalues of x. We say that a periodic point x is a sink if its eigenvalues are less than one in modulus. We say that X has infinitely many sinks if it has infinitely many sinks corresponding to different orbits of X.
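To make the sink condition concrete, here is a small numerical sketch (a toy vector field of our own choosing, not an example from the paper): for X(x, y, z) = (−y + x(1 − x² − y²), x + y(1 − x² − y²), −z) on R³, the unit circle in {z = 0} is a periodic orbit of period t_x = 2π. Integrating the variational equation Φ' = DX(X_s(x))Φ over one period gives the monodromy matrix DX_{t_x}(x); its spectrum consists of the trivial multiplier 1 (direction X(x)) together with the two eigenvalues of the orbit, here e^{−4π} and e^{−2π}, both of modulus less than one, so the orbit is a sink.

```python
import numpy as np

def vf(p):
    # toy vector field: attracting periodic orbit = unit circle in {z = 0}
    x, y, z = p
    r2 = x*x + y*y
    return np.array([-y + x*(1 - r2), x + y*(1 - r2), -z])

def jac(p):
    # Jacobian DX(p) of the toy vector field
    x, y, z = p
    return np.array([
        [1 - 3*x*x - y*y, -1 - 2*x*y,       0.0],
        [1 - 2*x*y,        1 - x*x - 3*y*y, 0.0],
        [0.0,              0.0,            -1.0],
    ])

def rhs(s):
    # augmented state: point p plus fundamental matrix Phi (flattened)
    p, Phi = s[:3], s[3:].reshape(3, 3)
    return np.concatenate([vf(p), (jac(p) @ Phi).ravel()])

def monodromy(p0, period, n_steps=20000):
    # fixed-step RK4 for the flow together with its variational equation
    h = period / n_steps
    s = np.concatenate([p0, np.eye(3).ravel()])
    for _ in range(n_steps):
        k1 = rhs(s)
        k2 = rhs(s + 0.5*h*k1)
        k3 = rhs(s + 0.5*h*k2)
        k4 = rhs(s + h*k3)
        s = s + (h/6)*(k1 + 2*k2 + 2*k3 + k4)
    return s[3:].reshape(3, 3)

M = monodromy(np.array([1.0, 0.0, 0.0]), 2*np.pi)
mults = sorted(np.abs(np.linalg.eigvals(M)))  # Floquet multipliers (moduli)
# trivial multiplier 1 along X(x); the other two are e^{-4*pi}, e^{-2*pi} < 1
print(mults)
```

The same computation applies to any numerically known periodic orbit: one discards the multiplier ≈ 1 in the flow direction and checks whether the remaining moduli are below one.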
Given a point x we define the omega-limit set ω(x) = {y ∈ M : y = lim_{k→∞} X_{t_k}(x) for some sequence t_k → ∞}.
(When necessary we shall write ω_X(x) to indicate the dependence on X.) We call Λ ⊂ M invariant if X_t(Λ) = Λ for all t ∈ R; transitive if there is x ∈ Λ such that Λ = ω(x); and non-trivial if it does not reduce to a single orbit. The basin of any subset Λ ⊂ M is defined by W^s(Λ) = {y ∈ M : ω(y) ⊂ Λ}.
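These notions can be tested numerically on a toy example (our own choice, not from the paper): take X(x, y, z) = (−y + x(1 − x² − y²), x + y(1 − x² − y²), −z) on R³ and let Λ be the periodic orbit {x² + y² = 1, z = 0}. Every point off the z-axis lies in W^s(Λ), since its forward orbit accumulates on Λ and hence ω(y) ⊂ Λ. The sketch integrates one such point far forward in time and checks that it has approached Λ.

```python
import numpy as np

def vf(p):
    # toy vector field whose unit circle in {z = 0} attracts almost every point
    x, y, z = p
    r2 = x*x + y*y
    return np.array([-y + x*(1 - r2), x + y*(1 - r2), -z])

def flow(p0, T, n_steps=40000):
    # approximate X_T(p0) with a fixed-step RK4 integrator
    h = T / n_steps
    p = np.array(p0, dtype=float)
    for _ in range(n_steps):
        k1 = vf(p)
        k2 = vf(p + 0.5 * h * k1)
        k3 = vf(p + 0.5 * h * k2)
        k4 = vf(p + h * k3)
        p = p + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return p

q = flow([0.3, 0.1, 0.5], 40.0)
# distance-like gauge to the orbit: radial deviation plus height above z = 0
gap = abs(np.hypot(q[0], q[1]) - 1.0) + abs(q[2])
print(gap)  # tiny: the forward orbit has accumulated on the periodic orbit
```

Of course a finite-time computation only suggests, and cannot prove, membership in a basin; it is meant as an illustration of the definition of W^s(Λ).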
(Sometimes we write W^s_X(Λ) to indicate dependence on X.) An attractor is a transitive set A exhibiting a neighborhood U such that A = ⋂_{t>0} X_t(U). A compact invariant set Λ of X is hyperbolic if there are a continuous invariant tangent bundle decomposition T_ΛM = E^s_Λ ⊕ E^X_Λ ⊕ E^u_Λ over Λ and positive numbers K, λ such that E^X_x is generated by X(x), ‖DX_t(x)/E^s_x‖ ≤ Ke^{−λt} and ‖DX_{−t}(x)/E^u_{X_t(x)}‖ ≤ Ke^{−λt}, ∀(x, t) ∈ Λ × R^+. With these definitions we can state our result.
Araujo's Theorem for nonsingular flows. A C 1 generic three-dimensional flow without singularities has either infinitely many sinks or finitely many hyperbolic attractors with full Lebesgue measure basin.
This result is closely related to Corollary 1.1 in [20] which asserts that a C 1 generic three-dimensional flow has either an attractor or a repeller (i.e. an attractor for the time-reversed flow). Indeed, Araujo's Theorem for nonsingular flows implies the existence of at least one hyperbolic attractor, but in the nonsingular case only.
The idea of the proof of Araujo's Theorem for nonsingular flows is as follows. For any three-dimensional flow X we define the dissipative region as the closure of the set of periodic points where the product of the eigenvalues is less than 1 in modulus. Two important properties of the dissipative region of a C^1 generic flow X without singularities will be proved. The first one, in Theorem 3.1, is that there is a full Lebesgue measure set L_X of points whose omega-limit set intersects the dissipative region. The second one, in Theorem 5.1, is that if X has only a finite number of sinks, then the dissipative region is hyperbolic, and so splits into a finite disjoint collection of homoclinic classes and sinks. These properties allow us to apply the results in [8] concerning the neutrality of homoclinic classes. In particular, we conclude for C^1 generic three-dimensional flows X that every point in L_X is contained in the basin of the homoclinic classes and sinks in the dissipative region. To complete the proof we will show in Theorem 4.1 that those homoclinic classes in the dissipative region attracting a positive Lebesgue measure set of points are in fact hyperbolic attractors.
Dissipative Region
Hereafter, we will consider three-dimensional flows without singularities only.
Let X be a three-dimensional flow. We say that a periodic point x is dissipative if |det DX_{t_x}(x)| < 1. Denote by Per_d(X) the set of dissipative periodic points. The dissipative region of X is defined by Dis(X) = Cl(Per_d(X)).
On the other hand, a saddle of X is a periodic point having eigenvalues of modulus both less and bigger than 1. We denote by Saddle(X) the set of saddles of X and by Sink(X) the set of sinks of X. Define the set of dissipative saddles, Saddle_d(X) = Per_d(X) ∩ Saddle(X). Whenever every dissipative periodic point of X is hyperbolic, one has the equality
(1) Dis(X) = Cl(Saddle_d(X) ∪ Sink(X)).
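Since X(x) is an eigenvector of DX_{t_x}(x) with eigenvalue 1, the dissipativity condition can be read off the two remaining eigenvalues; writing λ(x), μ(x) for the eigenvalues of a periodic point x, one obtains the reformulation used repeatedly in Section 7:

```latex
\[
\det DX_{t_x}(x) \;=\; 1 \cdot \lambda(x)\,\mu(x),
\qquad\text{so}\qquad
x \in \operatorname{Per}_d(X) \iff |\lambda(x)\,\mu(x)| < 1 .
\]
```

In particular, a hyperbolic dissipative periodic point is either a sink (both moduli less than 1) or a dissipative saddle, since both moduli cannot exceed 1 when their product does not.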
In particular, this equality is true for Kupka-Smale flows, i.e., flows for which every periodic orbit is hyperbolic and the stable and unstable manifolds are in general position [11]. Through any saddle x there passes a pair of invariant manifolds, the so-called strong stable and strong unstable manifolds W^{ss}(x) and W^{uu}(x), tangent at x to the eigenspaces corresponding to the eigenvalues of modulus less and bigger than 1, respectively [12]. Saturating these manifolds with the flow we obtain the stable and unstable manifolds W^s(x) and W^u(x), respectively. A homoclinic point associated to x is a point where these last manifolds meet, whereas a homoclinic point q is transverse if T_qW^s(x) ∩ T_qW^u(x) is the one-dimensional space generated by X(q). The homoclinic class associated to x is the closure of the set of transverse homoclinic points q associated to x. A homoclinic class of X is the homoclinic class associated to some saddle of X. A dissipative homoclinic class is a homoclinic class associated to a dissipative saddle.
It follows easily from the Birkhoff-Smale Theorem [11] that every homoclinic class associated to a dissipative saddle is contained in the dissipative region. Furthermore, if Dis(X) is hyperbolic, then there is a finite disjoint union
(2) Dis(X) = H_1 ∪ ··· ∪ H_r ∪ s_1 ∪ ··· ∪ s_l,
where each H_i is a dissipative homoclinic class and each s_j is the orbit of a sink. Next we will introduce some results from [8]. Let Λ be a compact invariant set of X. We say that Λ is Lyapunov stable for X if for every neighborhood U of Λ there is a neighborhood V ⊂ U of Λ such that X_t(V) ⊂ U for all t ≥ 0. We say that Λ is neutral if Λ = Λ⁺ ∩ Λ⁻, where Λ^± is a Lyapunov stable set for ±X. The following can be proved as in Lemma 2.2 of [8].
Lemma 2.1. Let Λ be a neutral compact invariant set of X. If ω(x) ∩ Λ ≠ ∅, then ω(x) ⊂ Λ.
For every subset Λ ⊂ M we define the weak basin by W^s_w(Λ) = {x ∈ M : ω(x) ∩ Λ ≠ ∅}. This is also called the weak region of attraction [5].
Lemma 2.2. There is a residual subset R_2 of three-dimensional flows X such that if Dis(X) is hyperbolic, then there is a finite disjoint collection of dissipative homoclinic classes H_1, ···, H_r and orbits of sinks s_1, ···, s_l such that
W^s_w(Dis(X)) = (⋃_{i=1}^r W^s(H_i)) ∪ (⋃_{j=1}^l W^s(s_j)).
Proof. It follows from the results in Section 3 of [8] that there is a residual subset R_2 of three-dimensional flows whose homoclinic classes are all neutral. Now suppose that X ∈ R_2 and that Dis(X) is hyperbolic. Then, we obtain a finite disjoint collection of dissipative homoclinic classes H_1, ···, H_r and sinks s_1, ···, s_l satisfying (2). Therefore, if x ∈ W^s_w(Dis(X)), then ω(x) intersects either H_i or s_j for some 1 ≤ i ≤ r and 1 ≤ j ≤ l. In the second case we have x ∈ W^s(s_j) while, in the first, we also have x ∈ W^s(H_i) by Lemma 2.1, since every homoclinic class is neutral. This proves one inclusion. As the reversed inclusion is obvious, we are done.
Weak basin of the dissipative region
Hereafter we denote by m(·) the (normalized) Lebesgue measure of M . The result of this section is the following.
Theorem 3.1. There is a residual subset R_6 of three-dimensional flows X for which m(W^s_w(Dis(X))) = 1.
To prove it we will need some preliminary notation and results. Let δ_p be the Dirac measure supported on a point p. Given a three-dimensional flow X, a point p and t > 0, we define the Borel probability measure
μ_{p,t} = (1/t) ∫_0^t δ_{X_s(p)} ds.
(The notation μ^X_{p,t} indicates dependence on X.) Denote by M(p, X) the set of Borel probability measures μ = lim_{k→∞} μ_{p,t_k} for some sequence t_k → ∞, the limit being taken in the weak* topology. Notice that each μ ∈ M(p, X) is invariant, i.e., μ ∘ X_{−t} = μ for every t ≥ 0. With these notations we have the following lemma.
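The relevance of these empirical measures comes from Liouville's formula, which turns the Jacobian determinant along an orbit into a time average of the divergence; in the notation above (writing μ_{p,t} for the empirical measure along the orbit of p),

```latex
\[
\log\bigl|\det DX_t(p)\bigr|
\;=\; \int_0^t \operatorname{div}X\bigl(X_s(p)\bigr)\,ds
\;=\; t\int \operatorname{div}X \, d\mu_{p,t} .
\]
```

Consequently any weak* limit μ ∈ M(p, X) satisfies ∫ div X dμ = lim_k (1/t_k) log|det DX_{t_k}(p)|; this is the computation behind Lemma 3.2 and the proof of Theorem 3.1.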
Lemma 3.2. For every three-dimensional flow X there is a full Lebesgue measure set L_X ⊂ M such that ∫ div X dμ ≤ 0 for every x ∈ L_X and every μ ∈ M(x, X).
Proof. For every δ > 0 we define Λ_δ(X) as the set of points x ∈ M for which there is an integer N_x > 1 such that |det DX_t(x)| ≤ (1 + δ)^t for every t ≥ N_x. We assert that m(Λ_δ(X)) = 1 for every δ > 0. This assertion is similar to one for surface diffeomorphisms given by Araujo [3].
To prove it we define We claim that Indeed, take ǫ > 0 and for each integer n we define On the one hand, we get easily that where (·) c above denotes the complement operation. On the other hand, As ǫ > 0 is arbitrary we get (3). This proves the claim. Now, we continue with the proof of the assertion. Fix 0 < ρ < δ and η > 0 such that Choose 0 < s < 1 satisfying Take x ∈ Λ ρ (s). Then, there is an integer N x > 1 such that Thus, Then, the choice of η, ρ above yields But (3) implies m(Λ ρ (s)) = 1 so m(Λ δ (X)) = 1 proving the assertion.
To continue with the proof of the lemma, we notice that Λ_{δ'}(X) ⊂ Λ_δ(X) whenever δ' ≤ δ. It then follows from the assertion that L_X has full Lebesgue measure, where L_X = ⋂_{k∈ℕ⁺} Λ_{1/k}(X). Now, take x ∈ L_X, μ ∈ M(x, X) and ε > 0. Fix k > 0 with log(1 + 1/k) < ε.
By definition we have x ∈ Λ_{1/k}(X), and so there is an integer N_x > 1 such that |det DX_t(x)| ≤ (1 + 1/k)^t for every t ≥ N_x. Take a sequence μ_{x,t_i} → μ with t_i → ∞. Then, we can assume t_i ≥ N_x for all i. Applying Liouville's Formula [16] we obtain
∫ div X dμ = lim_{i→∞} (1/t_i) log|det DX_{t_i}(x)| ≤ log(1 + 1/k) < ε.
Since ε > 0 is arbitrary, we obtain the result.
Again we let X be a three-dimensional flow. Given x ∈ M we define N_x as the orthogonal complement of X(x) in T_xM (when necessary we will write N^X_x to indicate dependence on X). The union N = ⋃_{x∈M} N_x turns out to be a vector bundle with fiber N_x. Denote by π^X_x : T_xM → N_x the orthogonal projection, and by P^X_t(x) = π^X_{X_t(x)} ∘ DX_t(x)|_{N_x} the associated linear Poincaré flow. We shall use the following version of the classical Franks's Lemma [10] (c.f. Appendix A in [6]).
Lemma 3.3 (Franks's Lemma for flows).
For every three-dimensional flow X and every neighborhood W(X) of X there is a neighborhood W_0(X) ⊂ W(X) of X such that for any T > 0 there exists ε > 0 with the following property: for any Z ∈ W_0(X) and p ∈ Per(Z), any tubular neighborhood U of O_Z(p), any partition 0 = t_0 < t_1 < ... < t_n = t_{p,Z} with t_{i+1} − t_i < T, and any family of linear maps L_i : N_{Z_{t_i}(p)} → N_{Z_{t_{i+1}}(p)} satisfying ‖L_i − P^Z_{t_{i+1}−t_i}(Z_{t_i}(p))‖ ≤ ε for 0 ≤ i ≤ n − 1, there is Y ∈ W(X) with Y = Z along O_Z(p) and outside U such that P^Y_{t_{i+1}−t_i}(Z_{t_i}(p)) = L_i for 0 ≤ i ≤ n − 1.

Proof of Theorem 3.1. Denote by 2^M_c the set of compact subsets of M. Let S : X^1(M) → 2^M_c be defined by S(X) = Cl(Saddle_d(X) ∪ Sink(X)). It follows easily from the continuous dependence of the eigenvalues of a periodic point with respect to X that this map is lower-semicontinuous, i.e., for every X ∈ X^1(M) and every open set O intersecting S(X), the set S(Y) still intersects O for every Y close to X. From this and well-known properties of lower-semicontinuous maps [13], [14], we obtain a residual subset A ⊂ X^1(M) where S is upper-semicontinuous, i.e., for every X ∈ A and every compact subset K satisfying S(X) ∩ K = ∅ one has S(Y) ∩ K = ∅ for every Y close to X. By the Ergodic Closing Lemma for flows (Theorem 3.9 in [26] or Corollary in p. 1270 of [17]) there is another residual subset B of three-dimensional flows X such that for every ergodic measure μ of X there are sequences Y_k → X and p_k (of periodic points of Y_k) such that μ^{Y_k}_{p_k,t_{p_k,Y_k}} → μ.
By the Kupka-Smale Theorem [11] there is a residual subset of Kupka-Smale three-dimensional flows KS.
Define R 6 = A ∩ B ∩ KS. Then, R 6 is a residual subset of three-dimensional flows.
To prove the result we only need to prove that L_X ⊂ W^s_w(Dis(X)) for every X ∈ R_6, where L_X is the full Lebesgue measure set in Lemma 3.2. By contradiction assume that it is not so. Then, there are X ∈ R_6 and x ∈ L_X satisfying ω(x) ∩ Dis(X) = ∅. Since X ∈ KS, we have Dis(X) = S(X) from (1). Then, since S is upper-semicontinuous at X ∈ A, we can arrange neighborhoods U of ω(x) and W(X) of X such that
(4) S(Z) ∩ U = ∅ for every Z ∈ W(X).
Put W(X) and T = 1 in the Franks's Lemma for flows to obtain ε > 0 and the neighborhood W_0(X) ⊂ W(X). Since M is compact, we have M(x, X) ≠ ∅ and so we can fix μ ∈ M(x, X). Since x ∈ L_X, we have ∫ div X dμ ≤ 0 by Lemma 3.2. By the Ergodic Decomposition Theorem [16] we can assume that μ is ergodic. Since X ∈ B, there are sequences Y_k → X and p_k (of periodic points of Y_k) such that μ^{Y_k}_{p_k,t_{p_k,Y_k}} → μ. It then follows from Liouville's Formula [16] that (1/t_{p_k,Y_k}) log|det D(Y_k)_{t_{p_k,Y_k}}(p_k)| = ∫ div Y_k dμ^{Y_k}_{p_k,t_{p_k,Y_k}} is, for large k, as close as we want to ∫ div X dμ ≤ 0. Once we fix such a k, write t_{p_k,Y_k} = n + r where n ∈ ℕ⁺ is the integer part of t_{p_k,Y_k} and 0 ≤ r < 1. This induces the partition 0 = t_0 < t_1 < ... < t_n < t_{n+1} = t_{p_k,Y_k} with t_i = i for 0 ≤ i ≤ n.
Define the corresponding linear maps L_i by slightly contracting the linear Poincaré maps of Y_k along this partition. A direct computation shows that ‖L_i − P^{Y_k}_{t_{i+1}−t_i}((Y_k)_{t_i}(p_k))‖ ≤ ε. Then, by the Franks's Lemma for flows, there exists Z ∈ W(X) with Z = Y_k along O_{Y_k}(p_k). Consequently, t_{p_k,Z} = t_{p_k,Y_k} and also P^Z_{t_{p_k,Z}}(p_k) has determinant of modulus less than 1. Up to a small perturbation if necessary we can assume that p_k has no eigenvalues of modulus 1. Then, p_k ∈ Saddle_d(Z) ∪ Sink(Z) by the previous inequality, which implies p_k ∈ U ∩ (Saddle_d(Z) ∪ Sink(Z)). But Z ∈ W(X), so we obtain a contradiction by (4) and the result follows.
We say that x is a dissipative presaddle of a three-dimensional flow X if there are sequences Y_k → X and x_k ∈ Saddle_d(Y_k) such that x_k → x. Compare with [27]. Denote by Saddle*_d(X) the set of dissipative presaddles of X. We only need the following elementary property of the set of dissipative presaddles, whose proof is a direct consequence of the definition.
Lemma 3.4. The map X ↦ Saddle*_d(X) is upper semicontinuous: if U is an open set containing Saddle*_d(X), then Saddle*_d(Y) ⊂ U for every Y close enough to X.
Lebesgue measure of the basin of hyperbolic homoclinic classes
This section is devoted to the proof of the following result.
Theorem 4.1. There is a residual subset R_11 of three-dimensional flows Y such that if Cl(Saddle_d(Y)) is hyperbolic, then every dissipative homoclinic class of Y whose basin has positive Lebesgue measure is a hyperbolic attractor.
For this we need the lemma below. Given a homoclinic class H = H_X(p) of a three-dimensional flow X we denote H_Y = H_Y(p_Y), where p_Y is the analytic continuation of p for Y close to X (c.f. [21]).
Lemma 4.2. There is a residual subset R_12 of three-dimensional flows such that, for every X ∈ R_12 and every hyperbolic homoclinic class H of X, there are a neighborhood O_{X,H} of X and a residual subset R_{X,H} of O_{X,H} for which the following properties are equivalent:
(1) m(W^s_Y(H_Y)) = 0 for every Y ∈ R_{X,H}.
(2) H is not an attractor.
Proof. As in Theorem 4 of [1], there is a residual subset R_12 of three-dimensional flows X such that, for every homoclinic class H of X, the map Y ↦ H_Y varies continuously at X. Now, let H be a hyperbolic homoclinic class of some X ∈ R_12. Since H is hyperbolic, H has a local product structure. From this and the flow version of Proposition 8.22 in [25] we have that H is uniformly locally maximal, i.e., there are a compact neighborhood U of H and a neighborhood O_{X,H} of X such that H = ⋂_{t∈ℝ} X_t(U). But the map Y ↦ H_Y varies continuously at X. From this we can assume, up to shrinking O_{X,H} if necessary, that H_Y ⊂ U, and so H_Y ⊂ ⋂_{t∈ℝ} Y_t(U), for every Y ∈ O_{X,H}. Now, we use the equivalence in (b) above and the well-known transitivity of homoclinic classes to conclude that ⋂_{t∈ℝ} Y_t(U) is a transitive set of Y. From this we get easily that H_Y = ⋂_{t∈ℝ} Y_t(U). We claim that if H is not an attractor, then there is a residual subset L_{X,H} of O_{X,H} such that
(5) m(W^s_Y(H_Y)) = 0 for every Y ∈ L_{X,H}.
Set U_ε = {Y ∈ O_{X,H} : m(W^s_Y(H_Y)) < ε}. We assert that U_ε is open and dense in O_{X,H} for every ε > 0. To prove it we use an argument from [2].
For the openness, take ε > 0 and Y ∈ U_ε. It follows from the definitions that there is N large such that m(Λ^N_Y) < ε (here B_δ(·) denotes the δ-ball operation). Since N is fixed, we can select a neighborhood of Y whose elements Z still satisfy m(Λ^N_Z) < ε, therefore yielding the openness of U_ε.
For the denseness, take D as the set of C² flows in O_{X,H}. Clearly D is dense in O_{X,H}. Since H is not an attractor and H_Y is conjugate to H, we have that H_Y is not an attractor either, for all Y ∈ O_{X,H}. In particular, no Y ∈ O_{X,H} has an attractor in U. Applying Corollary 5.7 in [7] we conclude that m(W^s_Y(H_Y)) = 0 for every Y ∈ D. From this we have D ⊂ U_ε for all ε > 0. As D is dense in O_{X,H}, we are done.
It follows from the assertion that L_{X,H} = ⋂_{n∈ℕ⁺} U_{1/n} is a residual subset of O_{X,H} for which (5) holds, and the claim follows. Now, we define R_{X,H} = L_{X,H} if H is not an attractor, and R_{X,H} = O_{X,H} otherwise. Suppose that m(W^s_Y(H_Y)) = 0 for every Y ∈ R_{X,H}. If H were an attractor, then H_Y would also be one, by the equivalence above; thus m(W^s_Y(H_Y)) > 0 for all Y ∈ O_{X,H}, yielding a contradiction. Therefore, H cannot be an attractor.
If, conversely, H is not an attractor, then R X,H = L X,H and so m(W s Y (H Y )) = 0 for every Y ∈ R X,H by (5). This completes the proof. Fix X ∈ R \ A. Then, Cl(Saddle d (X)) is hyperbolic and so there are finitely many disjoint dissipative homoclinic classes H 1 , · · · , H rX (all hyperbolic) satisfying As X ∈ R ⊂ R 12 , we can consider for each 1 ≤ i ≤ r X the neighborhood O X,H i of X as well as its residual subset R X,H i given by Lemma 4.2. Define, Recalling (c) in the proof of Lemma 4.2, we obtain for each 1 ≤ i ≤ r X a compact neighborhood U X,H i of H i such that As X ∈ A, S is upper semicontinuous at X. So, we can further assume that This easily implies We have that O 12 is open and R 12 is residual in O 12 . Finally we define Since R is a residual subset of three-dimensional flows, we conclude from Proposition 2.6 in [19] that R 11 also is. Now, take a Y ∈ R 11 such that Cl(Saddle d (Y )) is hyperbolic and let H be a dissipative homoclinic class of Y . Then, H ⊂ Cl(Saddle d (Y )) from Birkhoff-Smale's Theorem [11]. (6), so, H i Y is an attractor too and we are done.
Hyperbolicity of the dissipative presaddle set
In this section we shall prove the following result. Hereafter card(Sink(X)) denotes the number of distinct orbits of sinks of a three-dimensional flow X.
Theorem 5.1. There is a residual subset of three-dimensional flows Q such that if X ∈ Q and card(Sink(X)) < ∞, then Saddle * d (X) is hyperbolic. The proof is based on the auxiliary definition below.
Definition 5.2. We say that a three-dimensional flow X has finitely many sinks robustly if card(Sink(X)) < ∞ and, moreover, there is a neighbourhood U_X of X such that card(Sink(Y)) = card(Sink(X)) for every Y ∈ U_X. We denote by S(M) the set of three-dimensional flows with this property.
Recall that a compact invariant set Λ has a dominated splitting (with respect to the linear Poincaré flow) if there exist a continuous invariant decomposition N_Λ = E_Λ ⊕ F_Λ and T > 0 such that
‖P^X_T(x)|_{E_x}‖ · ‖P^X_{−T}(X_T(x))|_{F_{X_T(x)}}‖ ≤ 1/2 for every x ∈ Λ.
The proof of the following result is postponed to Section 7.
Proposition 5.3. If X ∈ S(M), then Saddle*_d(X) has a dominated splitting.
Proof of Theorem 5.1. Fix X ∈ C \ A. Then, X ∈ C and card(Sink(X)) < ∞. Since φ is upper semicontinuous at X, we conclude that there is an open neighborhood O_X of X such that card(Sink(Y)) = card(Sink(X)) for every Y ∈ O_X, and so, O_X ⊂ S(M).
By the Kupka-Smale theorem [11] we can find a dense subset D X ⊂ O X formed by C 2 Kupka-Smale three-dimensional flows. Furthermore, we can assume that every Y ∈ D X has neither normally contracting nor normally expanding irrational tori (see [4] for the corresponding definition).
Let us prove that Saddle*_d(Y) is hyperbolic for every Y ∈ D_X. Take any Y ∈ D_X. Then Y ∈ S(M), and so, Saddle*_d(Y) has a dominated splitting by Proposition 5.3. On the other hand, it is clear from the definition that every periodic point of Y in Saddle*_d(Y) is a saddle. Then, Theorem B in [4] implies that Saddle*_d(Y) is the union of a hyperbolic set and normally contracting irrational tori. Since no Y ∈ D_X has such tori, we are done.
We claim that every Y ∈ D_X exhibits an open neighborhood U_Y of Saddle*_d(Y) and a neighborhood V_Y of Y such that any compact invariant set of any Z ∈ V_Y contained in U_Y is hyperbolic [11]. Applying Lemma 3.4 we can assume that Saddle*_d(Z) ⊂ U_Y for every Z ∈ V_Y, proving the claim.
Define Q as the union of A with the open sets O'_X := ⋃_{Y∈D_X} V_Y, for X ∈ C \ A; Q is residual by Proposition 2.6 in [19]. Now, take Y ∈ Q with card(Sink(Y)) < ∞. Then, Y ∉ A and so Y ∈ O'_X for some X ∈ C \ A. From this we conclude that Saddle*_d(Y) is hyperbolic and we are done.
Proof of Araujo's Theorem for nonsingular flows
Let R_2, R_6, R_11 and Q be given by Lemma 2.2, Theorem 3.1, Theorem 4.1 and Theorem 5.1, respectively. Define R = R_2 ∩ R_6 ∩ R_11 ∩ Q ∩ KS. Then, R is a residual subset of three-dimensional flows. Now, take X ∈ R with card(Sink(X)) < ∞. Since X ∈ Q, we conclude from Theorem 5.1 that Saddle*_d(X) is hyperbolic. But clearly Cl(Saddle_d(X)) ⊂ Saddle*_d(X); thus Cl(Saddle_d(X)) is hyperbolic too. As X is Kupka-Smale and card(Sink(X)) < ∞, we can apply (1) to conclude that Dis(X) is hyperbolic. Since X ∈ R_2, we can apply Lemma 2.2 to obtain a finite disjoint collection of homoclinic classes H_1, ···, H_r and sinks s_1, ···, s_l satisfying
W^s_w(Dis(X)) = (⋃_{i=1}^r W^s(H_i)) ∪ (⋃_{j=1}^l W^s(s_j)).
As X ∈ R_6, we have m(W^s_w(Dis(X))) = 1 by Theorem 3.1. We conclude that
m(⋃_{i=1}^r W^s(H_i) ∪ ⋃_{j=1}^l W^s(s_j)) = 1.
Let H_{i_1}, ···, H_{i_d} be the classes in the collection whose basins have positive Lebesgue measure. As the basins of the remaining homoclinic classes in the collection H_1, ···, H_r are negligible, we can remove them from the above identity, yielding
m(⋃_{k=1}^d W^s(H_{i_k}) ∪ ⋃_{j=1}^l W^s(s_j)) = 1.
Since X ∈ R_11, we have from Theorem 4.1 that H_{i_k} is a hyperbolic attractor for every 1 ≤ k ≤ d. From this we obtain the result.
Proof of Proposition 5.3
First we introduce some basic notation. Given a three-dimensional flow X and x ∈ Saddle(X), we denote by E^s_x and E^u_x the eigenspaces corresponding to the eigenvalues of modulus less and bigger than 1 of x, respectively. We also denote N^s_x = π^X_x(E^s_x) and N^u_x = π^X_x(E^u_x) (the notations N^{s,X}_x and N^{u,X}_x will indicate dependence on X).
Notice that if p ∈ Saddle(Y) for some three-dimensional flow Y, then, with respect to the splitting N_p = N^s_p ⊕ N^u_p,
P^Y_{t_{p,Y}}(p)|_{N^s_p} = λ(p,Y)·I and P^Y_{t_{p,Y}}(p)|_{N^u_p} = μ(p,Y)·I,
where I is the identity whereas λ(p,Y) and μ(p,Y) are the eigenvalues of p satisfying |λ(p,Y)| < 1 < |μ(p,Y)|. Proposition 5.3 is clearly reduced to the following one.
Proposition 7.1. For every X ∈ S(M) there are a C^1 neighborhood V of X and T > 0 such that, for every Y ∈ V, the splitting N^s ⊕ N^u over Saddle_d(Y) is T-dominated. Its proof is based on two lemmas.
Lemma 7.2. For every X ∈ S(M) there are a neighborhood U_X of X and 0 < λ < 1 such that |λ(p,Y)| < λ^{t_{p,Y}} for every Y ∈ U_X and every p ∈ Saddle_d(Y).
Proof. Since X ∈ S(M), card(Sink(X)) < ∞ and we can fix a neighborhood W(X) of X such that
(7) card(Sink(Z)) = card(Sink(X)), ∀Z ∈ W(X).
Applying the Franks's Lemma for flows with T = 1 we obtain a neighborhood W_0(X) ⊂ W(X) and ε > 0. Set λ = 1 − δ, where δ > 0 is small enough that δ · sup{‖P^Z_s(x)‖ : Z ∈ W_0(X), x ∈ M, 0 ≤ s ≤ 1} ≤ ε. We claim that U_X = W_0(X) and this λ satisfy the conclusion of the lemma. Indeed, suppose by contradiction that this is not true. Then we can arrange Y ∈ W_0(X) and p ∈ Saddle_d(Y) such that (1 − δ)^{t_{p,Y}} ≤ |λ| (for simplicity we write λ = λ(p,Y) and μ = μ(p,Y)). Since p ∈ Saddle_d(Y) we also have |λμ| < 1, thus |μ| = |λ|^{−1}|λμ| < (1 − δ)^{−t_{p,Y}}. Now write t_{p,Y} = n + r where n is the integer part of t_{p,Y} and 0 ≤ r < 1. Consider the partition t_i = i for 0 ≤ i ≤ n and t_{n+1} = t_{p,Y}. It follows that t_{i+1} − t_i ≤ 1 for every 0 ≤ i ≤ n. Consider a small tubular neighborhood U of O_Y(p), disjoint from Sink(Y). Define the maps L_i : N_{Y_{t_i}(p)} → N_{Y_{t_{i+1}}(p)} by L_i = (1 − δ)^{t_{i+1}−t_i} P^Y_{t_{i+1}−t_i}(Y_{t_i}(p)). The choice of δ implies ‖L_i − P^Y_{t_{i+1}−t_i}(Y_{t_i}(p))‖ ≤ ε. By Franks's Lemma for flows, there exists Z ∈ W(X) such that Z = Y along O_Y(p) and outside U satisfying P^Z_{t_{i+1}−t_i}(Z_{t_i}(p)) = L_i for every 0 ≤ i ≤ n. This implies that P^Z_{t_{p,Y}}(p) = (1 − δ)^{t_{p,Y}} P^Y_{t_{p,Y}}(p), and thus the eigenvalues of P^Z_{t_{p,Y}}(p) are (1 − δ)^{t_{p,Y}} λ (of modulus less than 1) and (1 − δ)^{t_{p,Y}} μ (of modulus less than 1 too). Therefore, p ∈ Sink(Z). Since all sinks of Y are located outside U, they are also sinks for Z. This implies that card(Sink(Z)) > card(Sink(Y)) = card(Sink(X)). Since Z ∈ W(X) we obtain a contradiction from (7), proving the result.
The orthogonal complement of a linear subspace E of ℝ² is denoted by E^⊥. The angle between linear subspaces E, F of ℝ² is defined by angle(E, F) = ‖L‖, where L : E → E^⊥ is the linear operator satisfying F = {u + L(u) : u ∈ E}.
Lemma 7.3. For every X ∈ S(M) there exist a neighborhood U of X and α > 0 such that angle(N^{s,Y}_p, N^{u,Y}_p) ≥ α for every Y ∈ U and every p ∈ Saddle_d(Y).
Proof. Let U_X and 0 < λ < 1 be given by Lemma 7.2. Since X ∈ S(M) we can also assume that card(Sink(Z)) = card(Sink(X)) for every Z ∈ U_X. Put W(X) = U_X in the Franks's Lemma for flows with T = 1 to obtain the neighborhood W_0(X) ⊂ W(X) of X and ε > 0. Set α > 0 small, depending only on ε and λ. We claim that U = W_0(X) and α as above satisfy the conclusion of the lemma. Indeed, suppose by contradiction that this is not true. Then, there are (p, Y) ∈ Saddle_d(Y) × W_0(X) satisfying angle(N^{s,Y}_p, N^{u,Y}_p) < α. Clearly we can fix a tubular neighborhood U of O_Y(p) disjoint from Sink(Y).
To simplify we write γ = angle(N^{s,Y}_p, N^{u,Y}_p), so 0 < γ < α. With respect to the orthogonal splitting N^Y_p = N^{s,Y}_p ⊕ [N^{s,Y}_p]^⊥ one has the matrix expression
P^Y_{t_{p,Y}}(p) = ( λ(p,Y)   (μ(p,Y) − λ(p,Y))/γ ; 0   μ(p,Y) ).
Now write t_{p,Y} = n + r where n is the integer part of t_{p,Y} and 0 ≤ r < 1. Consider the partition t_i = i for 0 ≤ i ≤ n and t_{n+1} = t_{p,Y}. Clearly t_{i+1} − t_i ≤ 1 for every 0 ≤ i ≤ n.
Define the sequence of linear maps L_i : N_{Y_{t_i}(p)} → N_{Y_{t_{i+1}}(p)} as ε-perturbations of the maps P^Y_{t_{i+1}−t_i}(Y_{t_i}(p)). By Franks's Lemma for flows, there exists Z ∈ W(X) such that Z = Y along O_Y(p) and outside U satisfying P^Z_{t_{i+1}−t_i}(Z_{t_i}(p)) = L_i for every 0 ≤ i ≤ n. This implies that P^Z_{t_{p,Z}}(p) is the composition of the L_i. But now a direct computation using (8) and (9) implies that P^Z_{t_{p,Z}}(p) is traceless and det P^Z_{t_{p,Z}}(p) = λ(p,Y)μ(p,Y). As p ∈ Saddle_d(Y), we have |λ(p,Y)μ(p,Y)| < 1; thus P^Z_{t_{p,Z}}(p) has a pair of complex eigenvalues of modulus less than 1. Therefore, p ∈ Sink(Z). Since all sinks of Y are located outside U, they are also sinks for Z. This implies that card(Sink(Z)) > card(Sink(Y)) = card(Sink(X)), a contradiction which ends the proof.
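For concreteness, the angle defined before Lemma 7.3 can be computed directly from its definition; in the simplest (illustrative) configuration, with E taken as the first coordinate axis of ℝ²:

```latex
\[
E=\mathbb{R}\times\{0\},\qquad
F=\operatorname{span}\{(1,a)\},\qquad
L(u,0)=(0,au)
\;\Longrightarrow\;
\operatorname{angle}(E,F)=\|L\|=|a| .
\]
```

In particular, a small angle between N^{s}_p and N^{u}_p means the two eigendirections of the linear Poincaré map are nearly parallel, which is the degeneracy that the perturbation above converts into a pair of complex eigenvalues.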
Proof of Proposition 7.1. It suffices to prove that there exists T > 0 such that, for every Y close to X and every p ∈ Saddle_d(Y), there exists 0 ≤ t ≤ T at which the domination inequality of Proposition 7.1 holds along the orbit of p. Otherwise, for every T > 0 there exist a flow Y, as close as we want to X, and a periodic point p ∈ Saddle_d(Y) for which it fails for every 0 ≤ t ≤ T. We can assume that t_{p,Y} → ∞ as Y → X. Indeed, by Lemma 7.2 we have |λ(p,Y)| < λ^{t_{p,Y}}, and thus there exists k_0, which depends only upon λ, such that the failure above forces t_{p,Y} ≥ T k_0; since T is large, t_{p,Y} is also large. Let U_X and 0 < λ < 1 be given by Lemma 7.2. Let U and α be given by Lemma 7.3. Without loss of generality we can assume that U = U_X. Put W(X) = U in the Franks's Lemma for flows with T = 1 to obtain the neighborhood W_0(X) ⊂ W(X) and ε > 0. Set C = sup{‖P^Y_s(x)‖ : Y ∈ W_0(X), x ∈ M, 0 ≤ s ≤ 1}. Choose ε_0 > 0, ε_1 > 0 and m ∈ ℕ⁺ satisfying
(11) (2ε_0 + ε_0²)C ≤ ε, (1 + ε_1)λ < 1 and ε_1 < (α/(1+α)) ε_0,
together with the smallness condition (12). Taking Y close to X we can assume that t_{p,Y} ≥ m. Applying Lemma 7.2, the last two inequalities in (11) and Lemma II.10 in [18] we obtain linear maps P and S such that
(15) ‖P − I‖ < ε_0 and ‖S − I‖ < ε_0.
Set τ = t_{p,Y} and write τ = n + r, where n is the integer part of τ and 0 ≤ r < 1. Define the partition t_i = i for 0 ≤ i ≤ n and t_{n+1} = τ.
Define the linear maps L_j : N_{Y_{t_j}(p)} → N_{Y_{t_{j+1}}(p)}, the last one being built from P^Y_r(Y_{t_n}(p)). We can use the first inequality in (11), (15) and (16) as in [23] to prove that ‖L_j − P^Y_{t_{j+1}−t_j}(Y_{t_j}(p))‖ ≤ ε for every j. Since Y ∈ W_0(X), we can apply the Franks's Lemma for flows to obtain a three-dimensional flow Z with
(17) Z ∈ W(X),
such that Z = Y along O_Y(p) (thus p is a periodic point of Z with t_{p,Z} = τ) and P^Z_{t_{j+1}−t_j}(Z_{t_j}(p)) = L_j for 0 ≤ j ≤ n + 1. It follows that P^Z_{t_{p,Z}}(p) = ∏_{j=0}^{n+1} L_j.
"year": 2016,
"sha1": "ed309a025773130913cb6ce925806a4a184629c5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "ed309a025773130913cb6ce925806a4a184629c5",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
SYNTHESIS, STRUCTURE AND PHOTOLUMINESCENT PROPERTY OF A CADMIUM COMPLEX WITH 2,4-DICHLOROPHENOXYACETIC ACID
A new complex [Cd(2,4-D)2(Im)2(H2O)2] with 2,4-dichlorophenoxyacetic acid (2,4-DH) and imidazole (Im) as ligands has been synthesized and characterized by elemental analysis, IR spectrometry and single-crystal X-ray diffraction. The Cd(II) atom is coordinated in a distorted octahedral geometry, which is defined by two O atoms from two monodentate 2,4-dichlorophenoxyacetate ligands, two N atoms from two imidazole ligands and two water molecules. Intermolecular O-H···O hydrogen bonds form chains, which are connected by N-H···O hydrogen bonds, generating a sheet structure. The molecules are further assembled by π···π stacking interactions to form a two dimensional supramolecular network.
INTRODUCTION
Generally, the design or selection of suitable ligands having certain features, such as functionality and versatile binding modes, plays an important role in determining the topologies and properties of carboxylate complexes.[2] In the past few years, extensive studies in this field have been focused on the utilization of rigid aromatic carboxylic acids as ligands. Comparatively, flexible carboxylic ligands have not been studied much, which is probably due to their varied geometries and conformations; this often makes it difficult to forecast and control the final structures of the expected products.[3] To better understand the coordination chemistry of flexible carboxylic ligands and prepare desired complexes with predictable structures and properties, it is necessary to exploit flexible carboxylic acid ligands. 2,4-Dichlorophenoxyacetic acid (2,4-DH) is a member of the arylcarboxylic acid family, and compounds belonging to it are commonly used as conventional fungicides, plant growth regulators and agricultural herbicides.[4] In recent years, an increasing interest can be observed in 2,4-DH coordination compounds, including complexes of transition metals such as Zn(II),[5] Cd(II),[6,7] Cu(II),[8] Mn(II),[9] Co(II)[10] and Ni(II),[11] alkaline-earth metals such as Ca(II),[12] and lanthanides like Gd[13] and Eu.[14] Several mixed-ligand coordination compounds with 2,4-DH and N-donor ligands, such as phenanthroline,[15] pyrazole,[16] pyridine,[17] pyrazine,[18] imidazole[19] and bipyridine,[20] are also known, and the results reveal that 2,4-D exhibits versatile binding and coordination modes. These interesting results inspire us to explore the related mixed-ligand coordination systems and to clarify their intrinsic assembly rules. Therefore, we describe here the synthesis, crystal structure and luminescent property of a new Cadmium(II) complex, [Cd(2,4-D)2(Im)2(H2O)2].
Materials and Physical Measurements
All reagents used in the synthesis were of analytical grade. Elemental analyses for C, H, and N were performed on a Vario EL III elemental analyzer. The infrared spectra (4000–400 cm-1) were recorded as KBr pellets on a Nicolet 170SX FT-IR spectrometer. The luminescence spectrum was obtained with a RF-5301PC fluorescence spectrometer. The crystal structure determination was performed on a Bruker Smart Apex CCD area-detector diffractometer equipped with graphite-monochromatized Mo Kα radiation (λ = 0.71073 Å).
X-ray data collection and structure refinement
Single-crystal data were collected at 298(2) K. A summary of the crystallographic data is given in Table 1. Selected bond distances and angles are given in Table 2.
Infrared spectrum
The IR spectrum of 1 clearly shows both the presence of 2,4-D and coordinated imidazole. The band with maximum at 3345 cm-1, characteristic of ν(OH) vibrations, confirms the presence of water molecules in the complex. The band at 1734 cm-1 originating from the RCOOH group, present in the acid spectrum, is replaced in the spectrum of the complex by two bands at 1589 cm-1 and 1358 cm-1, which can be ascribed to the asymmetric and symmetric vibrations of the COO- group, respectively. The difference between ν_as(COO) and ν_s(COO) is 231 cm-1. According to Kazuo Nakamoto,[23] we can infer that the carboxylate groups in 1 adopt monodentate coordination modes. The strong C-O stretching vibration at about 1240 cm-1 in the spectrum of 1 suggests that the oxygen atom of the phenoxy group may not coordinate to the metal centers. These findings are in accordance with the results of the X-ray diffraction analysis.
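The monodentate assignment rests on the magnitude of the carboxylate band separation:

```latex
\[
\Delta\nu \;=\; \nu_{as}(\mathrm{COO}^-) - \nu_{s}(\mathrm{COO}^-)
\;=\; 1589 - 1358 \;=\; 231\ \mathrm{cm}^{-1} .
\]
```

In Nakamoto's empirical classification, Δν values of this magnitude (well above those of typical bridging or chelating carboxylates) generally point to monodentate coordination, consistent with the X-ray result.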
Crystal structure description
As shown in Fig. 1, the Cd(II) atom is coordinated by two oxygen atoms from two 2,4-D ligands and two nitrogen atoms of two imidazole molecules in the equatorial positions, and by two oxygen atoms of water molecules in the axial positions, forming a CdO4N2 octahedral geometry. The Cd-N distance is 2.238(2) Å, which is shorter than the corresponding values in [Cd2(2,4-D)4(phen)2] (phen = 1,10-phenanthroline),[7] [Cd(2,4-D)2(bib)]n (bib = 1,4-bis(imidazol-1-yl)butane)[24] and [Cd(2,4-D)2(4,4'-bipy)]n (4,4'-bipy = 4,4'-bipyridine).[25,27] The bond angles around the Cd(II) center deviate from the ideal octahedral values, consistent with the distorted octahedral geometry. It should be noted that the oxyacetate group is clearly twisted out of the plane of the benzene ring; the C3-O3-C2-C1 torsion angle is -67.9(3)°. This indicates the remarkable conformational flexibility of 2,4-DH as compared with rigid aromatic carboxylic acids such as terephthalate.[28] Moreover, the characteristic C-O carboxylate bond lengths (Table 2) suggest electron delocalization over the carboxylate groups of the anionic ligands. There are two types of intermolecular hydrogen bonds between neighboring molecules (Table 3). An intermolecular O-H···O hydrogen bond is formed between the uncoordinated carboxylate O atom and the O atom of the coordinated water molecule of the adjacent molecule, giving rise to a hydrogen-bonded chain along the c axis. The other intermolecular hydrogen bond involves the uncoordinated carboxylate O atom and the N atom of the imidazole molecule, leading to an N-H···O hydrogen-bonded chain along the b axis (Fig. 2). The chains crosslink each other and are further assembled through π···π stacking interactions between imidazole rings, with a centroid-to-centroid distance of 3.6235 Å, resulting in a two-dimensional supramolecular network (Fig. 3). We have previously reported the structure of [Cu2(2,4-D)2(Im)4]n(NO3)2n.
[19] In the coordination polymer [Cu2(2,4-D)2(Im)4]n(NO3)2n, the Cu(II) atoms are bridged by 2,4-dichlorophenoxyacetate ligands, leading to a one-dimensional chain. The replacement of Cu(II) by Cd(II) results in a quite different structure of 1. These differences reveal that the metal ions play a crucial role in structural assembly.
Luminescent property
It is known that Cd(II) complexes can have high photoluminescence quantum yields, and such complexes have potential applications in electroluminescent devices.[29] Fig. 4 shows the excitation and emission spectra of the title complex in the solid state at room temperature. When the excitation wavelength is 280 nm, the title complex exhibits an intense photoluminescence with the maximum emission wavelength at 455 nm. By comparison, under similar conditions, emission peaks at about 392 nm and 490 nm were observed for free imidazole and 2,4-DH, respectively. The emissions arising from the free ligands are not observed in the spectrum of compound 1. The absence of ligand-based emission suggests energy transfer from the ligands to the Cd(II) atoms during photoluminescence. Therefore, the photoluminescence can probably be assigned to a ligand-to-metal charge-transfer (LMCT) transition.[30]
Figure 4. View of the excitation and emission spectra for 1 in the solid state at room temperature.
CONCLUSIONS
A novel Cadmium(II) complex, [Cd(2,4-D)2(Im)2(H2O)2], has been hydrothermally synthesized and structurally characterized. The complex crystallizes in the monoclinic system with space group P21/c. Two kinds of intermolecular hydrogen bonds and π···π interactions stabilize the crystal structure. This work reinforces the view that weak intermolecular interactions play an important role in defining the overall supramolecular architecture.
SUPPLEMENTARY MATERIAL Crystallographic data (cif) have been deposited with the Cambridge Structural Data Centre (CCDC) with reference number 880381.See http:// www.ccdc.cam.ac.uk/conts/retrieving.html for crystallographic data in cif or other electronic format.Copies of the data can be obtained, free of charge, on application to CCDC, 12 Union Road, Cambridge CB2 1EZ, UK [fax: 44(0)-1223-336033 or E-mail: deposit@ccdc.cam.ac.uk].
Figure 1. Molecular structure of the title complex, with hydrogen atoms omitted for clarity.
Figure 2. View of the chain structure in 1 formed by N-H···O hydrogen bonds. H atoms not involved in hydrogen bonding have been omitted.
Figure 3
Figure 3 Packing diagram of 1. Hydrogen bonds are indicated by black dashed lines.Only the carboxyl groups of 2,4-D ligands are kept for clarity.
Table 1: Crystal data and structure refinement details for 1.
Table 3: Hydrogen bond geometries in the crystal structure of 1.
Façade Components Optimization of Naturally Ventilated Building in Tropical Climates through Generative Processes. Case study: Sumatera Institute of Technology (ITERA), Lampung, Indonesia
Global warming and climate change have led to a world energy crisis. Although the main cause is still debated among researchers, the carbon emissions released by the construction field are generally believed to have triggered these phenomena and to be the primary cause of global warming. It is proven that the building sector is among the biggest contributors to overheating in the environment. Based on previous research, residential and commercial buildings account for 20% to 40% of total energy consumption, and this trend is still growing. Moreover, electricity consumption in buildings and the construction process is the main driver of the growth in carbon emissions. Therefore, stakeholders, particularly architects and designers, should take early initiative to overcome these problems. One approach is to predict building performance through simulation in the initial design phase. In line with this, rapid technology development has brought an alarming situation concerning energy consumption. Passive design is considered one of the strategies to moderate indoor temperature in a tropical climate. Some studies suggest that the use of natural ventilation can potentially reduce operating costs and produce better thermal and indoor air quality. This study investigates the optimization of daylight performance conditions driven by façade components such as balcony size, orientation, openings, layout, and louvered windows, using generative and parametric tools through multi-objective optimization and generative simulation. The methodology used in this research is generative simulation in a parametric platform, Grasshopper, with the plugins Ladybug + Honeybee. From the simulation, it resulted that the preferred individuals have an illuminance value of 211 test points.
The properties owned by these individuals are a cantilever acting as a canopy at a height of 3 meters, a cantilever (canopy) length of 2 meters, a building orientation angle of 21° to the south, and openings covering 32% of the total façade surface.
Introduction
The goals of indoor environment quality rely on the design of the façade, massing, and building orientation, which potentially bring occupant well-being [1]. Thus, a proper approach should be taken, considering environmental awareness from the early phases of the architectural design process. The tremendous development in digital and computational design offers one approach to optimizing aspects of architectural design. The benefit of using building performance analysis is that architects can predict the building's performance before it is constructed [2]. Moreover, the possibility of integrating design and the pursuit of sustainability can be realized by incorporating building performance simulation in the early design stages [3]. (ICoSITeR 2019, IOP Conf. Series: Earth and Environmental Science 537 (2020) 012015, doi:10.1088/1755-1315/537/1/012015.)
The project carried out in this paper is a classroom located on the campus of the Sumatera Institute of Technology (ITERA), simulated under the microclimate conditions of the city of Bandar Lampung. The simulation is done by applying several conditions both to the classroom and to the surrounding climate. This study intends to identify the optimal solution from the application of several design parameters, especially façade components such as the height and size of the canopy and the percentage of openings on each surface of the classroom module. Furthermore, the optimization process is applied to the designated model using an optimization plugin for the parametric platform, called Octopus.
The background of this research is that generative platforms provide flexibility in the production of design solutions. Besides, compared to the classical design process, this method allows architects to map the distribution of design solutions, which cannot be produced by the usual design process. The research question is which canopy height, canopy width, building orientation, and window percentage provide a distribution of points in the room whose illuminance lies in the range of 200-300 lux, preferably 250 lux, following the comfort standards in SNI 03-6575-2001 [4]. This research hypothesizes that integrating computational processes can produce an optimized population with respect to the façade components.
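The optimization target described above, maximizing the number of test points whose illuminance falls inside the 200-300 lux comfort band, can be sketched as a simple objective function. The function and grid values below are illustrative, not part of the actual Grasshopper definition:

```python
def comfort_objective(illuminance_lux, low=200.0, high=300.0):
    """Count test points whose illuminance falls inside the comfort band.

    illuminance_lux: iterable of simulated illuminance values (lux),
    one value per test point on the analysis grid.
    """
    return sum(1 for lux in illuminance_lux if low <= lux <= high)

# Example: a 6-point grid; 3 points fall inside the 200-300 lux band.
grid = [150.0, 210.0, 250.0, 299.0, 320.0, 180.0]
print(comfort_objective(grid))  # -> 3
```

The best individual reported later scores 211 under exactly this kind of counting objective.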
Methodology
In this research, the methodology used is daylight analysis simulation using the parametric platform software Rhino Grasshopper. Honeybee and Ladybug are used as the media to convert geometry into Honeybee zones, which are then simulated through Ladybug. The methodology begins with the study of design requirements and site information. The actual data were obtained from the EnergyPlus EPW file of Bandar Lampung, which contains historical weather data of the city. The research framework can be seen in Figure 2. Once the main geometry has been decided, the next step is regulating the constraints. The constraints, which are the parametric sources, are arranged to form the main geometry based on the parameters. In this research, forming the geometry by a generative-algorithm system is described by the terminology of modelling. The main activity of this research is arranging the generative-algorithm components that have integrated functions related to the objectives and parameters. The overall generative-algorithm system (Figure 3) is made in three major phases due to the different simulations to be performed. The first phase of the algorithm constructs the main geometry, the second phase is the algorithm for structural analysis, and the third is the algorithm for daylight analysis. Finally, an algorithm is made for the optimization process to filter the design solutions from the generations of the structural and daylight analyses.
Object modelling
The simulation in this study was applied to a virtual model of a classroom with a size of 8 m x 16 m and a ceiling height of 3.5 m. This room module has been implemented several times in the construction of the ITERA campus. The proportion of length to width is 2:1, and the room is assumed to be at a height of 0.0 m, i.e., on the 1st floor. The parameters used in this model are as follows:
Bandar Lampung conditions
Bandar Lampung is the capital city of Lampung province. It has a tropical climate with an average temperature of 23-37 °C, humidity of about 60%-85%, and a maximum altitude of 700 meters above sea level. The annual data of Bandar Lampung can be seen in Figure 5. Bandar Lampung is classified in a zone with a humid climate throughout the year. Rainfall ranges between 2,257 and 2,454 mm/year, air humidity from 60% to 85%, and the average temperature from 23 to 37 °C. Wind velocity ranges from 2.78 to 3.80 knots, with the dominant direction from the west in November, the north in March, the east in June, and the south in September. For the analysis period, the hottest day of the whole year was chosen: December 22 at 13:00 [5].
Simulation Platform
The platform used for the simulation is the plugin pair Ladybug + Honeybee, which works within the Grasshopper software. Ladybug is software used for environmental simulations, developed to help designers and engineers carry out environmental analysis quickly and to integrate environmental analysis with the design process. Since it works in Grasshopper's parametric environment, the software helps justify design considerations with respect to environmental simulations. Ladybug is an open-source Grasshopper plugin that helps architects and engineers create architectural designs with environmental awareness. Ladybug imports EnergyPlus Weather files (EPW) into Grasshopper and provides various interactive 3D graphics to support the decision-making process in the early stages of design; it requires weather data for the objects to be analyzed so that the analysis results are valid [6]. Honeybee connects Grasshopper to EnergyPlus, Radiance, Daysim, and OpenStudio to build energy simulations, making these simulation features available in a parametric platform. Analysis using Ladybug + Honeybee is done after the virtual building model has been defined. The parametric modelling scheme using Ladybug + Honeybee within the overall algorithm system can be seen in Figure 7.
Results and Discussion
Another plugin integrated with Ladybug is Honeybee. Honeybee works with Ladybug to connect the Grasshopper system to external energy simulation engines for energy, daylight, and other analyses. Unlike Ladybug, Honeybee is used to define the components and physical characteristics of objects for environmental and energy simulations.
Optimization on the Octopus platform is done with elitism 0.5, mutation probability 0.2, mutation rate 0.9, crossover rate 0.8, population size 100, and maximum generation 50. In this process, four parameters are set as genes: cantilever height, cantilever length, building orientation angle, and opening percentage. The result of this process is a population field containing design solutions, each of which carries the values of these gene parameters. The target is the highest number of test points with illuminance in the range of 200 lux to 300 lux. From the results obtained, the best solution with the targeted test-point value is at the maximum (top) position on quadrant axis 3. The preferred individual has an illuminance value of 211 test points. Its properties are a cantilever acting as a canopy at a height of 3 meters, a cantilever (canopy) length of 2 meters, and a building orientation angle of 21° to the south, with openings covering 32% of the total façade surface. The distribution of the population can be seen in Figure 8. The population plot has five axes: axis 1 for orientation angle, axis 2 for the illuminance summary, axis 3 for cantilever length, axis 4 for opening percentage (represented by a color gradation from green for small openings to red for larger openings), and axis 5, represented by mesh size, for cantilever height: the smaller the mesh, the higher the cantilever position.
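The evolutionary search that Octopus performs can be sketched in a few lines, here showing only population sampling and the mutation operator with the quoted probability/rate split. The gene bounds below are illustrative assumptions (the paper does not state them), and Octopus' actual multi-objective operators differ in detail:

```python
import random

# Hypothetical gene bounds; the four genes match the ones set in Octopus:
# cantilever height/length (m), orientation angle (deg), opening percentage.
BOUNDS = {
    "cantilever_height_m": (2.0, 4.0),
    "cantilever_length_m": (0.5, 3.0),
    "orientation_deg": (0.0, 90.0),
    "opening_pct": (10.0, 60.0),
}

def random_individual(rng):
    """Sample one design (chromosome) uniformly inside the gene bounds."""
    return {g: rng.uniform(lo, hi) for g, (lo, hi) in BOUNDS.items()}

def mutate(ind, rng, probability=0.2, rate=0.9):
    """Mutation mirroring the probability/rate split quoted for Octopus:
    each gene mutates with `probability`; `rate` scales the step size.
    Mutated values are clamped back into the gene bounds."""
    out = dict(ind)
    for g, (lo, hi) in BOUNDS.items():
        if rng.random() < probability:
            step = rate * (hi - lo) * (rng.random() - 0.5)
            out[g] = min(hi, max(lo, out[g] + step))
    return out

rng = random.Random(42)
pop = [random_individual(rng) for _ in range(100)]  # population size 100
child = mutate(pop[0], rng)
```

Each individual would then be scored by the daylight simulation (number of test points in the 200-300 lux band) before selection and crossover.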
Plasmon-Induced Flexibility and Refractive Index Measurement in a Sensor Designed with a Cavity, Two Rings, Two Teeth and Two Plasmonic Waveguides
In this research, we design a plasmonic refractive index sensor and examine it numerically, using the transmission spectrum, refractive index, sensitivity, figure of merit (FOM), and quality factor Q to optimize and improve its performance. The structure of the sensor consists of two plasmonic waveguides, a cavity, two rings, and two teeth. The resonant wavelengths and refractive indices of the resonators are investigated and simulated by the finite-difference time-domain (FDTD) method, and the resulting diagrams are drawn using MATLAB software. Because the resonators are very sensitive to changes in the refractive index, changing the refractive index and the dimensions of the structure weakens or strengthens the transmission coefficient of the resonant modes. Such plasmonic sensors, with a simple frame and high optical resolution, can be used to measure refractive index in the medical, chemical, and food industries.
Introduction
Surface plasmon polaritons (SPPs) have been studied extensively recently because they confine light at nanoscale dimensions [1]. As a result of these unique features, SPPs are used in many structures such as filters [2], optical demultiplexers, bio-sensors, logic gates, etc. The metal-insulator-metal (MIM) optical waveguide is used extensively for the design of many plasmonic devices due to its ability to confine light within a small area and its compatibility with electronic platforms [37-46]. In addition, its simple design procedure makes it one of the most favorable structures. Consequently, a diversity of MIM plasmonic devices have been designed and implemented, including optical filters, sensors, couplers, slow-light devices, splitters, and all-optical switches. A good refractive index sensor needs a good sensitivity (S) and a high figure of merit (FOM). Increasing the device size usually increases the sensitivity, but larger structures have a higher full width at half maximum (FWHM), which reduces the FOM. Many approaches can be used to implement refractive index sensors, but plasmonic sensors are more suitable for integrated circuits due to their very small (nanometer) size. Recently, various types of plasmonic sensors have been designed and manufactured. Among them, plasmonic refractive index sensors require high sensitivity and resolution. Conventional plasmonic sensors consist of a MIM waveguide with a cavity. Such cavities can have a variety of geometries: tooth-shaped, disc-shaped, ring-shaped, and so on. In this paper, we propose a MIM plasmonic sensor with two rings, a cavity, and two teeth.
To simulate the sensor, the two-dimensional finite-difference time-domain (FDTD) method with a uniform mesh size of 8 nm has been used. The boundary condition in all directions is a perfectly matched layer (PML).
Structural model and theory analysis
There are many structures for designing optical sensors. These optical sensors usually include resonators and waveguides. A waveguide of any geometric shape can transmit waves and confine their energy in one or two dimensions. The proposed structure is shown in Figure 1 and includes two waveguides, a cavity, two rings, and two teeth. The input wave enters the structure from the left waveguide and, after passing through it, exits via the output waveguide. The width of the two waveguides is w1 = 50 nm. The middle ring is located between the two waveguides, has an inner radius of r1 = 90 nm and an outer radius of R1 = 133 nm, and sits at a distance of 19 nm from the two waveguides. The two teeth are connected to the middle ring and have a length of 40 nm and a height of 20 nm. A second ring is located at the bottom of the right waveguide, with an inner radius of r2 = 91 nm and an outer radius of R2 = 126 nm. The cavity has a length of L = 80 nm and a height of W2 = 200 nm. The lower ring is attached to the waveguide and the cavity, and the distance from the cavity to the waveguide is 55 nm. Pin and Pout are the monitors for measuring the input and output waves, respectively, and the transmission is calculated by T = Pout / Pin.
As shown in the 2D image, the green and white areas represent silver and air, respectively. The air permittivity is set to ε = 1, and the silver permittivity is described by the Drude model as follows:

ε(ω) = ε∞ − ωp² / (ω² + iγω)   (1)

Here, ε∞ gives the medium constant at infinite frequency, ωp refers to the bulk plasma frequency, γ is the damping frequency of the electron oscillations, and ω is the angular frequency of the incident light. The parameters for silver are ε∞ = 1, ωp = 1.37 × 10^16 Hz, and γ = 3.21 × 10^13 Hz. Only the TM mode is supported in the structure. According to Figure 1, the TM wave, which excites the SPP waves, starts propagating from the left waveguide, and its intensity decreases as it approaches the output port. After distributing the field at the resonant frequency of the simulated structure, each resonator reflects a certain amount of the input wave.
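A quick numerical check of the Drude permittivity with the quoted silver parameters can be sketched as follows (treating the quoted frequencies as angular frequencies; the 800 nm evaluation point is illustrative):

```python
import math

# Drude parameters for silver, as given in the text.
EPS_INF = 1.0       # permittivity at infinite frequency
OMEGA_P = 1.37e16   # bulk plasma frequency
GAMMA = 3.21e13     # electron-oscillation damping frequency

def drude_permittivity(omega):
    """Relative permittivity of silver from the Drude model (Eq. 1):
    eps(omega) = eps_inf - omega_p**2 / (omega**2 + 1j*gamma*omega)."""
    return EPS_INF - OMEGA_P**2 / (omega**2 + 1j * GAMMA * omega)

# At optical frequencies the real part is large and negative, which is
# what allows the silver/air interfaces to support surface plasmons.
omega_800nm = 2 * math.pi * 3e8 / 800e-9  # angular frequency of 800 nm light
print(drude_permittivity(omega_800nm).real < 0)  # -> True
```

This negative real permittivity contrast against air (ε = 1) is the condition for SPP confinement in the MIM channel.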
Refractive index simulation and measurement methods
The resonant behavior of the proposed structure is examined numerically and theoretically. In the numerical approach, we use the finite-difference time-domain (FDTD) simulation method with perfectly matched layer (PML) boundary conditions, because this method effectively reduces numerical reflection. The uniform mesh size is 8 nm. First, to measure the performance of the sensor and increase its quality, we change its refractive index. This is done in the wavelength range of 400 to 1500 nm, and the refractive index of the middle ring is changed in steps of 0.01 from 1.15 to 1.20. An electromagnetic field is generated by the excitation of the sensing element using light, producing SPs concentrated on the metal surface. The refractive index of the MIM structure changes when the material under test contacts the sensor. SPs are very sensitive to changes in the refractive index in the vicinity of the surface. The reason we changed only the refractive index of one ring, while the refractive indices of the other resonators remain the same, is to achieve a better result and a stronger sensor design. The transmission spectrum of the sensor device is shown in Figure 2.
After comparing the wavelengths using the refractive index change and plotting the transmission spectrum, we must obtain three criteria: the sensitivity S, the figure of merit FOM, and the quality factor Q.
Conclusion
In this paper, a very high-resolution refractive index optical sensor is presented, based on plasmonic metal-insulator-metal waveguides. The structure is numerically simulated using the finite-difference time-domain method. The proposed structure consists of two plasmonic waveguides, a cavity, two rings, and two teeth. This sensor provides a sensitivity of 2359 nm/RIU and a maximum figure of merit (FOM) of 15.0316 RIU−1. Due to its high resolution, this sensor can easily resolve a change of 0.01 in the analyte refractive index over the range 1.15-1.2. The quality factor Q diagram is shown in Figure 5. According to the figure, the highest quality factor Q is obtained for refractive index n = 1.17 (in mode 1), equal to 16.56, and the lowest for n = 1.19 (in mode 2), equal to 7.379. These three factors (sensitivity S, FOM, and quality factor Q) and their numerical values show that this sensor has good performance and quality, with a higher sensitivity compared to similar works.
Figure 1: Two-dimensional image of the plasmonic sensor.
Figure 2: Transmission spectra of the plasmonic refractive index sensor.

To define the standard and development process of optical refractive index sensors, three criteria are used. The sensitivity S is the ratio of the change in the sensor's output wavelength to the change in refractive index:

S = Δλ / Δn (nm/RIU)   (2)

The plasmonic sensitivity diagram is shown in Figure 3; according to the figure, the highest sensitivity is for refractive index n = 1.2 (in mode 2), equal to 2359 nm/RIU, and the lowest is for n = 1.16 (in mode 1), equal to 314.1 nm/RIU. The next criterion is the figure of merit (FOM), which relates the sensitivity S_RI to the width of the resonance curve (FWHM) and determines how accurately the resonance minimum can be measured:

FOM = S_RI / FWHM   (3)

The FOM diagram is shown in Figure 4; according to the figure, the highest value is for refractive index n = 1.18 (in mode 1), equal to 15.03, and the lowest is for refractive index n = 1.12 (in mode 1), equal to 7.354. The last criterion is the quality factor Q, which is obtained from:

Q = λres / FWHM   (4)

Figure 5: Quality factor diagram of the Q plasmonic sensor.
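The three criteria above (Eqs. 2-4) are simple ratios and can be computed directly from a pair of resonance measurements. The resonance wavelengths and FWHM used below are illustrative, not the paper's raw data:

```python
def sensitivity(lam1, lam2, n1, n2):
    """S = d(lambda_res)/dn in nm/RIU (Eq. 2)."""
    return (lam2 - lam1) / (n2 - n1)

def figure_of_merit(s, fwhm):
    """FOM = S / FWHM in RIU^-1 (Eq. 3)."""
    return s / fwhm

def quality_factor(lam_res, fwhm):
    """Q = lambda_res / FWHM (Eq. 4), dimensionless."""
    return lam_res / fwhm

# Illustrative values: resonance shifts from 1000 nm to 1023.6 nm when the
# refractive index changes from 1.15 to 1.16, with an assumed FWHM of 157 nm.
s = sensitivity(1000.0, 1023.6, 1.15, 1.16)
print(round(s, 1))                           # -> 2360.0 nm/RIU
print(round(figure_of_merit(s, 157.0), 2))   # FOM in RIU^-1
print(round(quality_factor(1023.6, 157.0), 2))
```

Note the trade-off visible in these formulas: a larger resonance shift raises S, but a broader resonance (larger FWHM) lowers both FOM and Q.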
Continental drift shifts tropical rainfall by altering radiation and ocean heat transport
Shifts in the position of the intertropical convergence zone (ITCZ) have great importance for weather, climate, and society. The ITCZ shifts have been extensively studied in current and future warmer climate; however, little is known for its migration in the past on geological time scales. Using an ensemble of climate simulations over the past 540 million years, we show that ITCZ migrations are controlled primarily by continental configuration through two competing pathways: hemispheric radiation asymmetry and cross-equatorial ocean heat transport. The hemispheric asymmetry of absorbed solar radiation is produced mainly by land-ocean albedo contrast, which can be predicted using only the landmass distribution. The cross-equatorial ocean heat transport is strongly associated with the hemispheric asymmetry of surface wind stress, which is, in turn, controlled by the hemispheric asymmetry of ocean surface area. These results allow the influence of continental evolution on global ocean-atmosphere circulations to be understood through simple mechanisms that depend primarily on the latitudinal distribution of land.
INTRODUCTION
A narrow band of rainfall in the tropics, known as the intertropical convergence zone (ITCZ), encircles Earth's equator all year round. The ITCZ accounts for about one-third of global precipitation [e.g., (1)], and its annual mean latitude (ϕ ITCZ ) is about 4°N in the current climate (2,3). The ITCZ plays an important role in Earth's weather and climate; for example, it modifies Earth's radiative balance and sensitivity to climate forcings (4). The variability of ϕ ITCZ may notably affect the seasonal activity of tropical cyclones [e.g., (5)], and the persistent multiyear Sahel drought over the past century may be associated with decadal variability of the Atlantic ITCZ (6,7).
The ITCZ is traditionally thought of as a tropical system that is mainly controlled by the distribution of tropical sea surface temperature [SST; e.g., (8)]. However, recent studies view the ITCZ as a meteorological equator that separates the atmospheric meridional mean circulations of the two hemispheres, closely linking ϕ ITCZ and cross-equatorial atmospheric energy transport [e.g., (3,9)]. This has provided greater recognition and understanding of how the position of the ITCZ may respond to a variety of both short- and long-term climate forcings and variabilities, such as glaciations (10), ocean heat transport variations (11), cloud changes (12), and aerosol and volcanic forcing (13)(14)(15). These advances have allowed the ITCZ to be used as an active indicator of various hemispheric asymmetries, especially for the large variations in climate that have occurred over thousands to many millions of years in the past [e.g., (2)].
Paleoclimate evidence has shown considerable migrations of the ITCZ that can be linked with externally forced climate changes or internal climate variability [e.g., (2,3)]. For example, paleo records indicate that the ITCZ migrated northward through the early to mid-Holocene (8 to 6 ka ago) and retreated equatorward from the mid- to late Holocene (6 ka ago to present) according to enhanced interhemispheric precession-induced insolation asymmetries (16,17). During the last glacial period (~20 ka ago), clear millennial variation of ϕ ITCZ associated with Heinrich stadials is seen in paleoclimate proxies (18,19). Over longer time scales, we lack a clearly successful proxy for ITCZ position, but hints from paleoclimate simulations indicate substantial variations of the ITCZ [e.g., (20)(21)(22)]. Migration of the ITCZ in paleoclimate states is an interesting and important phenomenon unto itself, but it moreover has substantial implications because the ITCZ interacts with other components of the Earth system. For example, ITCZ position affects the location of equatorial cold water upwelling, which is crucial for ocean ecosystems and primary production [e.g., (23)]. The ITCZ position may also affect the chemical silicate-carbonate weathering rate over continents, which is critical in controlling Earth's CO 2 concentration on multimillion-year time scales [e.g., (24)].
This study has two goals. The first is to systematically examine the migration of the ITCZ position since the beginning of the Phanerozoic with a series of time-slice simulations, which has not been done before. The simulations are performed with a comprehensive Earth system model using a synoptic scale-resolving resolution, from 540 Ma to the preindustrial (PI) period with a time interval of 10 Ma (fig. S1; see Materials and Methods for more details) (25). The results depict ITCZ variability over geological time scales: How far poleward did the ITCZ migrate, and what was the variability of ITCZ latitude? This ensemble of simulations over the past 540 Ma also details the characteristics of the tropical rainfall climatology in each geological period, which is valuable for paleoclimate, paleoecology, and paleogeology studies. The second goal, which is a major focus here, is to unveil the mechanism that drives ITCZ migration on geological time scales. On these time scales, the main climate forcings external to the atmosphere-ocean system are changes of insolation, variations of greenhouse gases (mainly CO 2 ), and the evolution of continental configuration. In our experimental setting, the first two factors are highly hemispherically symmetric. Continental configuration, however, provides spatially inhomogeneous boundary conditions for geophysical fluid (ocean and atmosphere) motions that redistribute heat and greatly shape global and regional climates. We will show that migration of the ITCZ over geological time scales is largely driven by the continental evolution through two main pathways: the hemispheric asymmetry of radiation and ocean heat transport. Our results provide an atlas of paleoclimate states, deepen our understanding of ITCZ dynamics, and reveal an important relationship between paleoclimate and tectonic motion.
Migration of the ITCZ
Annual-mean precipitation distributions of four representative periods are shown in Fig. 1 (continent configurations and precipitation distributions of all periods are in fig. S2 and movie S1). At 540 Ma, the supercontinent Pannotia had broken into Gondwana and several smaller continents (Fig. 1A). From 540 to 430 Ma, the smaller continents drifted across the equator, while the northern hemisphere (NH) rain belt extended longitudinally and the southern hemisphere (SH) rain belt contracted. Then, the continents slowly reassembled and formed the single supercontinent Pangea around 250 Ma (Fig. 1B). Beginning 170 Ma, Pangea broke up and drifted toward the North Pole. At around 80 Ma, the continental distribution was most fractured (Fig. 1C), partly because of tectonic motions and sea level rise. Then, the continents slowly evolved to today's configuration (Fig. 1D), featured by the Atlantic expansion, the assembly of Eurasia, and the equatorward drift of Australia. The geographic distribution of tropical precipitation changed markedly with continental evolution. For example, the supercontinent Pangea was home to the intense rainfall zone on its eastern side, known as the Megamonsoon (20,26); the formation of the South Pacific Convergence Zone accompanied the continental drift of Australia. However, in all periods, the global distribution of precipitation featured a broad tropical rain belt with a peak in each hemisphere ( fig. S2), which is the subject of this study: the ITCZ.
Here, we focus on migrations of the annual- and zonal-mean ITCZ while leaving other features of the ITCZ, such as its seasonality, for future study. We quantify the ITCZ position using three quantities (see Materials and Methods): the latitude of the tropical precipitation centroid (ϕ ITCZ ), defined as the area-weighted mean latitude of zonal-mean precipitation from 20°S to 20°N [e.g., (27)]; the latitude of the tropical precipitation peak (ϕ pp ), defined as the latitude of the zonal-mean tropical precipitation maximum; and the precipitation asymmetry index (PAI), which quantifies the hemispheric asymmetry of tropical precipitation [e.g., (28,29)]. Time series of these three parameters are shown in Fig. 2A, and the corresponding zonal-mean precipitation climatologies are shown in fig. S3. The centroid, ϕ ITCZ , shows relatively weak variability (ranging from 3°S to 4°N); the peak, ϕ pp , shows large variability (9°S to 9°N) and sudden jumps between hemispheres; and the asymmetry index, PAI, varies from −0.3 (i.e., SH tropical precipitation is 30% stronger than NH tropical precipitation) to 0.36. Highlighting different aspects of the ITCZ, the three parameters are highly correlated. In the following, we mainly focus on ϕ ITCZ for simplicity, keeping in mind that migrations of the ITCZ accompany systematic changes of atmospheric and oceanic circulations.
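The precipitation centroid defined above, the area-weighted mean latitude of zonal-mean precipitation between 20°S and 20°N, can be sketched as follows; the single-peaked precipitation profile is synthetic and purely illustrative:

```python
import math

def precipitation_centroid(lats_deg, precip):
    """Area-weighted mean latitude of zonal-mean tropical precipitation.

    lats_deg: latitudes (degrees) restricted to the 20S-20N band.
    precip: zonal-mean precipitation at those latitudes.
    The cos(lat) factor accounts for the area of each latitude band.
    """
    w = [p * math.cos(math.radians(lat)) for lat, p in zip(lats_deg, precip)]
    return sum(lat * wi for lat, wi in zip(lats_deg, w)) / sum(w)

# Synthetic single-peak profile centered slightly north of the equator:
lats = [l * 0.5 for l in range(-40, 41)]  # -20..20 deg, 0.5 deg spacing
precip = [math.exp(-((lat - 4.0) / 6.0) ** 2) for lat in lats]
print(round(precipitation_centroid(lats, precip), 1))  # -> 4.0
```

For a double-peaked tropical rain belt, the centroid sits between the two peaks, which is why ϕ ITCZ varies more smoothly than ϕ pp, whose value can jump between hemispheres.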
Aiming for a causal explanation of migrations of the ITCZ, we avoid linking the ITCZ position to internal climate variables, such as SST, because, then, one would need to answer the equally difficult question of what causes the SST changes in the simulations. Instead, we assess how ITCZ migrations are caused by changes in external parameters of the atmosphere-ocean climate system, such as concentrations of CO 2 , insolation, and continental evolution (see Materials and Methods for how these are specified in the simulations).
To achieve our goal, we adopt an energetic framework [e.g., (3,9)] to examine migrations of the ITCZ. Recognizing that the ITCZ is part of the rising branch of the time-mean tropical atmospheric meridional overturning circulation, this framework posits that ϕ ITCZ collocates with the latitude of the energy flux equator (ϕ EFE ), which is the tropical latitude with zero vertically integrated meridional atmospheric energy transport. If one applies a linear approximation to the meridional distribution of atmospheric energy transport near the equator and assumes the slope to be nearly invariant with climate state, then −ϕ EFE is proportional to the cross-equatorial atmospheric energy transport (F atm , positive values of which indicate northward transport). This energetic framework has been used to explain ITCZ variability from seasonal to millennial time scales [e.g., (2,3,27)]. Our simulations confirm the close correlation among ϕ ITCZ , ϕ EFE , and F atm (Fig. 2, A and B, and fig. S4). The correlation between ϕ ITCZ and ϕ EFE is 0.89, and that between ϕ ITCZ and −F atm is 0.89. The sensitivity of ϕ ITCZ to F atm is −3.3° PW −1 (Fig. 2D), which agrees well with that found over the seasonal cycle and in externally forced annual averages in observations and coupled climate models (27). Furthermore, considering the energy balance of each hemisphere (fig. S5 and Materials and Methods), we may separate F atm and, thus, ϕ ITCZ into components associated with cross-equatorial ocean heat transport and the hemispheric asymmetry of radiation:

F atm = −F ocn − δR/2   (1)

Here, F ocn is the ocean heat transport across the equator (positive denotes northward), and δR is the hemispheric asymmetry (NH minus SH) of net radiative energy input at the top of the atmosphere (TOA). Equation 1 states that either a northward cross-equatorial ocean heat transport (F ocn ) or a positive net radiative heating asymmetry (δR) favors a NH ITCZ. Now, we examine the time series of F ocn and δR (Fig. 2C).
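A minimal sketch of this decomposition, assuming the hemispheric balance takes the form F_atm = −F_ocn − δR/2 (which reproduces the stated sign conventions: northward F_ocn or positive δR shifts the ITCZ north) and using the quoted sensitivity of −3.3° PW⁻¹; the sample transport values are illustrative:

```python
def cross_equatorial_atm_transport(f_ocn_pw, delta_r_pw):
    """Hemispheric energy balance: F_atm = -F_ocn - delta_R / 2.

    f_ocn_pw: northward cross-equatorial ocean heat transport (PW).
    delta_r_pw: NH-minus-SH hemispherically integrated net TOA
    radiative input (PW).
    """
    return -f_ocn_pw - 0.5 * delta_r_pw

def itcz_latitude(f_atm_pw, sensitivity_deg_per_pw=-3.3):
    """Linearized ITCZ position: phi_ITCZ ~ sensitivity * F_atm."""
    return sensitivity_deg_per_pw * f_atm_pw

# Illustrative: 0.4 PW northward ocean transport plus a 0.4 PW NH
# radiative excess pushes the ITCZ about 2 degrees north of the equator.
f_atm = cross_equatorial_atm_transport(0.4, 0.4)
print(round(itcz_latitude(f_atm), 2))  # -> 1.98
```

The sketch also makes the compensation mechanism concrete: if F_ocn and δR trend in opposite directions with equal strength, F_atm, and hence ϕ_ITCZ, shows no net trend.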
From 540 Ma to present, F ocn shows a clear trend from negative to positive with the sign changing around 170 Ma, superimposed with variations on shorter time scales. Conversely, δR has a negative trend of similar strength; thus, their sum F atm shows no apparent linear trend over the whole time period. With the trend removed, F ocn and δR show no apparent correlation (the correlation coefficient of the detrended time series is −0.38). Figure 2C also demonstrates that variations in F atm are not dominated by either component; rather, F ocn and δR are of equal importance and together produce the temporal variations of F atm ( fig. S6). The strong cancelation between F ocn and δR is not a coincidence. In the following, we will examine the ocean heat transport and radiation asymmetry individually and argue that they are both caused by continental configuration changes over geological time scales.
Hemispheric asymmetry of radiation
The hemispheric asymmetry of radiation, δR, may be separated into a shortwave component (δS, positive denotes inward energy) and a longwave component (δL, positive denotes outgoing energy), so that δR = δS − δL (Fig. 3, A and B). We find that δS dominates the variations in δR, with δL being much smaller. In addition, δL is anticorrelated with ϕ ITCZ (Fig. 3B), consistent with high-level clouds and water vapor in the ITCZ trapping longwave radiation. Thus, δS variations may serve as a first-order approximation of δR variations.
Because annual-mean insolation is nearly hemispherically symmetric (the difference is less than 1 W m −2 ), δS has to arise from the interhemispheric difference of planetary albedo. We hypothesize that this difference arises mainly from the contrast between the planetary albedos of land and ocean. On this basis, we construct a simple model of δS using only the landmass distribution. We simplify the planetary albedos over land and ocean as uniform (spatially and temporally over different geological periods) values of α p,lnd and α p,ocn , respectively. For each location, given its annual-mean insolation, we can calculate its local TOA net shortwave radiation depending on whether it is over land or ocean surface. Integrating over each hemisphere, we obtain the net shortwave radiation of each hemisphere (see Materials and Methods) and, thus, its hemispheric asymmetry (δS es ; the subscript denotes values estimated by the simple model).
We compare the calculation of the simple model to the simulation results to determine the coefficients α p,lnd and α p,ocn . With each possible pair of α p,lnd and α p,ocn , we calculate the corresponding δS es for each time period. The performance of the simple model may be estimated by the mean absolute difference between δS es and the δS simulated by the Earth system model (see Materials and Methods). The discrepancy between δS es and δS is minimized with a land-ocean albedo contrast of 0.1 (α p,lnd − α p,ocn = 0.1, corresponding to the white dashed line in Fig. 3E), with little constraint on the absolute values of either albedo. This suggests that land-ocean albedo contrast is the main cause of δS. We may further constrain α p,lnd and α p,ocn by comparing the simple model-estimated and simulated global mean net shortwave radiation at the TOA (Fig. 3F), yielding the optimal values of α p,lnd = 0.39 and α p,ocn = 0.29 (the asterisk in Fig. 3F). These values are close to both observed estimates in modern climate [e.g., (30)] and mean values diagnosed from all our simulations (the white triangle in Fig. 3F). Figure 3A shows that variations in δS es estimated using these optimized albedos are very close to variations in δS and δR, with correlation coefficients of 0.90 and 0.96, respectively. Given the robustness of the parameters over geological time and the accuracy of the simple model, we speculate that it may be applied in other paleoclimate periods.
The dominant role of land-ocean albedo contrast in controlling δS is further demonstrated by additional lines of evidence. First, we note that the values of α p,lnd and α p,ocn obtained above represent planetary albedos (estimated at TOA), including effects of the atmosphere and surface. The atmosphere affects planetary albedo through absorption, scattering, and reflection of shortwave in clear and cloudy skies. Observational studies of modern climate indicate that the atmosphere contributes more to planetary albedo than the surface [e.g., (31,32)]. Thus, we expect the contrast between α p,lnd and α p,ocn to be smaller than the contrast between the surface albedos of land and ocean (typical values are 0.26 and 0.05, respectively) (33), as is true of our estimated planetary albedos. Examining the effects of clear-sky and cloud processes on the hemispheric asymmetry of radiation, we find that the clear-sky component of the asymmetry dominates ( fig. S8). The clear-sky planetary albedos over land and ocean diagnosed in our simulations (the white dots in Fig. 3, E and F) are smaller than the all-sky planetary albedos [the white triangles in Fig. 3 (E and F)], so using them in the simple model would overestimate global mean net shortwave radiation (Fig. 3F). However, even those clear-sky values reproduce δS es well (Fig. 3E), highlighting the dominant role of land-ocean albedo contrast rather than the absolute values of albedos. Furthermore, there is strong cancelation between the longwave and shortwave effects of clouds on δR ( fig. S8). Thus, while clouds play important roles in local and global-mean radiative balances, they seem to have little contribution to variations in the hemispheric asymmetry of radiation driven by continental drift.
Last, although the latitude of landmasses is part of the input parameters in the simple model, δS es is mainly determined by the hemispheric difference in land area (δA lnd ), as shown by the strong anticorrelation between δS es and δA lnd (Fig. 3A).
In summary, these results indicate that over geological time scales, land-ocean albedo contrast largely determines δR. Using only the horizontal distribution of land (and no information on surface type or orography), we can estimate δR well. This link between δR and continental configuration constitutes the first pathway by which continental evolution affects migrations of the ITCZ.
Ocean heat transport
Now, we examine the ocean heat transport, which, in our simulations, exerts an opposite influence on ITCZ migration compared with that of the hemispheric asymmetry of radiation. The cross-equatorial ocean heat transport plays an important role in setting the latitude of the ITCZ in the current climate (3,11,34) and is dominated by the deep global meridional overturning circulation (GMOC) of the ocean (35). This GMOC is clockwise (defined as northward at upper levels), mostly because of the Atlantic overturning circulation (36). Cold and salty water sinks into the deep ocean in the North Atlantic, while the strong SH surface westerlies bring deep water to the ocean surface by Ekman pumping within the polar part of the Antarctic Circumpolar Current (ACC) (37,38). The clockwise GMOC transports heat from the SH to the NH and pins the ITCZ north of the equator. The even deeper overturning circulation associated with deep-water formation around Antarctica contributes negligibly to heat transport (39). This bottom overturning circulation, where it exists, also contributes negligibly to ocean heat transport in all the simulated past climates, owing to the small temperature difference between its upper and lower branches, and is therefore excluded from the GMOC in the context herein.
Consistent with the time series of F ocn (Fig. 2C), our simulations show an anticlockwise GMOC (interhemispheric in almost all cases) before 170 Ma and a clockwise GMOC after 90 Ma, with oscillations in direction during the transition period (160 to 100 Ma; fig. S9). The dynamics of the GMOC are a complicated, unsettled topic of great importance (40,41); it is beyond our scope to provide a theory for the GMOC here. Instead, we boldly propose, on the basis of the simulation results, that for variability over geological time, the direction of the GMOC is mainly set by the hemispheric asymmetry of area-integrated wind stress on the ocean surface, especially in the middle latitudes. This assertion is supported by the strong anticorrelation (a coefficient of −0.86; Fig. 4A) between F ocn and the hemispheric asymmetry of wind stress δ⟨τ⟩, defined as the area-integrated wind stress over the ocean surface in the NH minus that in the SH, normalized by hemispheric area (see Materials and Methods).
Prior studies based on seasonal to interannual variability of the modern ITCZ suggested that cross-equatorial ocean heat transport is dominated by the ocean's shallow tropical circulation cells [e.g., (42)(43)(44)]. These studies argued that the trade winds exert stress on the tropical ocean, which, because of Sverdrup balance, couple the tropical shallow ocean cells with the tropical Hadley cell. However, decomposition of δ⟨τ⟩ (Fig. 4A) shows that its variations due to tropical components (within 30°N/S) are close to zero, while its mid-latitude (30°to 70°in each hemisphere) component dominates. The component of δ⟨τ⟩ in the polar region is even smaller. This indicates that for ITCZ variability over geological time scales, the cross-equatorial ocean heat transport is determined by ocean deep overturning circulations driven by surface westerlies in middle latitudes rather than by tropical shallow cells driven by tropical winds.
Examining wind stresses (fig. S10) and the GMOC (fig. S9) in all the simulations confirms their strong relationship. Taking the 540-Ma period as an example (Fig. 4, C and D), when landmasses are concentrated in the SH and the NH is a vast open ocean, there is a strong NH mid-latitude surface ocean current (Fig. 4D), which may be analogous to the ACC in the present climate (although in the opposite hemisphere). This mid-latitude current is accompanied by deep-water upwelling (Fig. 4C), due to Ekman transport and Ekman pumping induced by the wind stress and its curl, and by an anticlockwise GMOC. A similar relationship is found in other periods, such that the hemisphere with a wider ocean and stronger westerly winds in the mid-latitudes is often the hemisphere with the rising branch of the GMOC (figs. S9 and S10). The wind-driven adiabatic component of the GMOCs dominates the diabatic component driven by vertical eddy diffusion (40) in most periods, as inferred from the larger number of streamlines rising within the westerlies-driven upwelling zones than in other regions (fig. S9). This dominance of the adiabatic GMOC is consistent with the close relationship between F ocn and δ⟨τ⟩. This relationship is also consistent with zonal-mean theory that, for the present-day Earth, treats the net northward flux of ocean water by Ekman transport across the northern boundary of the ACC as a cause of the strength of the (adiabatic part of the) GMOC (45).
Because the mid-latitude component δ⟨τ⟩ mid dominates the total wind stress asymmetry δ⟨τ⟩, we further decompose δ⟨τ⟩ mid into components due to hemispheric differences in mid-latitude ocean surface area (δA ocn_mid ) and due to hemispheric differences in mid-latitude wind stress intensity (see Materials and Methods). The component due to δA ocn_mid , which is a geographic parameter, explains most of the variability of δ⟨τ⟩ mid , while the component due to hemispheric differences in wind stress intensity is secondary (Fig. 4B). This partition may be expected because the variability of δA ocn_mid due to continental drift over the examined time period is significantly larger than the variability of the asymmetry of wind stress intensity. As a result, there is a very strong correlation (a coefficient of −0.91) between F ocn and δA ocn_mid . The greater importance of ocean area compared to wind stress intensity may allow for a more faithful representation of climate in Earth system models, because the GMOC in low-resolution models tends to be too sensitive to changes in wind stress (46); it also implies that the dependence of F ocn on δ⟨τ⟩ in our simulations is not merely due to a spurious sensitivity of the modeled GMOC to wind stress intensity. The results here do not argue against the role of buoyancy forcing in ocean dynamics, which is apparently important in setting the detailed structure of the GMOC (47). However, among the simulations, the hemispheric difference of buoyancy forcing is arguably much smaller than the hemispheric difference of wind stress, and the buoyancy forcing is less deterministic in setting the direction of ocean circulation (47). In summary, the above analysis indicates that F ocn , the cross-equatorial ocean heat transport associated with the wind-driven GMOC, is mainly set by the hemispheric asymmetry of mid-latitude ocean surface area.
Last, let us put the arguments made in the above two sections together. If we estimate δR by the simple model of radiation asymmetry and F ocn by a linear fitting using δA ocn_mid , the estimated F atm follows the simulated F atm reasonably well ( fig. S11), especially for the longer time scale variability, albeit there are sizeable discrepancies in shorter time scales. Given that the only input for the above estimation is continental configurations (with coefficients retrieved by fitting simulations), this match further highlights the strong explanatory power of the simple arguments.
DISCUSSION
Using a series of climate simulations, this study shows that over geological time (540 Ma to present), migrations of the zonal-mean tropical rain belt may be attributed to changes in size and latitude of continents. The continental configuration sets ITCZ latitude through two main pathways: cross-equatorial ocean heat transport and hemispheric asymmetry of radiation (see the schematic in Fig. 5). The hemisphere with larger mid-latitude ocean area has a stronger wind stress-forced upwelling and equatorward Ekman transport, resulting in an ocean MOC that transports heat toward the opposite hemisphere. On the other hand, the hemisphere with smaller land area has a lower planetary albedo, thus absorbing more solar radiation. These two mechanisms have competing effects on the ITCZ, and the cross-equatorial atmospheric energy flux required to balance their residual explains the migration of the ITCZ from the perspective of continental evolution.
Although our simulations and analyses reveal the first-order mechanisms of geological time-scale migrations of the zonal-mean ITCZ, our approach involves simplifications, and other factors merit exploration in future work. Here, the ITCZ latitude was linked only to bulk parameters of continental configuration; including additional parameters such as orography or land surface type may improve our prediction. Given that our simple estimate of δR is quite accurate (Fig. 3A) while the correlation between δA ocn_mid and F ocn is weaker (Fig. 4, A and B), the space for improvement lies in a better understanding of the dependence of ocean circulation on continental configuration. Previous studies [e.g., (48)(49)(50)(51)] have shown that continental changes in key areas (e.g., formation of the Isthmus of Panama and uplift of the Tibetan plateau) may affect the ocean general circulation and tropical rainfall. Motivated by that work, we carried out a set of sensitivity tests, keeping the same settings as the control cases except for opening the Drake passage during the period of 110 to 70 Ma. The results [asterisks in Fig. 2 (A to C)] show that the perturbations to ϕ ITCZ are relatively small compared with those due to continental evolution over geological time. Our conclusions thus seem not to be affected much by the state of the Drake passage, but this issue deserves further study.
The latitude of the zonal-mean ITCZ serves as an indicator of a broad variety of hemispheric asymmetries and has profound implications for global weather and climate. For example, in the current climate, the NH summer monsoons are significantly stronger than their SH counterparts (52), which is associated with the annual-mean ITCZ lying north of the equator. Currently, the SH is much stormier than the NH, and Shaw et al. (53) attribute this to hemispheric asymmetry in an energy framework of similar philosophy to that used here. Thus, our results motivate further study of the hemispheric asymmetry of other weather and climate components, their evolution over geological time, and their linkage with geographic parameters.
MATERIALS AND METHODS
Numerical model and experimental design
The experimental setup follows that of Li et al. (25), who present a set of low-resolution atmosphere-ocean coupled simulations and a set of high-resolution atmosphere-only simulations, and mainly show results of the high-resolution experiments. Here, we use the low-resolution coupled simulations because an interactive ocean is critical in driving the ITCZ. We only briefly describe the setup; details can be found in the work of Li et al. (25).
The Community Earth System Model [CESM1.2.2; (54)] is used here. It simulates the processes within, and interactions among, the atmosphere, ocean, land, sea ice, and river runoff. The horizontal resolutions are 3.75° × 3.75° for the atmosphere and land and g37 (116 meridional grids and 100 zonal grids) for the dynamic ocean and sea ice. There are 26 vertical levels for the atmosphere and 60 vertical levels for the ocean.
We carry out a series of time-slice simulations from 540 to 10 Ma with an interval of 10 Ma, as well as the PI climate. The simulated time period is chosen largely to be consistent with the set of reconstructed paleogeographic maps (55). For the PI case, the CO 2 , CH 4 , and N 2 O are uniformly distributed with concentrations of 280 parts per million by volume (ppmv), 760 parts per billion by volume (ppbv), and 270 ppbv, respectively. The simulation of the PI matches the observations reasonably well (figs. S2 and S3). The main differences among the paleoclimate simulations are the continental configurations and the global mean temperature. The paleogeographic maps, which include the elevation of the land surface and bathymetric configuration, are from the paleo-digital elevation model (paleoDEM) (55). For each simulation, the CO 2 concentration is tuned to match the reconstructed global mean surface temperatures (56,57) in the corresponding time period. This approach ensures a good match with the target global mean surface temperature; however, the cost is that the applied CO 2 concentration is notably higher than those from proxy reconstructions [e.g., figure 5 in (25)] at certain time periods. Considering the increase of the luminosity of the Sun over time, insolation is linearly increased from 1302 W m −2 at 540 Ma to 1361 W m −2 at present (58). All other atmospheric compositions and the orbital parameters are set to the PI values. Each simulation is run into its equilibrium state (after more than 5000 model years, with the net radiation at the TOA less than 0.1 W m −2 ), and the last 100 years' outputs are analyzed.
Observational data
For the validation of the PI simulation, the following observational data are used: Global Precipitation Climatology Project (GPCP2.3) (59) data and the fifth generation of European Centre for Medium-Range Weather Forecasts reanalysis data (ERA5) (60), between 1981 and 2010.
Energetic framework of ITCZ
We use three quantities to depict the migration of the ITCZ. The first is ϕ ITCZ , the latitude of the tropical precipitation centroid between 20°S and 20°N [e.g., (27)]. The second is ϕ pp , the latitude of the zonally averaged tropical precipitation maximum. The third is the PAI [e.g., (28,29)], which quantifies the tropical hemispheric asymmetry of precipitation and is calculated as PAI = (P 0–20°N − P 0–20°S )/P 20°S–20°N , where P denotes the area-averaged precipitation over the indicated latitude band. The energetic framework [e.g., (3,9)] is built on the basis that the ITCZ lies near the energy flux equator (ϕ EFE ), which may be further related to the cross-equatorial atmospheric energy transport F atm (the sum of sensible, latent, and geopotential energy fluxes) by a linear approximation, so that ϕ ITCZ ≈ ϕ EFE ≈ −(1/a) F atm /NEI 0 , where a is the radius of Earth and NEI 0 is the atmospheric net energy input at the equator. We have verified that in our simulations, the variability of ϕ EFE is mostly due to the variability of F atm , while NEI 0 may be taken as a constant (i.e., the averaged NEI 0 of all the simulations; fig. S4).
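For illustration, the sketch below evaluates two of these diagnostics on a synthetic zonal-mean precipitation profile. The grid, the Gaussian rain belt, and the exact PAI normalization are assumptions for this example, not the model output analyzed in the paper.

```python
import numpy as np

def precip_centroid(lat, precip, bound=20.0):
    """phi_ITCZ: area-weighted latitude centroid of precipitation
    between -bound and +bound degrees."""
    m = np.abs(lat) <= bound
    w = precip[m] * np.cos(np.deg2rad(lat[m]))   # area weighting on the sphere
    return float(np.sum(lat[m] * w) / np.sum(w))

def precip_asymmetry_index(lat, precip, bound=20.0):
    """PAI: NH-minus-SH tropical mean precipitation over the tropical mean."""
    w = np.cos(np.deg2rad(lat))
    nh = (lat > 0) & (lat <= bound)
    sh = (lat < 0) & (lat >= -bound)
    tr = np.abs(lat) <= bound
    p_nh = np.sum(precip[nh] * w[nh]) / np.sum(w[nh])
    p_sh = np.sum(precip[sh] * w[sh]) / np.sum(w[sh])
    p_tr = np.sum(precip[tr] * w[tr]) / np.sum(w[tr])
    return float((p_nh - p_sh) / p_tr)

# Synthetic rain belt: a Gaussian centred at 4N
lat = np.linspace(-90.0, 90.0, 721)
precip = np.exp(-0.5 * ((lat - 4.0) / 8.0) ** 2)
print(round(precip_centroid(lat, precip), 1))   # slightly south of 4.0
print(precip_asymmetry_index(lat, precip) > 0)  # True: NH-shifted belt
```

The centroid lands slightly equatorward of the belt's peak because of the ±20° truncation and cos(lat) area weighting, which is why centroid-based and maximum-based ITCZ metrics can differ.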
Next, we consider the energy balance of each hemisphere. In an equilibrium state, the net energy input into one hemisphere at the TOA has to be transported out through its lateral boundary at the equator by the atmosphere and ocean (see the schematic in fig. S5). Let R NH and R SH be the net radiative energy inputs of the NH and SH (units of W; positive denotes energy input) and F atm and F ocn be the cross-equatorial atmospheric and oceanic energy transports (positive denotes northward), respectively; we have R SH = F ocn + F atm and R NH = −(F ocn + F atm ). Then, we may define a hemispheric asymmetry of net radiative heating δR = (R NH − R SH )/2, in which δ denotes the difference between the NH and SH hereafter. Thus, we have −F atm = F ocn + δR, which is Eq. 1.
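This bookkeeping can be checked with a few lines of arithmetic; the numbers below (in PW) are illustrative assumptions, not values diagnosed from the simulations.

```python
# Hemispheric energy balance (see schematic in fig. S5): the net TOA input of
# each hemisphere is exported across the equator by atmosphere plus ocean.
# Illustrative numbers in PW (assumed, not from the simulations).
R_SH = 0.4           # net TOA radiative input, Southern Hemisphere
R_NH = -0.4          # net TOA radiative input, Northern Hemisphere
F_ocn = 0.5          # northward cross-equatorial ocean heat transport

# SH balance: R_SH = F_atm + F_ocn  ->  F_atm = R_SH - F_ocn
F_atm = R_SH - F_ocn

# Hemispheric asymmetry of net radiative heating
delta_R = (R_NH - R_SH) / 2.0

# Eq. 1: -F_atm = F_ocn + delta_R
assert abs(-F_atm - (F_ocn + delta_R)) < 1e-12
print(round(F_atm, 3), round(delta_R, 3))  # -0.1 -0.4
```

With these assumed values, the southward (negative) F_atm implies, via the linear approximation, an ITCZ north of the equator, matching the sign convention of Eq. 1.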
A simple model of the hemispheric asymmetry of radiation
For the simple model of the hemispheric asymmetry of radiation, we assume that the planetary albedos over land and ocean are constants (α p,lnd and α p,ocn , respectively). The TOA net shortwave heating over a unit area with annual-mean insolation I is I(1 − α p,lnd ) or I(1 − α p,ocn ), depending on its surface type. Then, given an arbitrary landmass distribution, the net shortwave radiation at TOA integrated over the NH can be calculated as S NH = ∫ lnd,NH I(1 − α p,lnd ) ds + ∫ ocn,NH I(1 − α p,ocn ) ds, where the two integrals are taken over the NH land and ocean areas, respectively. The SH net shortwave radiation, S SH , is calculated similarly. The simple model-estimated hemispheric asymmetry of net shortwave radiation is then δS es = (S NH − S SH )/2. To quantify the performance of the simple model, we calculate the mean absolute error (i.e., the cost function) between the simple model and the numerical simulations, ε = (1/55) Σ i=1..55 |δS es,i − δS i |, in which the subscript i denotes each geological time period. ε is shown in Fig. 3E for a range of combinations of α p,lnd and α p,ocn . Similarly, we use the simple model to calculate the globally averaged net shortwave radiation S glb,es = (S NH + S SH )/(2A 0 ) (with units of W m −2 ), in which A 0 is the hemisphere area (255 × 10^6 km²). The corresponding error relative to the simulations, |S glb,es,i − S glb,i |, is shown in Fig. 3F.
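A minimal sketch of the simple shortwave model and the albedo fitting is given below, using a synthetic one-dimensional land distribution and uniform insolation; both are assumptions for illustration, whereas the paper fits 55 simulated periods with two-dimensional geography.

```python
import numpy as np

def hemispheric_sw_asymmetry(lat, land_frac, insol, a_lnd, a_ocn):
    """delta_S_es = (S_NH - S_SH)/2 from a zonal-mean land fraction, assuming
    uniform planetary albedos over land (a_lnd) and ocean (a_ocn); the area
    element ds is proportional to cos(lat)."""
    ds = np.cos(np.deg2rad(lat))
    net_sw = insol * (land_frac * (1 - a_lnd) + (1 - land_frac) * (1 - a_ocn))
    s_nh = np.sum((net_sw * ds)[lat > 0])
    s_sh = np.sum((net_sw * ds)[lat < 0])
    return (s_nh - s_sh) / 2.0

# Synthetic configuration: more land in the SH; "true" albedos 0.39 and 0.29
lat = np.linspace(-89.75, 89.75, 360)
insol = np.full_like(lat, 340.0)      # crude uniform annual-mean insolation
land = np.where(lat < 0, 0.5, 0.1)    # SH half land, NH mostly ocean
target = hemispheric_sw_asymmetry(lat, land, insol, 0.39, 0.29)

# Grid search for the albedo pair minimising the absolute error, analogous to
# the paper's cost function (here one configuration instead of 55 periods)
best = min(((al, ao) for al in np.arange(0.20, 0.50, 0.01)
            for ao in np.arange(0.10, 0.40, 0.01)),
           key=lambda p: abs(hemispheric_sw_asymmetry(lat, land, insol, *p)
                             - target))
print(round(best[0] - best[1], 2))  # the land-ocean contrast is recovered
```

As in Fig. 3E, δS constrains only the contrast α p,lnd − α p,ocn well; pinning down the absolute values requires an extra constraint such as the global-mean shortwave budget.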
Calculation of the ocean wind stress intensity
We integrate the wind stress intensity (τ, with units of N m −2 ) over the ocean of each hemisphere and normalize by the hemisphere area A 0 ; the hemispheric difference is thus δ⟨τ⟩ = (1/A 0 )(⟨τ⟩ N − ⟨τ⟩ S ) = (1/A 0 )(∫ A ocn,N τ ds − ∫ A ocn,S τ ds), where A ocn,N and A ocn,S denote the ocean areas of the NH and SH, respectively. δ⟨τ⟩ may be further decomposed into components due to hemispheric differences of ocean area and of wind stress intensity. We first define a mean wind stress intensity τ̄ in each hemisphere so that δ⟨τ⟩ ≈ (1/A 0 )(τ̄ N A ocn,N − τ̄ S A ocn,S ), and then let τ̄ G = (τ̄ N + τ̄ S )/2 and δτ̄ = (τ̄ N − τ̄ S )/2, so that δ⟨τ⟩ ≈ (1/A 0 )[τ̄ G δA ocn + δτ̄ (A ocn,N + A ocn,S )], where δA ocn = A ocn,N − A ocn,S . The two terms in the brackets are the components due to hemispheric differences of ocean area and of wind stress intensity, respectively. The decomposition may be applied to the tropical or mid-latitude component of δ⟨τ⟩ individually.
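Once each hemisphere is represented by a mean stress intensity, the two-term form is an exact algebraic identity, which can be verified directly; the areas and stress values below are assumed for illustration.

```python
# Exact decomposition of the wind stress asymmetry into an ocean-area term and
# a stress-intensity term; all numbers below are assumed for illustration.
A_0 = 255e12                 # hemisphere area, m^2
A_N, A_S = 140e12, 90e12     # NH/SH mid-latitude ocean areas, m^2 (assumed)
tau_N, tau_S = 0.08, 0.11    # NH/SH mean stress intensities, N m^-2 (assumed)

d_tau_full = (tau_N * A_N - tau_S * A_S) / A_0   # delta<tau>

tau_G = 0.5 * (tau_N + tau_S)     # hemispheric-mean intensity
d_tau = 0.5 * (tau_N - tau_S)     # intensity asymmetry
dA_ocn = A_N - A_S                # ocean-area asymmetry

area_term = tau_G * dA_ocn / A_0             # due to ocean area
intensity_term = d_tau * (A_N + A_S) / A_0   # due to stress intensity

# The two-term form is an exact algebraic identity
assert abs(d_tau_full - (area_term + intensity_term)) < 1e-12
print(abs(area_term) > abs(intensity_term))  # True: area term dominates here
```

In this illustrative case the area term outweighs the intensity term, the same qualitative partition the paper reports for δ⟨τ⟩ mid (Fig. 4B).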
Supplementary Materials
This PDF file includes: Figs. S1 to S11 and the legend for movie S1. Other Supplementary Material for this manuscript includes the following: Movie S1.
OSM-9 and an amiloride-sensitive channel, but not PKD-2, are involved in mechanosensation in C. elegans male ray neurons
Mechanotransduction is crucial for touch sensation, hearing, proprioception, and pain sensing. In C. elegans, male ray neurons have been implicated in the mechanosensation required for mating behavior. However, whether ray neurons directly sense mechanical stimulation is not yet known, and the underlying molecular mechanisms have not been identified. Using in vivo calcium imaging, we recorded the touch-induced calcium responses in male ray neurons. Our data demonstrated that ray neurons are sensitive to mechanical stimulation in a neurotransmitter-independent manner. PKD-2, a putative sensor component for both mechanosensation and chemosensation in male-specific neurons, was not required for the touch-induced calcium responses in RnB neurons, whereas the TRPV channel OSM-9 shaped the kinetics of the responses. We further showed that RnB-neuron mechanosensation is likely mediated by an amiloride-sensitive DEG/ENaC channel. These observations lay a foundation for better understanding the molecular mechanisms of mechanosensation.
The TRP channel proteins, LOV-1 (TRPP1) and PKD-2 (TRPP2), are expressed in all RnB neurons (except R6B) and they are required for male mating behavior 16,17 . Males with loss-of-function mutations in pkd-2 display significantly impaired responses to hermaphrodite contact and vulva identification 16,17 . In vertebrates, depletion of the polycystin orthologs PKD1 and/or PKD2 may lead to an impairment of flow sensing in the primary cilium of renal epithelial cells in nephrons [18][19][20] . Thus, PKD-2 has been speculated to be part of the sensory receptor complex mediating mechanosensation in male-specific neurons 12 .
In this study, we employ in vivo calcium imaging to monitor touch-evoked activities in male ray neurons. We demonstrate that ray neurons are sensitive to mechanical stimulation in a cell-autonomous manner. The transient receptor potential (TRP) vanilloid channel subunit OSM-9, but not PKD-2, is involved in mechanical signal transduction in RnB neurons. We further show that amiloride blocks touch-induced calcium increases in RnB neurons, suggesting that amiloride-sensitive sodium channel(s) (ENaCs) are likely the primary mechanotransduction channels in RnB neurons.
Results
RnB neurons are sensitive to mechanical stimulation. To determine whether RnB neurons respond to mechanical stimulation, a genetically encoded calcium indicator, GCaMP5.0, was expressed in all RnB neurons (except R6B) under the control of the pkd-2 promoter (Fig. 1a) 21,22 . A glass probe was used to exert a mechanical stimulus, while the fluorescence changes were recorded (Fig. 1b). Using this method, we observed dramatic touch-induced calcium increases in R1B, R2B, and R3B neurons when a mechanical stimulation consisting of a 15-μm displacement was applied to the region of rays 1/2/3 (Fig. 1c). The calcium levels in these neurons were recovered minutes later, and could rise up again when we gave them another mechanical stimulation (Fig. 1d, S1 and Movie S1). Similarly, touch-induced calcium increases were observed in the R4B, R5B, R7B, R8B, and R9B neurons when we stimulated rays 4/5 and rays 7/8/9, respectively (Fig. 1e,f). Notably, no statistical differences in either the amplitude or kinetics of calcium increases in the various ray B neurons were observed when the touch probe moved forward to the indicated rays (Fig. 1g,h). These results demonstrate that RnB neurons are sensitive to mechanical stimulation.
RnA neurons occasionally respond to mechanical stimulation. We next asked whether RnA neurons respond to mechanical stimulation. We expressed GCaMP5.0 in RnA neurons under the control of the tba-9 promoter (Fig. 2a) 23 . We speculated that RnA neurons might be much more sensitive to mechanical stimulation than RnB neurons because TRP-4, a pore-forming subunit of a gentle nose touch-related mechano-gated channel, is expressed in some RnA neurons 4,12,24,25 . Surprisingly, no detectable calcium response was observed in any RnA neuron when a mechanical stimulation consisting of a 15-μm displacement was applied. A mechanical stimulation exceeding a 20-μm displacement occasionally induced calcium increases in some RnA neurons (4 out of 20 worms) (Fig. 2c,d). These results suggest that RnA neurons may also be activated by mechanical stimulation, but their sensitivity is quite low in our experimental setting. We next focused our study on the touch-induced responses of RnB neurons (particularly calcium responses in R1B, R2B, and R3B neurons [R1B-R3B]) induced by a mechanical stimulation consisting of a 15-μm displacement applied to the region of rays 1-3.
Touch-induced calcium responses in RnB neurons do not rely on synaptic transmission. One possibility is that calcium increases in RnB neurons following mechanical stimulation are post-synaptically induced by other neurons. Therefore, we examined the touch-induced responses of RnB neurons in unc-13(e51), eat-4(ky5), and unc-31(e928) mutant worms. Specifically, unc-13 and unc-31 encode orthologs of the mammalian Munc13 and CAPS proteins, which are required for neurotransmitter and neuropeptide release, respectively 26,27 . Additionally, eat-4 encodes an ortholog of the mammalian vesicular glutamate transporter, which is necessary for glutamatergic neurotransmission 28 . Interestingly, touch-induced calcium increases in RnB neurons in mutants for unc-13, unc-31, or eat-4 were similar to those of wild-type worms, suggesting that RnB neurons are likely the primary neurons for sensing mechanical stimulation (Fig. 3a,b).
PKD-2 is not involved in mechanotransduction in RnB neurons.
We next sought to investigate the molecular mechanisms of mechanotransduction in RnB neurons. PKD-2 has been implicated in contact responses of adult male towards hermaphrodites. Thus, PKD-2 has been speculated to be part of the sensory receptor complex mediating chemosensation and/or mechanosensation in male-specific neurons 12,17 . Surprisingly, we found that touch-induced calcium responses in neither pkd-2(sy606) mutants nor pkd-2(sy606);lov-1(sy582) double mutants were impaired (Fig. 4a,b), strongly suggesting that PKD-2 is not involved in mechanotransduction in RnB neurons.
OSM-9 is involved in mechanosensation of RnB neurons. TRPV channel subunits such as OSM-9
are required in the ASH sensory neurons for avoidance responses to nose touches and aversive chemicals 29 . In adult males, OSM-9 has been reported to be expressed in male-specific neurons in the tail (possibly in the HoB and RnB neurons) and in the male-specific CEM neurons in the head 30,31 . Furthermore, OSM-9 is required for male sexual attraction behaviors 31 . We found that osm-9(ky10) mutant males have normal touch-induced calcium increases in RnB neurons (Fig. 4a,b). However, touch-induced calcium increases in RnB neurons in osm-9(ky10) mutants were significantly slower than in wild-type animals (Fig. 4c). OSM-9::GFP has been previously reported to localize to the endoplasmic reticulum (ER) of the cell body, but not to the cilia of RnB neurons 30 . Taken together, OSM-9 may act downstream of the primary mechanotransduction channel as a calcium modulator in RnB neurons. It should be noted that we did not observe a deficit in contact responses in osm-9 mutants, probably because of the minor role of OSM-9 in mechanosensation of RnB neurons. Interestingly, touch-induced calcium increases were fully eliminated by 200 μM amiloride in most RnB neurons except R3B, and recovered after amiloride was rinsed out (Fig. 5a,b). By contrast, touch-induced calcium increases in all RnB neurons were not affected by 100 μM GdCl 3 (Fig. 5a,b). These results suggest that a DEG/ENaC channel, but not a GdCl 3 -sensitive cation channel, is likely the basic component of the mechanotransduction channel in RnB neurons.
Discussion
C. elegans male ray neurons have long been considered candidate mechanosensory neurons 12,15,33 . However, clear evidence showing that ray neurons directly respond to mechanical stimulation has been lacking. Here, we demonstrate that ray neurons can be activated by mechanical stimulation in a cell-autonomous manner. Our data further show that OSM-9 and an amiloride-sensitive channel, but not PKD-2, are required for mechanosensation in RnB neurons.
Whether TRP family channels function as primary mechanotransduction channels has long been of great interest 9 . Recently, TRPN proteins (TRPN1/NOMPC/TRP-4) were confirmed to be cilia-associated mechano-gated channels in both C. elegans and flies 4,5 . The unusually long N-terminal repeat, which consists of 28 ankyrin domains of the TRPN subunit, presumably acts as the gating spring by which force induces channel gating 34 . Nevertheless, TRPN proteins appear to have been lost in vertebrates 35 . Importantly, there is no evidence showing that any of the other TRP proteins are mechanically gated, even though many members of the TRP subfamily proteins have been implicated in mechanosensation 4,9,36 . PKD-2 has also been considered a strong candidate mechanotransduction channel in RnB neurons because its ortholog is likely to function in flow sensation in the primary cilium of human renal epithelial cells 12,18 . Strikingly, our data show that pkd-2 mutant worms have no detectable defects in touch-induced calcium responses in RnB neurons, excluding the role of PKD-2 in mechanotransduction in RnB neurons.
Male RnA neurons are thought to be essential for contact responses, scanning, and turning, whereas RnB neurons are only crucial for contact responses 12 . Since most steps of the mating behavior involve direct male-hermaphrodite body contact, RnA neurons are speculated to play a more important role in mechanosensation than RnB neurons 12 . Our data support the idea that RnA neurons may act as mechanosensory neurons. Nevertheless, we achieved only a low success rate when recording touch-induced calcium responses in RnA neurons. TRP-4, a mechano-gated TRP channel, mediates touch sensation in CEP neurons and PDE neurons 4,25 , and it is expressed in some RnA neurons 12,24 . Surprisingly, our data hint that TRP-4 might not participate in mechanosensation in RnA neurons, consistent with previously reported observations that trp-4 null mutants appear almost normal for all male mating sub-behaviors 12 . Our study suggests that male-specific neurons of C. elegans may provide an outstanding context for teasing out the molecular mechanisms of mechanosensation in vivo.
Calcium Imaging.
Individual animals were glued on a coverglass using a cyanoacrylate-based glue (Gluture Topical Tissue Adhesive, Abbott Laboratories) and immersed in bath solution (145 mM NaCl, 2.5 mM KCl, 1 mM MgCl2, 5 mM CaCl2, 10 mM HEPES, 20 mM glucose, pH adjusted to 7.3 with NaOH). The calcium indicator GCaMP5.0 was used to measure the intracellular calcium signals 22,44,45 . Images were acquired on an Olympus microscope (BX51WI) with a 60× objective lens. Raw image data were acquired with an Andor DL-604M EMCCD camera and micro-Manager 1.4 software. GCaMP5.0 was excited by a Lambda XL light source and fluorescent signals were collected at a rate of 1 Hz. The average GCaMP5.0 signal from the first 10 s before stimulus was taken as F0, and ΔF/F0 was calculated for each data point. Data were analyzed using ImageJ.
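As a rough illustration of the normalization described above (a sketch, not the authors' ImageJ workflow; the trace values are hypothetical), ΔF/F0 can be computed by averaging the 10 pre-stimulus samples (collected at 1 Hz) as F0:

```python
def delta_f_over_f0(trace, baseline_samples=10):
    """Normalize a fluorescence trace to its pre-stimulus baseline.

    trace: fluorescence values sampled at 1 Hz; the first
    `baseline_samples` points (10 s here) precede the stimulus.
    Returns a list of dF/F0 values.
    """
    f0 = sum(trace[:baseline_samples]) / baseline_samples
    return [(f - f0) / f0 for f in trace]

# 10 s baseline at 100 a.u., then a transient rise
trace = [100.0] * 10 + [150.0, 200.0, 120.0]
print(delta_f_over_f0(trace)[10])  # 0.5, i.e. a 50% increase over baseline
```

Any baseline drift correction or background subtraction applied before this step would follow the same per-trace structure.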
Mechanical Stimulation.
Touch stimulation was delivered to the cell using a borosilicate glass capillary with a tip diameter of ~1 μm, driven by a piezoelectric actuator (PI) mounted on a micromanipulator (Sutter) 43 . The needle was placed perpendicular to the worm's tail. In the "on" phase, the needle was moved toward the worm's tail so that it probed the cilia of the ray neurons, and it was then held on the cilia for 500 ms. In the "off" phase, the needle was returned to its original position.
Statistical Analysis.
Data analysis was performed using GraphPad Prism 6 software. Data are presented as mean ± SEM. N represents the number of cells. P values were determined by Student's t test. P < 0.05 was regarded as statistically significant.
Exploring Physical Activity Levels in Patients with Cardiovascular Disease—A Preliminary Study
Increased physical activity may prevent disease onset and severity in individuals with cardiovascular disease. However, studies evaluating physical activity in people with cardiovascular disease are limited. This prospective observational study aimed to objectively assess the level of physical activity in patients with cardiovascular disease and determine the actual extent of physical activity in their daily lives. Participants aged 20 years or older with cardiovascular disease at a cardiology clinic were included. Physical activity was measured using an activity meter with a three-axis acceleration sensor. Overall, 58 patients were included in the study. Household activities were found to contribute more to physical activity than locomotive activities. The step count was related to age and housework, while total physical activity and household activity were related to age and work. Locomotive activity was related to sex and housework. Total physical and household activities tended to decrease with age. These findings indicate the influence of work and household chores on physical activity and suggest that physical activity may be underestimated if household activity is not also assessed. These fundamental findings may provide clinical evidence to underpin physical activity for patients with cardiovascular disease.
Introduction
The total number of patients with heart disease in Japan is 1,732,000, and the number of deaths due to heart disease is 200,000 per year [1]. Furthermore, looking at medical treatment costs by injury and disease, medical expenses for cardiovascular diseases are the highest, putting pressure on national finances [2]. Efforts to prevent serious illness and reduce mortality in patients with cardiac disease should be an important public health initiative. The acceleration of atherosclerosis is a factor in the severity and recurrence of heart disease, and may occur due to physical inactivity. Habitual physical activity and aerobic exercise are reported to improve vascular endothelial function, promote systemic blood circulation, improve mitochondrial function and energy metabolism in skeletal muscle cells, increase antioxidant capacity, and help maintain an anti-atherosclerotic profile [3]. On the other hand, an increased heart rate is associated with increased hospitalization for worsening heart failure or cardiovascular-related deaths [4]. Excessive physical activity leads to an increased heart rate. In other words, moderate physical activity is necessary to prevent the severity and recurrence of heart disease.
Physical activity refers to any bodily movement that consumes more energy than when at rest [5]. Physical activity can be categorized into two types: activities of daily living, such as work or school, housework, and daily commuting; and exercise, which is planned, continuous, and intentional during leisure time to improve physical fitness. Activities of daily living, excluding exercise, such as housework, are classified as "non-exercise physical activity". In recent years, there has been an increased interest in the relationship between non-exercise physical activity (NEAT: non-exercise activity thermogenesis) and obesity [6]. Therefore, "non-exercise physical activity" should also be considered when assessing physical activity.
There are two ways of assessing physical activity: using questionnaires, such as the International Physical Activity Questionnaire (IPAQ), or directly measuring physical activity using equipment such as activity meters [7][8][9]. Questionnaire-based assessments are easy and cheap because they do not require any instruments. However, they rely on the participants' memory, which can lead to recall bias and an overestimation of their physical activity levels [10][11][12]. Nevertheless, most methods for assessing physical activity in people with cardiovascular disease in Japan use questionnaires, and there are very few studies on the actual measurement of physical activity. The heart rate method and calorimetry are used to measure physical activity, but they have issues such as the need for calibration before measurement and low responsiveness when the intensity of activity changes [13].
In Japan, many studies have objectively evaluated physical activity using accelerometers. However, most of these studies have focused solely on "running and walking" and have used pedometers or uniaxial accelerometers to measure these activities [14][15][16]. This approach ignores the fact that people engage in many "non-exercise physical activities" in their daily lives, such as sitting, standing, cleaning, and doing laundry. Unfortunately, pedometers and uniaxial accelerometers can only measure horizontal displacement activity, which is running and walking, and cannot measure other movements. To accurately evaluate all activities, including those beyond running and walking, researchers should use triaxial accelerometers. In Japan, the most commonly used triaxial accelerometers are the Active Tracer AC-210 (GMS, Tokyo, Japan), Actimarker EW4800 (Panasonic Electric Works, Tokyo, Japan), and Active style proHJA-750C (Omron Healthcare, Kyoto, Japan). The Active style proHJA-750C (Omron Healthcare) has proven reliable and accurate in measuring physical activity beyond running and walking [17,18]. Unfortunately, there are very few studies that have used triaxial accelerometers to evaluate physical activity levels, including non-exercise activities, in people with cardiovascular disease [19,20].
This study aims to objectively evaluate the level of physical activity among patients with cardiovascular disease using a triaxial accelerometer and to clarify the actual amount of physical activity, including movements without horizontal displacement, among patients living in the community. Assessing such movements, which have not been adequately captured before, will lead to a more precise estimate of the physical activity levels of people living in the community. The findings of this study will provide fundamental data for developing strategies to regulate physical activity levels for individuals with cardiovascular disease.
Study Design, Setting, and Participants
This cohort study, conducted from April to October 2022, included participants aged 20 years and older who visited the cardiology department of a general hospital in Japan. Patients were included if they were undergoing any treatment to prevent disease recurrence or progression. Exclusion criteria encompassed patients classified as New York Heart Association (NYHA) functional class IV, with dementia, with psychiatric or orthopedic disease, or unable to manage physical activity meters [21]. NYHA class IV denotes a physical activity capacity of 2 metabolic equivalent tasks (METs) or less, which is lower than that of ordinary walking (corresponding to 3 METs). Patients with continually declining physical activity levels were also excluded from the study.
Cooperation and consent for the study were obtained from the relevant departmental heads at the hospital. A co-researcher selected participants from among the outpatients. The principal investigator or co-investigator provided written and oral explanations of the study's purpose, significance, and methods to the selected patients. Written patient consent was obtained. This study was approved by the Ethics Review Committee for Medical Research of the Graduate School of Medicine at Gifu University (Approval No. 2021-A223).
Data were collected from the medical records, questionnaires, and activity meters.
Investigation of Physical Activity
Physical activity was measured using an activity meter equipped with a 3-axis acceleration sensor (Activity Monitor Active Style Pro HJA-750C, Omron Healthcare, Kyoto, Japan). This activity meter can measure not only horizontal movements, such as walking and running, but also vertical movements, such as drying laundry, vacuuming, and lifting luggage. This enables the measurement of routine activities that cannot be captured by conventional accelerometers. The reliability of the measured values and the validity of the activity intensity have been verified in previous studies [17,18].
In this study, the amount of physical activity is expressed as "Ex: activity intensity (METs) × time (hours)". This is because, in Japan, the Ministry of Health, Labour, and Welfare uses "activity intensity (METs) × time (hours)" as an indicator of the amount of physical activity [22]. Physical activity is defined as the sum of "locomotive activities" (running/walking activities) and "household activities" (activities other than running/walking activities). "Locomotive activity" refers to activities that involve horizontal movement, such as slow walking, brisk walking, and jogging. In other words, any activity that includes walking is considered a "locomotive activity", whether it is for exercise purposes or just a part of daily physical activity. Furthermore, "household activity" refers to any movement that does not involve running or walking, such as cleaning, laundry, loading and unloading luggage, and sitting. These activities are also a part of daily physical activities.
Thus, the following information was collected from the activity meter: total daily physical activity (measurement unit: Ex), locomotive activity (measurement unit: Ex), household activity (measurement unit: Ex), and daily number of steps for the period of wearing the meter. The activity meter measured locomotive and household activity of three METs or more.
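The Ex computation described above (intensity in METs × time in hours, counting only activity of at least 3 METs, split into locomotive and household components) can be sketched as follows; the bout record format is a hypothetical illustration, not the Omron software's export format:

```python
def daily_ex(bouts, threshold_mets=3.0):
    """Sum activity volume (Ex = METs x hours) for bouts at or above
    the 3-MET threshold, split by activity type.

    bouts: list of (mets, minutes, kind) tuples, where kind is
    'locomotive' or 'household' (illustrative record format).
    """
    totals = {"locomotive": 0.0, "household": 0.0}
    for mets, minutes, kind in bouts:
        if mets >= threshold_mets:
            totals[kind] += mets * (minutes / 60.0)
    totals["total"] = totals["locomotive"] + totals["household"]
    return totals

day = [(3.0, 20, "locomotive"),   # 20 min brisk walking -> 1.0 Ex
       (3.3, 30, "household"),    # 30 min vacuuming -> 1.65 Ex
       (1.5, 120, "household")]   # sitting, below threshold -> ignored
print({k: round(v, 2) for k, v in daily_ex(day).items()})
# {'locomotive': 1.0, 'household': 1.65, 'total': 2.65}
```

The 23 Ex/week target mentioned later in the text corresponds to roughly 3.3 Ex of such activity per day.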
Participants were instructed to wear the activity meter on their waist for at least nine days, as often as possible, except during underwater activities (e.g., swimming pools and baths) and when sleeping. To prevent adjustment of physical activity levels due to awareness of the measurements, the activity meters were set to display only the time. At the end of the study period, patients returned the activity meters to a box set up in the hospital or sent them by mail to the principal investigator.
A total of 101 activity meters were used, and the data obtained from the activity meters were imported into the principal investigator's PC using dedicated software (HDV-TDH-160070, Omron Healthcare, Kyoto, Japan). The activity data for each participant were consolidated into a single spreadsheet using Microsoft Excel for Microsoft 365 MSO (ver. 2311) (Microsoft Corporation, Tokyo, Japan).
Questionnaire Survey
The questionnaire survey was conducted in an outpatient waiting room with adherence to strict COVID-19 infection control measures. Questions included sex, age, height, weight, current disease, cardiac rehabilitation, occupation, and household chores. Height and weight were measured using a height and weight scale in a medical treatment room. The researcher calculated the body mass index (BMI). The Japan Society for the Study of Obesity classifies a BMI of 22 as the appropriate (standard) weight, statistically the weight at which one is least likely to become ill; a BMI of 25 or higher is classified as obese, and a BMI of less than 18.5 as underweight [23].
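A minimal sketch of the BMI calculation and the classification cut-offs cited above (<18.5 underweight, ≥25 obese, with 22 as the standard weight); the function name is illustrative:

```python
def bmi_category(height_cm, weight_kg):
    """Classify BMI using the Japan Society for the Study of Obesity
    cut-offs cited in the text: <18.5 underweight, >=25 obese."""
    bmi = weight_kg / (height_cm / 100) ** 2  # weight (kg) / height (m) squared
    if bmi < 18.5:
        label = "underweight"
    elif bmi >= 25:
        label = "obese"
    else:
        label = "normal"
    return round(bmi, 1), label

print(bmi_category(160, 56.3))  # (22.0, 'normal') -- BMI 22 is the reference weight
```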
Statistical Analysis
Participant characteristics were expressed as mean ± SD for continuous variables and by frequency for categorical variables. Histograms were used to confirm that the distribution of all four physical activities was negatively skewed. The physical activity for each characteristic was summarized as median and interquartile range (IQR). To compare the physical activity between characteristic categories, the Mann-Whitney U test or the Kruskal-Wallis test was used. Cohen's d and Cliff's delta were also presented. Cliff's delta aids in evaluating the effect size between two groups when analyzing non-parametric data [24,25]. Physical activity was visually presented in a box plot for each characteristic category.
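Cliff's delta, mentioned above as the non-parametric effect size, is the difference between the probabilities that a value drawn from one group exceeds or falls below a value from the other. A minimal pure-Python sketch (the authors' analysis used EZR/R, not this code):

```python
def cliffs_delta(xs, ys):
    """Cliff's delta: P(x > y) - P(x < y) over all pairs; ranges
    from -1 to 1, with 0 meaning complete overlap of the groups."""
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))

# Group A stochastically dominates group B
print(cliffs_delta([5, 6, 7], [1, 2, 3]))   # 1.0
print(cliffs_delta([1, 2, 3], [1, 2, 3]))   # 0.0
```

Common rule-of-thumb thresholds treat |delta| below about 0.147 as negligible, which is why it complements the Mann-Whitney p-value for small samples.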
Restricted cubic spline curves with a knot number of 3 were used to represent the smooth nonlinear relationship between age and physical activity. Spearman's correlation coefficient was used to summarize the relationship between age and physical activity. The statistical software EZR version 1.55 (Saitama Medical Center, Jichi Medical University, Saitama, Japan) and R version 4.2.2 (R Foundation for Statistical Computing, Vienna, Austria) were used for all analyses. The level of significance was set at p < 0.05.
Participant Characteristics
Of the 105 participants, 4 were excluded from the study because of the loss or submersion of their activity meters. From the integrated data, we first extracted the data of participants who wore the device for more than 600 min per day. Next, data that included at least 2 days of data measured on weekdays and at least 1 day of data measured on holidays were extracted. Finally, the data from 58 patients were included in the analysis [26]. The participants' backgrounds are presented in Table 1. A total of 31 (53.4%) participants in this study were female. The mean age was 70.3 ± 12.1 years, and 44 (75.9%) patients were aged over 65 years. Heart failure was the most common primary disease in 13 patients (20.6%), followed by coronary artery disease and arrhythmia in 11 patients (17.5%), and lastly, valve disease was present in 10 patients (15.9%). Importantly, with regard to the stage of heart failure, 65.5% of patients were in stage A/B. While ten patients (17.2%) were currently undergoing cardiac rehabilitation, a greater proportion of patients were not. In relation to employment, 24 people (41.4%) answered yes, half of whom were males. With respect to housework, 37 participants (63.2%) answered yes, of whom nine were male.
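The data-validity rules described above (days with more than 600 min of wear time; at least 2 valid weekdays and 1 valid holiday per participant) can be sketched as a simple filter; the per-day record format here is hypothetical, not the exported spreadsheet layout:

```python
def valid_participant(days, min_wear_min=600, min_weekdays=2, min_holidays=1):
    """Apply the validity rules described in the text: keep only days
    with >= 600 min of wear, then require >= 2 valid weekdays and
    >= 1 valid holiday per participant.

    days: list of (wear_minutes, is_holiday) tuples (illustrative format).
    """
    valid = [(m, h) for m, h in days if m >= min_wear_min]
    weekdays = sum(1 for _, h in valid if not h)
    holidays = sum(1 for _, h in valid if h)
    return weekdays >= min_weekdays and holidays >= min_holidays

print(valid_participant([(700, False), (650, False), (610, True)]))  # True
print(valid_participant([(700, False), (650, False), (500, True)]))  # False: no valid holiday
```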
Comparison of Physical Activity and Basic Attributes
A comparison of physical activity and basic attributes is presented in Tables 2 and 3. In this study, household activities tended to be higher than locomotive activities. The number of steps taken was significantly higher when the participant was less than 65 years old and there was no housework (less than 65 years, p = 0.013; no housework, p = 0.025). Total physical activity was significantly higher in those aged <65 years and those with work (age < 65 years: p = 0.013, with work: p = 0.002). Locomotive activity was significantly higher for men and those without household chores (men: p = 0.019, without household chores: p = 0.003), while household activity was considerably higher for those under the age of 65 and with work (under age 65: p = 0.012, with work: p < 0.001). Although body mass index (BMI) did not differ significantly among each physical activity level, all physical activity items tended to be lower for those with a BMI below 18.5, compared with those above 18.5 (Figure 1). In addition, all physical activity items tended to be lower in patients with heart failure at stage C/D than in those at stage A/B (Figure 1). Specifically, household activities had lower median values compared to locomotive activities in stages C/D (Figure 1).
Physical Activity Level Associated with Age
Regarding physical activity and basic attributes, age was associated with three items: the number of steps taken, total physical activity, and household activities. Therefore, a nonlinear regression analysis was performed to evaluate the association between physical activity levels and age. Figure 2 shows the relationship between the physical activity levels and age. The correlation coefficients of household activity and total physical activity with respect to age were -0.433 and -0.361, respectively, and were significant (p = 0.001 and p = 0.013, respectively). In contrast, the correlation coefficients of the number of steps and locomotive activity with respect to age were -0.252 and -0.217, respectively, and were not significant (p = 0.224 and p = 0.114, respectively). This suggests a decline in household and total physical activities with increasing age, whereas no significant age-related trend was found for locomotive activity or the number of steps.
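Spearman's rank correlation reported above can be computed as the Pearson correlation of mid-ranks. A pure-Python sketch with illustrative (not study) data; the authors' actual analysis was done in R/EZR:

```python
def spearman_rho(xs, ys):
    """Spearman's rank correlation: Pearson correlation of the
    mid-ranks of the two samples (ties receive averaged ranks)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(v):
            j = i
            while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
                j += 1
            mid = (i + j) / 2 + 1          # average rank for a tie group
            for k in range(i, j + 1):
                r[order[k]] = mid
            i = j + 1
        return r

    rx, ry = ranks(xs), ranks(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Older age paired with strictly lower activity -> rho of -1
ages = [45, 55, 65, 75, 85]
activity = [5.0, 4.1, 3.0, 2.2, 1.0]
print(round(spearman_rho(ages, activity), 3))  # -1.0
```

Because it operates on ranks, the coefficient is robust to the skewed activity distributions noted in the statistical analysis section.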
Characteristics of Physical Activity
The goal for patients with chronic diseases is 6500-8500 steps per day [27]. However, the number of steps taken by the participants in this study ranged from 2166 to 5540 steps per day, which did not reach the goal for patients with chronic disease. A study performed by Saint-Maurice et al. reported that the average daily number of steps taken by adults aged 40 years and older is approximately 9200 steps per day and that a decrease to less than 2000 steps per day increases the risk of death from cardiovascular disease by approximately 51% [28]. Therefore, physical activity interventions that enable individuals to continue their current number of steps are required to reduce the mortality risk.
The Ministry of Health, Labour and Welfare recommends that physical activity in adults be at least 23 Ex/week [22]. However, the physical activity of the subjects in this study was far below this level. In terms of physical activity (Ex) by sex, women had higher total physical activity and household activity, while men had higher locomotive activity. These results are similar to those of the Hisayama Town study [29]. In another study, Hagino et al. investigated physical activity in people with chronic heart failure using an activity meter with a three-axis acceleration sensor and reported that household activity was higher than locomotive activity [19]. Our study showed that household activity tends to be higher than locomotive activity in individuals with cardiovascular diseases (including heart failure), which is consistent with the results of Hagino et al. [19]. When evaluating the physical activity of people with cardiovascular disease, it is crucial to consider not only the number of steps taken and locomotive activity but also household activity. Neglecting household activity may lead to an underestimation of the actual level of physical activity. However, the degree of physical activity, reflected as the cardiac load of a person, depends on their cardiac function, and the appropriate physical activity for individuals with cardiac dysfunction is determined by the anaerobic metabolic threshold. Therefore, further research on the presence or absence of physical activity surpassing the anaerobic metabolic threshold and its associated factors is also required.
It is possible that the three-axis accelerometer used in this study does not adequately assess physical activity centered on the upper body [30]. Therefore, it is highly likely that household activities, especially those involving more upper body movement, were underestimated. The current study was also unable to determine what types of physical activities were actually performed. However, considering the characteristics of the equipment used, it will be necessary to determine the actual content of physical activity in future work.
Physical Activity in Relation to Basic Attributes
In this study, the total physical and household activities were associated with age and work status. In contrast, the number of steps taken and locomotive activity were associated with the presence or absence of household chores. We expected household activity to be associated with the presence or absence of housework, but there were no significant differences. In this study, 40% of the participants were employed, with a male predominance, especially in the no-housework group. These findings suggest that the no-housework group may have included more men who had jobs; therefore, activities with vertical movements performed at work were reflected in household activities. Future research should examine the relationship between the specifics of work, housework, and physical activity. Nevertheless, to date, no study has clarified the relationship between physical activity and the presence/absence of work or housework; thus, the results of this study are novel.
As BMI is calculated by dividing the weight (kg) by the square of the height (m), it increases with weight gain. Katou et al. investigated the association between weight gain and physical activity in patients after acute myocardial infarction using a pedometer with an accelerometer (the Lifecorder GS, Suzuken, Aichi, Japan). The number of steps taken in the weight gain group was significantly lower than that in the maintenance group [31]. In our current study, physical activity levels were similar between participants with a normal BMI and those with a high BMI. However, individuals with a low BMI had lower physical activity levels. The risk of mortality from heart disease is linked not only with a high BMI but also with a low BMI [32], indicating the need for further investigation into the relationship between physical activity and a low BMI.
Patients with heart failure in stage A/B tended to take more steps and perform more physical activity than those in stage C/D, as heart function typically declines with the progression of heart failure stages. Future research should assess the presence or absence of physical activity above the anaerobic metabolic threshold and the related factors for each heart failure stage, as well as the relationship between physical activity and a participant's life background [33].
Finally, a decline in total physical and household activities with increasing age was noted. It has been suggested that activities involving vertical movements, such as standing, become less frequent with age. The findings of this study are important because research on the impact of age-related changes in physical activity in individuals with cardiovascular disease is scarce. In addition, previous studies have shown that both upper and lower limb muscle strength declines with age [34]. In this study, the decrease in vertical movement may contribute to the decline in lower limb muscle strength. However, further studies are needed to confirm this hypothesis.
Strengths and Limitations of the Study
The median number of steps, total physical activity, locomotive activity, and household activity of persons with cardiovascular disease attending outpatient clinics were 3515 steps per day and 3.32, 0.68, and 2.41 Ex/day, respectively. These results suggest that excluding household activity when assessing physical activity in people with cardiovascular disease may lead to an underestimation of physical activity levels. Additionally, the findings indicate that the total physical and household activities are associated with age and work status, and the number of steps taken and locomotive activity are associated with the presence or absence of household chores. In contrast, the BMI and heart failure stage were not associated with physical activity. Notably, the total physical and household activities tended to decrease with age.
This study had a few limitations. The study included 58 participants, a small sample size that could contribute to the lack of association observed among physical activity, BMI, and heart failure stage. Additional studies with larger sample sizes are warranted. Additionally, anaerobic metabolic thresholds were not measured in this study. Physical activity commensurate with cardiac function is crucial for individuals with cardiovascular diseases. Therefore, additional research is needed that focuses on physical activity commensurate with cardiac function. It is also believed that the environment, including factors such as the area of residence, family structure, and home size, can affect physical activity. However, although this study targeted people living in rural areas, it did not investigate elements such as family structure and house size. Given that physical activity is influenced by the environment, future research will also be necessary in relation to regional characteristics and living conditions. Finally, the activity meter used in this study may not adequately assess upper body-centered activity, which is a limitation of the device. Therefore, in the future, it would be beneficial to consider a method that can comprehensively evaluate physical activity.
Conclusions
For the first time, we quantitatively assessed the physical activity of individuals with cardiovascular disease at an outpatient clinic using a triaxial accelerometer. The strength of this study lies in its novelty, considering that few studies have quantitatively evaluated physical and household activities. In addition, no studies have clarified the relationship between physical and social activities, such as work and housework. Therefore, the findings of this study may provide clinical evidence to support physical activity intervention strategies in patients with cardiovascular disease.
Figure 1. Distributions of physical activity ((a) number of steps, (b) total physical activity, (c) locomotive activity, and (d) household activity) by participants' characteristics. The whiskers extend from the upper and lower quartiles to the furthest points within 1.5 times the interquartile range (IQR). Data points that fall outside this range are plotted as dots.
Figure 2. Nonlinear prediction of physical activity ((a) number of steps, (b) locomotive activity, (c) household activity, and (d) total physical activity) by age. Prediction plots were estimated using restricted cubic spline curves with a knot number of 3. The average predicted value is denoted by the solid line, and the gray shaded area shows the 95% confidence interval.
Table 2. Comparison of physical activity (number of steps, total physical activity) by participants' characteristics.
* Significance: a , the Mann-Whitney U test; b , the Kruskal-Wallis test.
Table 3. Comparison of physical activity (locomotive activity, household activity) by participants' characteristics.
* Significance: a, the Mann-Whitney U test; b, the Kruskal-Wallis test.
Complexions in a modified Langmuir-McLean model of grain boundary segregation
The Langmuir-McLean isotherm is often interpreted as providing an approximation to the most probable grain boundary segregation as a function of the bulk mole solute fraction $x_B$, even though $x_B$ is not an independent parameter in the free energy minimization on which it is based. In this paper it is shown that the most probable segregation for a specified $x_B$ differs from the standard Langmuir-McLean relation. Numerical solution of the derived equation suggests that two potentially stable interface compositions are associated with most bulk compositions. One solution represents a state with an excess of solute along the boundary relative to the bulk, while the other represents a deficit. The vacancy content ratio between the interface and the bulk plays a large role in determining the shape of the derived isotherm.
Introduction
Segregation is the process of grouping impurities and structural defects together in a material system [1,2]. Any free energy reduction associated with segregation can be leveraged to stabilize a desired defect structure, allowing materials engineers to "bake in" what would otherwise be transitory material properties that depend on the dominant defect population [3,4]. Segregation has therefore been the subject of much research in metallurgy and materials science from its initial roots [5] to the present day [6,7,8,9]. The simplest model of equilibrium segregation along a grain boundary is given by the Langmuir-McLean isotherm [10,11]

$\Gamma_B/(\Gamma_0 - \Gamma_B) = (x_B/x_A)\exp(-\delta G/k_B T)$,   (1)

where $\delta G$ is the segregation free energy per solute atom, $k_B$ is Boltzmann's constant, $T$ is the ambient temperature, $\Gamma_B$ is the number density of solute atoms segregated to the boundary with maximal value $\Gamma_0$, and $x_B$ and $x_A$ are the mole fractions of components B and A in the bulk.
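As a quick numerical illustration (a sketch, not from the paper; it assumes the standard Langmuir-McLean form $\Gamma_B/(\Gamma_0 - \Gamma_B) = (x_B/x_A)\exp(-\delta G/k_B T)$, and the helper name is invented), the fractional boundary coverage implied by the isotherm can be computed in closed form:

```python
import math

def lm_coverage(x_B, dG_over_kT):
    """Fractional boundary coverage Gamma_B / Gamma_0 from the assumed
    Langmuir-McLean form, solved for Gamma_B.
    x_B is the bulk solute mole fraction; x_A = 1 - x_B."""
    x_A = 1.0 - x_B
    K = math.exp(-dG_over_kT) * x_B / x_A  # equals Gamma_B / (Gamma_0 - Gamma_B)
    return K / (1.0 + K)
```

For $\delta G = 0$ and $x_B = 0.5$ the coverage is exactly one half, and making $\delta G$ more negative drives the boundary toward saturation.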
The Langmuir-McLean isotherm serves as a common touchpoint for a number of segregation models proposed over the years on the basis of more complicated assumptions. A few of the more prominent models include the Fowler-Guggenheim isotherm [12], which considers the influence of solute-solute interactions in the interface, and the Seah-Hondros isotherm [13], derived on the basis of solid-state theoretic methods.
Despite these efforts, there remains a significant discrepancy between the observed and predicted segregation to interfaces in many real materials [14]. In this paper I present a simple modification to the Langmuir-McLean model that yields very different predictions.
Model derivation
Equation (1) results from analyzing a two-state model of a grain boundary in which impurity atoms of component B are either segregated to the interface or are free to roam in a bulk matrix consisting of atoms of component A. The same two-state system will be considered in this work. A brief outline of the steps involved in the derivation of equation (1) will be presented before indicating the changes proposed in this paper.
Distribute n_A = n_A0 + n_A1 atoms of component A and n_B = n_B0 + n_B1 atoms of component B among N_1 indistinguishable bulk sites and N_0 indistinguishable interface sites, such that n_B0 and n_A0 are situated on the interface while the rest remain in the bulk. The quantity of primary interest is n_B0, because this represents the number of atoms of component B segregated to the interface. Subscript 1 will indicate bulk quantities; subscript 0 will indicate interface quantities.
The most probable number of segregated solute atoms, n_B0, minimizes the Helmholtz free energy F = U − TS in the NVT ensemble. To find this minimum, we can express both the internal energy U and the entropy S as functions of n_B0 and evaluate ∂F/∂n_B0 = 0. An expression for the entropy S = k_B ln Ω may be determined by counting the total number Ω of indistinguishable configurations of the system that correspond to a specified system configuration {n_A0, n_A1, n_B0, n_B1}. Basic combinatorics yields

Ω = [N_1!/(n_A1! n_B1! n_V1!)] [N_0!/(n_A0! n_B0! n_V0!)],   (2)

where n_V1 = N_1 − n_A1 − n_B1 and n_V0 = N_0 − n_A0 − n_B0 are the numbers of vacant sites in the bulk and in the interface. Using this expression, it can be shown that the general solution to ∂F/∂n_B0 = 0 in the Stirling approximation satisfies

exp(−δG/k_B T) = ∏_α n_α^{n′_α},   (3)

where δG = ∂U/∂n_B0 is the segregation free energy, primes indicate differentiation with respect to n_B0, and the product runs over the six population variables. In order to evaluate the primed exponents that appear in equation (3), it is necessary to specify how each variable depends on n_B0. The Langmuir-McLean isotherm follows from imposing the constraints

n_A0 + n_A1 = n_A,   (4)
n_B0 + n_B1 = n_B,   (5)
n_A0 + n_B0 + n_V0 = N_0,   (6)
n_A1 + n_B1 + n_V1 = N_1,   (7)
n_V0 = 0,   (8)

where all quantities on the right-hand side are considered to be independent of n_B0. The first two constraints follow from conservation of atom number by component; the second two from conservation of site number; and the last constraint neglects vacancies in the interface. From these constraints, we can see that n′_A0 = −1, n′_A1 = 1, n′_B1 = −1, n′_V0 = 0, and n′_V1 = 0. Substituting these values into equation (3) and rearranging leads to equation (1), after identifying

Γ_B/Γ_0 = n_B0/N_0,  x_B = n_B1/(n_A1 + n_B1),  x_A = n_A1/(n_A1 + n_B1).   (9)

A similar result may be obtained by replacing the final constraint (8) with the equation n_A1 + n_B1 = n_1, which permits the interface and the bulk to exchange an atom of component A for an atom of component B while allowing no change in the total number n_1 of atoms in the bulk. This relation leads to the same set of exponents as in the previous case, but the interface vacancy content is no longer necessarily zero.
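As a consistency check (a sketch assuming the stationarity condition takes the product form exp(−δG/k_BT) = ∏_α n_α^{n′_α}, with n′_B0 = 1), substituting the Langmuir-McLean exponent set recovers equation (1):

```latex
\exp\!\left(-\frac{\delta G}{k_B T}\right)
  = \frac{n_{B0}\, n_{A1}}{n_{A0}\, n_{B1}}
\quad\Longrightarrow\quad
\frac{n_{B0}}{N_0 - n_{B0}}
  = \frac{n_{B1}}{n_{A1}}\,
    \exp\!\left(-\frac{\delta G}{k_B T}\right)
  = \frac{x_B}{x_A}\,
    \exp\!\left(-\frac{\delta G}{k_B T}\right),
```

using n_A0 = N_0 − n_B0 from site conservation with n_V0 = 0; the left-hand side of the final relation is Γ_B/(Γ_0 − Γ_B).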
As a consequence we must write Γ_V = Γ_0/(1 + x_V0) in place of Γ_0, where x_V0 is the interface vacancy mole fraction. This constraint accounts for interface vacancies by normalizing the maximum segregation to match a given interface vacancy content. In either case, the nature of the final constraint indicates that this relation best models segregation in a system that does not allow variable vacancy content, whether in the interface or in the bulk. The Langmuir-McLean isotherm therefore represents the most probable segregation for a given bulk atom density n_1.

Let us instead seek the most probable segregation for a specified bulk impurity composition x_B1. To do so, consider replacing constraint (8) with the equation n_B1/n_A1 = r, where r is a fixed positive number. Fixing r also fixes x_B1 because x_B1 = r/(1 + r). It can be shown that in this case we have n′_A0 = 1/r, n′_A1 = −1/r, n′_B1 = −1, n′_V0 = −(1 + r)/r, and n′_V1 = (1 + r)/r, leading to

exp(−δG/k_B T) = (n_B0/n_B1)(n_A0/n_A1)^{1/r}(n_V1/n_V0)^{(1+r)/r},   (10)

or in terms of mole fractions,

exp(−δG/k_B T) = (x_B0/x_B1)(x_A0/x_A1)^{1/r}(x_V1/x_V0)^{(1+r)/r}.   (11)

This system is constrained such that if the bulk loses a single solute atom to the interface, it must also lose 1/r solvent atoms to maintain a constant composition, and so gain (1 + r)/r vacancies. Equation (11) is the central focus of this study. In the following I present numerical solutions and discuss some of its implications.
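The two solution branches can be explored numerically. The sketch below (illustrative helper names; it assumes the reconstructed mole-fraction form exp(−δG/k_BT) = (x_B0/x_B1)(x_A0/x_A1)^{1/r}(x_V1/x_V0)^{(1+r)/r}, with x_A = 1 − x_B on each sublattice and ν = x_V0/x_V1 held fixed) brackets sign changes of the residual on a grid and refines them by bisection:

```python
import math

def residual(xB0, xB1, dG_over_kT, nu):
    """Residual of the assumed isotherm:
    (xB0/xB1) * (xA0/xA1)**(1/r) * nu**(-(1+r)/r) - exp(-dG/kT),
    with r = xB1/(1 - xB1) and xA = 1 - xB on each side."""
    r = xB1 / (1.0 - xB1)
    lhs = (xB0 / xB1) * ((1.0 - xB0) / (1.0 - xB1)) ** (1.0 / r) \
          * nu ** (-(1.0 + r) / r)
    return lhs - math.exp(-dG_over_kT)

def solve_branches(xB1, dG_over_kT, nu=1.0, n_grid=2000):
    """Scan xB0 in (0, 1) for sign changes of the residual and refine
    each bracketed root by bisection.  Up to two roots (complexions)."""
    xs = [(i + 0.5) / n_grid for i in range(n_grid)]
    roots = []
    for a, b in zip(xs, xs[1:]):
        fa = residual(a, xB1, dG_over_kT, nu)
        fb = residual(b, xB1, dG_over_kT, nu)
        if fa == 0.0:
            roots.append(a)
        elif fa * fb < 0.0:
            for _ in range(60):  # bisection refinement
                m = 0.5 * (a + b)
                if fa * residual(m, xB1, dG_over_kT, nu) <= 0.0:
                    b = m
                else:
                    a, fa = m, residual(m, xB1, dG_over_kT, nu)
            roots.append(0.5 * (a + b))
    return roots
```

For x_B1 = 0.3, δG/k_BT = +1.5, and ν = 1, this finds one solute-depleted root below x_B1 and one solute-enriched root above it, consistent with the two branches presented in the numerical analysis.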
Numerical analysis
To investigate the extent to which the solutions to (1) and (11) differ, I have determined interface compositions x_B0 that satisfy equation (11) as a function of bulk composition x_B1 for specific values of ν = x_V0/x_V1 and δG/k_BT using numerical techniques. Explicitly, I have evaluated the residual of equation (11) on a grid and interpolated to find the set of points (x_0, y_0) at which it vanishes. For fixed ν and δG, the points (x_0, y_0) represent solutions to equation (11), with y_0 = x_B0 = Γ_B/Γ_V for a specified bulk composition x_0 = x_B1.

A typical solution set is plotted in Figure (1). It can be seen that across a wide range of compositions the lower curve predicts segregation at rates lower than those of the Langmuir-McLean isotherm. The most striking difference, however, is the appearance of a second branch, as indicated by the blue curve with round bullets. In this figure the second branch tracks LM^{-1} in the dilute limit. Unlike LM and LM^{-1}, which correspond to oppositely signed segregation free energies, both red and blue curves represent solutions to equation (11) for δG/k_BT = +1.5. The red curve with square bullets represents a solution with a lower concentration of solute in the interface than in the bulk (x_B0 < x_B1), whereas the blue curve with round bullets represents a solution with augmented solute content in the interface (x_B0 > x_B1).
The nature of the isotherm fundamentally alters if the vacancy ratio ν differs from unity. I illustrate the dependence on ν in Figure (2), where solutions obtained for the same segregation free energy δG/k_BT = +1.0 but differing values of ν are plotted. Figure (2a) depicts variations that occur for ν ≤ 1, when the mole fraction of vacancies in the bulk exceeds that in the interface. As ν decreases, it can be seen that a gap opens in the upper branch (blue) along the x_B0 axis, suggesting a minimal segregation level on that branch. The lower branch shifts uniformly downward as ν decreases, indicating reduced segregation roughly proportional to ν.
A system for which the interface vacancy content exceeds the bulk vacancy content (ν > 1) is depicted in Figure (2b). It can be seen that the upper and lower branches pull away from the origin and merge as ν increases, opening up a gap along the x_B1 axis in which no potentially stable solutions x_B0 exist, apart from x_B0 = 0 or x_B0 = 1. Calculations suggest that this gap exists even for small excursions in ν above 1.
The solutions to equation (11) exhibit markedly different behavior than the Langmuir-McLean isotherm under segregation free energy sign reversal (Figure (3a)). The dependence on ν for negative segregation free energy solutions is explored in Figure (3b). All of the curves plotted in Figure (3b) were obtained using δG/k_BT = −1.0, except for the curve LM, which is the Langmuir-McLean isotherm for δG/k_BT = +1.0. The curves are labeled with the associated value of ν. As ν increases toward 1, the isotherm pulls away from the point (1, 1) and contracts toward the origin. For ν = 1 the isotherm vanishes. No solutions exist for δG < 0 and ν ≥ 1, apart from x_B0 = 0 or x_B0 = 1.
Discussion
To more readily compare equations (1) and (11), note that we can express (11) as

x_B0 = x_B1 exp[−δG/k_B T − ln k],  k ≡ (x_A0/x_A1)^{1/r}(x_V1/x_V0)^{(1+r)/r},   (14)

with x_B0 = Γ_B/Γ_V and Γ_V = Γ_0/(1 + x_V0) as before. It can be seen from this equation that if the composition and vacancy content are similar in the interface and the bulk, then the ratio inside the logarithm is close to unity, reproducing the original Langmuir-McLean relation. This ratio may be recognized as an approximation to the equilibrium rate constant k for the interaction in which the interface and the bulk exchange component A and vacancies. We would therefore expect k ≈ 1 when the standard formation energies for vacancies and component A do not differ much between bulk and interface sites. Otherwise, unconditional reduction to the Langmuir-McLean form requires r = −1, which is unphysical.

When k ≠ 1, the numerical analysis presented in the previous section indicates qualitative differences between segregation described by the Langmuir-McLean model and by equation (11). The appearance of two stable states, or complexions, over a broad range of values indicates that there are two different system configurations that can accommodate chemical differences between the interface and the surrounding bulk. The nature of these two configurations is unclear, apart from the fact that one is enriched, and one depleted, in segregated solute, relative to the bulk. From a purely mathematical perspective, these configurations result from the fact that x(1 − x)^{1/r} = C can admit two distinct solutions, where r and C are constants.
The appearance of two branches in solutions to equation (11) is not uncommon. Each branch indicates a set of points such that F′ = 0. The stability of each state can be determined by evaluating the second derivative F″. In the Langmuir-McLean approximation, with U″ = 0, we obtain

F″ = k_B T Σ_α (n′_α)²/n_α,   (15)

from which it follows that both branches represent potentially stable solutions as long as the {n_α} are all positive. In a more realistic model, U″ < 0 could potentially modify the stability of either branch. We might instead expect one stable branch and one unstable branch, and so it is important to question whether both branches are physically relevant. Due to the introduction of mole fractions, no mechanism exists in the formalism to guarantee that all component population variables {n_α} remain individually positive in the solution. Indeed, negative population variables easily appear as solutions to the traditional Langmuir-McLean equation (1) for most values of x_B once concrete values are specified for the model parameters. In the current model, negative values could lead to F″ < 0 in equation (15), resulting in an unphysical solution that appears to be thermodynamically viable.
Therefore let us investigate whether either solution requires negative population variables. At every point along either branch it is clear that 0 < x_B0 < 1 and 0 < x_A0 < 1, so that n_B0 and n_A0 must be either both positive or both negative. Also, n_V0 must have the same sign as n_B0 and n_A0 when we provide an appropriate value for x_V0 to define the quantity Γ_V = Γ_0/(1 + x_V0). The same considerations apply for n_A1, n_B1, and n_V1, except that the sign of n_V1 is linked to the sign of n_V0 through the quantity ν = x_V0/x_V1; only positive values for ν have been considered in this work. These considerations suggest that all quantities in the solution are either all positive or all negative. But it is clear that equation (11) is invariant under a transformation that inverts the sign of all population variables. If {n_α} is a solution, so is {−n_α}. The corresponding mole fractions are positive in both cases and satisfy the same equation.
Each of these branches therefore represents a set of potentially stable, physically relevant solutions to equation (11). As in the Langmuir-McLean case, however, the entire domain is most likely not accessible once concrete parameters have been specified. It seems probable that when the system finds itself in one of these two states, the second state becomes both unphysical and unstable, corresponding to negative {n_α} and F″ < 0.
The free energy F and its derivatives F′ and F″ in the Stirling approximation become difficult to define along the borders, where at least one population variable equals zero. The nature of the limiting behavior of the system at the poles (0,0) and (1,1) clearly influences the shape of the global isotherm. The value of ν and the sign of δG appear to control whether the system is attracted to or repelled from these poles, and to what extent. This suggests that the local value of ν plays a large role in controlling the dynamics of segregation.
This model may be most appropriate in systems that exhibit a preferred bulk solute content x_B1. On the other hand, the constant mole fraction constraint on which it is based is better aligned with the interpretation that it provides the most probable segregation for a specified bulk composition. Regardless of its applicability, the substantial departure observed from Langmuir-McLean behavior indicates the critical role that the vacancy constraint plays in determining the shape of the Langmuir-McLean isotherm.
Summary
In this work I have presented a simple modification to the Langmuir-McLean model of grain boundary segregation, leading to equation (11). In contrast to the Langmuir-McLean model, the proposed model allows the interface and the bulk to exchange vacancies as well as atoms to determine the most probable segregation given a specified bulk mole solute content x_B1.
Numerical analysis indicates that this modification has a large effect on the predicted segregation. In particular, two complexions appear across a wide range of bulk compositions, corresponding to solute enrichment or deficiency relative to the bulk. The ratio of the vacancy mole fraction in the interface to the vacancy mole fraction in the bulk assumes a prominent role in determining the shape of the isotherm.

Figure 1. Solutions to equation (11) for segregation free energy δG = +1.5k_BT, plotted versus mole fraction of bulk solute x_B1, with equal vacancy mole fractions in the bulk and in the interface (x_V0 = x_V1). The Langmuir-McLean isotherms for δG = +1.5k_BT and δG = −1.5k_BT are labeled LM and LM^{-1}, respectively. All curves are vacancy-normalized, with Γ_V = Γ_0/(1 + x_V0). Both upper and lower curves are solutions to equation (11) for δG = +1.5k_BT. With each value x_B1 is associated two possible stable compositions, or complexions: one on the blue curve and one on the red curve.
Attention Bias to Pain Words Comes Early and Cognitive Load Matters: Evidence from an ERP Study on Experimental Pain
Attention bias (AB) is a common cognitive challenge for patients with pain. In this study, we tested at what stage AB to pain occurs in participants with experimental pain (EP) and whether cognitive load interferes with it. We recruited 40 healthy adults aged 18-27 years and randomized them into control and EP groups. We sprayed the participants in the EP group with 10% capsaicin paste to mimic acute pain and those in the control group with water, and assessed behavioral and event-related potential data in both groups. We found that high-load tasks had longer response times and lower accuracies than low-load tasks and that neural processing of words differed between the groups. The EP group exhibited AB to pain at an early stage, with both attentional avoidance (N1 latency) and facilitated attention (P2 amplitude) to pain words. The control group coped with semantic differentiation (N1) first, followed by pain word discrimination (P2). In addition, AB to pain occurred only in low-load tasks. As the cognitive load multiplied, we did not find AB in the EP group. Therefore, our study adds further evidence for AB to pain and suggests implementing cognitive load in future AB therapy.
Introduction
Attention bias (AB) refers to different allocations of attention to certain types of stimuli [1]. AB to pain-related materials is a common phenomenon in patients with pain [2], and attention bias modification (ABM) is now a novel treatment for pain. However, ABM has yielded contradictory results, with some studies reporting analgesic effects [3][4][5] and others no substantive improvement [6]. As such, more information regarding AB is needed.
When does AB to pain occur? Studies with healthy persons and patients with anxiety found that AB to negative stimuli occurred at an early stage of information processing [7][8][9]. However, evidence for AB to pain remains limited and inconsistent. For example, Knost et al. [10] reported that, compared with healthy persons, patients with chronic pain showed a stronger early N100 component to pain-related words, and that pain-related words induced stronger late slow waves than neutral words did in these patients. Sitges et al. [11] revealed that in patients with pain, pain-related words elicited significantly larger positive event-related potential (ERP) amplitudes than pleasant words did, but this effect was not confined to a particular time period. To date, no research has provided a clear understanding of when AB to pain occurs.
Cognitive load is a potential factor influencing pain perception [12][13][14][15]. It is generally believed that, compared with low-load cognitive tasks, medium-to-high-load cognitive tasks compete with pain for more attention and thus have more obvious analgesic effects [16][17][18]. Scholars have further confirmed this finding using functional magnetic resonance imaging (fMRI), showing that complicated tasks can activate or deactivate certain brain areas related to pain [19][20][21]. However, what happens to AB under different cognitive loads? If a high cognitive load affects one's AB, then the load factor could conceivably be used as a new type of intervention.
Building on the above research, we suggest that AB to pain probably occurs at an early stage in persons with pain, and that cognitive load may influence AB. To test this hypothesis, we studied a healthy control group and an experimental pain (EP) group using a working-memory cognitive task paradigm, with four types of words interspersed as non-target interfering stimuli. We included high and low cognitive loads and analyzed behavioral results and ERP data in both groups. We hypothesized that AB to pain stimuli would occur early in the EP group, reflected in significant differences in early ERP components among stimuli and between groups, and that cognitive load would affect AB such that the higher the load, the less obvious the AB.
Materials and Methods
2.1. Participants. We recruited 40 healthy students (20 men and 20 women; aged 18-27 years) from Southern Medical University. The inclusion criteria were being right-handed, fluency in Chinese, and normal or corrected-to-normal vision. The exclusion criteria were having a diagnosis of or receiving treatment for a psychiatric disorder currently or within the past 5 years or regularly taking any psychotropic or analgesic medications. All students gave their written informed consent to participate in the study, which was approved by the Ethics Committee of Zhujiang Hospital, Southern Medical University.
We obtained participant demographic data during the evaluation session before the experiments. We collected scores from the Hospital Anxiety and Depression Scale (HADS) and the State-Trait Anxiety Inventory (STAI) afterwards.
2.2. Experimental Protocol. The experiment followed a 2 group (control and EP) × 2 cognitive load (high and low) × 4 interfering word (neutral, positive, negative, and pain) design. We randomized the participants into EP and control groups. EP participants were sprayed with 10% capsaicin paste (Professional Arts Pharmacy) on the left inner forearm, which was then covered with plastic wrap to mimic a sense of acute pain. The control participants were sprayed with pure water and likewise covered with plastic wrap. Cognitive load was distinguished by the length of a number string (6 digits for high load, 2 for low load). All 400 digit strings were generated by a random number generator, with 50% of each length.
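As an illustration of the load manipulation (a sketch with a hypothetical helper name; the original stimuli were programmed in E-Prime, not Python), the randomized digit strings described above could be generated as:

```python
import random

def make_load_strings(n_trials=400, seed=0):
    """Generate n_trials digit strings, half of length 6 (high load)
    and half of length 2 (low load), in randomized order."""
    rng = random.Random(seed)
    lengths = [6] * (n_trials // 2) + [2] * (n_trials // 2)
    rng.shuffle(lengths)
    return ["".join(rng.choice("0123456789") for _ in range(n)) for n in lengths]
```

Fixing the seed makes the trial list reproducible across participants if desired.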
Interfering word types included neutral, positive, negative, and pain. We selected 25 neutral, positive, and negative words, respectively, from the modern Chinese lexicon in the Chinese emotional material database, with the low-level features (i.e., valence, arousal rating, dominance, and familiarity) matched across types. Sensory pain words (e.g., dull pain) were selected from the Chinese version of the McGill Pain Questionnaire (MPQ).
Because the pain words in the MPQ were not numerous enough to provide sufficient stimuli for ERP research, we presented all 25 words of each type in 4 forms with 4 different colors (blue, green, red, and yellow), resulting in 100 words in each word category. Color saturation and brightness were matched to eliminate differences between the colors. Words and cognitive loads were randomly combined.
We used the visual analogue scale (VAS) to score pain before and during the experiment. If a participant's pain perception was lower than 4/10 on the VAS, we resprayed the pain-causing substance on the left inner forearm to maintain a pain perception of >4/10. E-Prime version 3.0 software was used for experimental programming. Each trial began with a fixation point "+" (200 ms), followed by a sequence of digital loads (300 ms) and a "......" screen to maintain the width of attention (300 ms), an empty screen (600-800 ms), a word interference screen (1,000 ms), a black screen (600-800 ms), and a selection screen (≤2,000 ms), which consisted of 2 numbers of the same load length, only 1 of which had been presented before. On the selection screen, we asked the participants to respond as quickly and correctly as possible whether the previously presented number was on the left-hand side (pressing the "F" key with the left index finger) or on the right-hand side (pressing the "J" key with the right index finger). Finally, we used a black screen (600-800 ms) to end the trial (see Figure 1, upper right).
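The trial sequence above can be sketched as a simple timeline (a minimal sketch; screen names and the jitter helper are illustrative, not taken from the E-Prime script):

```python
import random

def build_trial_timeline(rng=None):
    """Screen sequence for one trial with durations in ms; the intervals
    given as 600-800 ms in the text are jittered uniformly."""
    rng = rng or random.Random(0)
    jitter = lambda: rng.randint(600, 800)
    return [
        ("fixation", 200),
        ("digit_load", 300),
        ("attention_dots", 300),   # the "......" screen
        ("blank", jitter()),
        ("interfering_word", 1000),
        ("blank", jitter()),
        ("selection", 2000),       # response window, at most 2,000 ms
        ("blank", jitter()),
    ]
```

Listing the screens this way makes the per-trial duration easy to audit against the reported 40-60 min session length.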
Before the formal experiment, the participants completed 24 practice trials to familiarize themselves with the tasks. Because more trials yield more reliable ERP data, all participants completed 400 trials in the formal experiment, which we divided into 8 blocks with 50 trials per block. We set the interval between two blocks at 2 min. The entire experiment took 40-60 min. Figure 1 shows the experimental procedure.
2.3. EEG Recording and Processing. We recorded electroencephalography (EEG) using a 32-channel cap according to the International 10-20 Electrode Placement System (BioSemi). While recording the EEG online, we used the average value of the bilateral mastoids as a reference and the AFz electrode as the ground electrode. We made vertical electrooculogram (EOG) recordings using electrodes placed above and below the left eye and recorded horizontal eye movements using electrodes placed over the outer canthus of each eye. EEG signals were filtered at 0.05-100 Hz, with a 512 Hz sampling rate. All interelectrode impedances were maintained below 5 kΩ.
Because we mainly examined the effects of the interfering words, the onset of the interfering word screen was used as the stimulus onset, and epochs from 100 ms before to 600 ms after onset were analyzed. We used the 100 ms waveform before the 0 point as the baseline. ERP processing was conducted with MATLAB R2013a (RRID:SCR_001622; MATLAB) and EEGLAB 12.0 (RRID:SCR_007292; EEGLAB). After reducing the sampling rate to 500 Hz, we filtered the data through 0.1-40 Hz.
Continuous data were segmented into the epochs mentioned above. First, bad epochs were marked if more than 20% of individual electrodes contained artifacts, and files that contained more than 10 bad channels were discarded from further analysis. Additionally, we performed trial-by-trial visual inspection to ensure that trials with large interference or an unstable baseline were rejected. Independent component analysis (ICA) was conducted to eliminate EOG and electromyogram activity after we removed the bad epochs and channels and interpolated the electrodes with high noise. Any epochs with voltage values exceeding ±100 μV were rejected accordingly. Only the epochs of the correct responses were averaged. Finally, each condition retained enough epochs (at least 30) for averaging to ensure the reliability of the data.
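A minimal numpy sketch of the baseline correction and ±100 μV rejection described above (synthetic array shapes; the 500 Hz sampling rate and 100 ms pre-stimulus baseline are from the text, everything else is illustrative):

```python
import numpy as np

FS = 500               # Hz, after downsampling
BASELINE_SAMPLES = 50  # 100 ms pre-stimulus at 500 Hz
REJECT_UV = 100.0      # +/- 100 microvolt rejection criterion

def preprocess_epochs(epochs):
    """epochs: array (n_epochs, n_channels, n_samples), in microvolts,
    whose first BASELINE_SAMPLES samples precede stimulus onset.
    Subtract the per-channel baseline mean, then drop any epoch whose
    absolute voltage exceeds REJECT_UV anywhere."""
    baseline = epochs[:, :, :BASELINE_SAMPLES].mean(axis=2, keepdims=True)
    corrected = epochs - baseline
    keep = np.abs(corrected).max(axis=(1, 2)) <= REJECT_UV
    return corrected[keep], keep
```

Applying the threshold after baseline correction, as here, prevents a large DC offset alone from triggering rejection.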
2.4. Data and Statistical Analysis
2.4.1. Behavioral Data. We compared the behavioral indices related to the four types of words. E-Prime 3.0 was used to extract the indices response time (RT) and accuracy (AC). Trials with incorrect responses and nonresponses were deleted. We conducted three-way repeated measures analyses of variance (RM-ANOVAs; SPSS version 20.0; IBM) to examine the differences in RT and AC as a function of the interfering word, cognitive load, and group, with group as the between-participant factor and cognitive load and interfering word as the within-participant factors.
2.4.2. ERP Data. After examining the grand-averaged waveforms in our study and referring to those in previous studies [7,22], we set the time windows for the ERP components as follows: N1, 70-170 ms, with a peak at about 120 ms; P2, 150-250 ms, with a peak at about 200 ms; and N3, 250-350 ms, with a peak at about 300 ms. We included nine electrodes in the analysis on the basis of previous findings, covering the frontal (F3, Fz, and F4), central (C3, Cz, and C4), and parietal (P3, Pz, and P4) sites, as reported elsewhere [22,23]. Amplitudes and latencies of N1, P2, and N3 were subjected to four-way RM-ANOVAs, with cognitive load, interfering word, and electrode site as the within-participant factors and group as the between-participant factor. Three-way RM-ANOVAs were conducted if any interactive effects occurred, followed by two-way RM-ANOVAs. Simple effect analysis was performed if interactions between any of the variables were significant. Bonferroni adjustments for multiple comparisons were used for post hoc analyses. Probability values were corrected using the Greenhouse-Geisser correction for multiple degrees of freedom when violations of the sphericity assumption occurred.

Results

Table 1 lists the basic participant information. Because this study is part of a larger study, the participant information is the same as in our previous work [24].
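The component windows above (N1: 70-170 ms, P2: 150-250 ms, N3: 250-350 ms) can be applied to an averaged waveform with a small helper (a sketch assuming the 500 Hz sampling rate and 100 ms pre-stimulus baseline from the text; the function name is illustrative):

```python
import numpy as np

FS = 500          # Hz
PRESTIM_MS = 100  # epoch starts 100 ms before stimulus onset

def peak_in_window(erp, t0_ms, t1_ms, polarity):
    """Return (amplitude, latency_ms) of the most extreme point of the
    averaged waveform `erp` inside [t0_ms, t1_ms] after stimulus onset.
    polarity = -1 for negative components (N1, N3), +1 for P2."""
    i0 = (PRESTIM_MS + t0_ms) * FS // 1000
    i1 = (PRESTIM_MS + t1_ms) * FS // 1000
    window = erp[i0:i1]
    k = int(np.argmax(polarity * window))
    latency_ms = (i0 + k) * 1000 / FS - PRESTIM_MS
    return window[k], latency_ms
```

The same call, with the appropriate window and polarity, yields the amplitude and latency measures entered into the RM-ANOVAs.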
Control Group.
We conducted three-way RM-ANOVAs with load, word, and electrode site as the within-participant factors. Figure 3 shows the latency (a-c) and amplitude (d-f) results. Figure 4 shows the grand-averaged ERPs and topographic maps.
Figure 3: T-bar plots of the ERP components in the control group. The word difference between neutral words and the other words was significant in N1 latency and amplitude (a, d). The word difference between pain words and the other words was significant in P2 latency and amplitude (b, e). The word difference between neutral words and the other words was not significant in N3 latency (c) but was significant in N3 amplitude (f). * means P < 0.05.
Discussion
Many studies have examined differences in the processing of different word types at the behavioral level, but the evidence remains insufficient for pain words in persons with pain. Our findings supported those of previous studies reporting facilitative processing of special words [7,22,25] and added new evidence for a particular group of people.
Because behavioral performance in this study resulted from a combination of stimulus word interference and decision-making, the interfering effect of words cannot be judged from the behavioral results alone. However, a significant load difference in both RT and AC indicated that the load factor was properly applied in this experiment. In addition, the ERP results provided more detailed information about the neural processing of words in the EP and control groups.
AB to Pain Words in the EP Group Occurred Early.
According to previous research focusing on emotional words or faces [7,26], neural processing can be categorized into three stages. In stage 1, negative words or faces are distinguished early (N1 or P1); in stage 2, neutral and emotional words or faces are distinguished (VPP and N170); stage 3 involves an in-depth assessment of the affective valence of the stimuli.
Our results differed slightly from those of previous studies in that neither the control nor the EP group showed a preference for negative words. We deduced that this finding might be related to the warning nature of pain materials. Vigilance to pain is an environmental adaptation in human beings that warns them away from danger and is vital for survival. In this way, pain stimuli take a higher priority than other negative stimuli. Moreover, we found that the neural processing of words differed between the groups. Semantic differentiation (between neutral and other words, N1) came first in the control group, followed by pain word identification (between pain and other words, P2). However, in the EP group, pain word identification occurred in the early stage (N1 and P2), and semantic differentiation followed (N3), suggesting an AB to pain.
AB described in the literature includes three characteristics [27]: attentional avoidance (e.g., allocating attention towards locations opposite to that of pain), facilitated attention (e.g., pain stimuli are detected faster or more strongly than nonpain stimuli), and difficulty in disengagement (e.g., it is harder to disengage attention from a pain stimulus than from a neutral stimulus). In our study, as compared with the control group, the EP group first showed attentional avoidance of pain words, revealed by N1 latency, with the longest latency for pain words as compared with positive, neutral, and negative words. Later, we found facilitated attention to pain words, revealed by P2 amplitude, with the highest amplitude for pain words as compared with other words, suggesting that the processing of pain materials occurred before the processing of the other words. Although we did not find difficulty in disengagement, likely because of the short presentation times and the nontarget nature of the pain materials, these results validate our first hypothesis that AB to pain stimuli occurred at an early stage in the EP group.
N3, as a semantic differentiator in the EP group, was evident only in low-load tasks in stage 2. As Figure 5(f) shows, the amplitudes for neutral words had the largest negative waves, differing significantly from every other word type, which revealed a second stage of semantic processing in the brain. This pattern was more evident for N1 (see Figure 3(d)) in both high- and low-load tasks in the control group.
Cognitive Load Is an Influencing Factor in Word Differentiation and AB.
In the ERP results, we found that word differentiation mainly occurred in low-load tasks. In the control group, word differences in all N1, P2, and N3 latencies occurred only in low-load tasks (Figures 3(a)-3(c)), while in the EP group, word differences in P2 and N3 amplitudes occurred only in low-load tasks (Figures 5(e) and 5(f)). As cognitive load increased, participants were unable to distinguish potential word stimuli, and AB to pain vanished in the EP group, supporting our second hypothesis.
Cognitive load has long been researched for its interactive relation with attention and pain perception. According to cognitive load theory, the human capacity for information processing is limited in that only a certain amount of information can be processed at a time. When a person engages in a variety of activities while performing a difficult task, cognitive resources must be allocated across the tasks, which can tax the resources and drive down task efficiency, a state called "cognitive overload." Reduced pain perception is an effect of increased cognitive load in persons with pain. Legrain et al. [28] reported that cognitive load may help lower pain experiences by increasing distraction from pain. Some distraction paradigms also suggest that less pain is reported when performing a high-load task [20,29]. fMRI studies [19-21] further supported this view by revealing that medium-to-high cognitive tasks, as compared with low cognitive tasks, can activate or deactivate brain areas related to pain.
In our study, however, we could not obtain concrete information about pain perception under high or low cognitive load because of the experimental protocol. It is therefore hard to tell whether alleviated pain perception under high-load tasks was what led to the absence of word differentiation. To our surprise, we discovered that this phenomenon occurred in both the EP and control groups. As healthy subjects reported no pain in either high- or low-load tasks, pain relief resulting from a higher cognitive load cannot provide a reasonable explanation. We thus turned to the notion of capacity limitations. As cognitive load theory holds, when the resources invested in cognitive tasks increase, the resources available for other processes decrease. Word stimuli were designed as potential nontarget stimuli in our study. Therefore, when cognitive load increased, subjects had to pay more attention to complete the target cognitive tasks, leaving less attention for the nontarget word stimuli. As a result, word differentiation in the control group was insufficient, and AB to pain in the EP group disappeared under high-load tasks. We here suggest for the first time that cognitive load can influence word differentiation, as well as AB in subjects with pain.
We must admit that experimental pain has fundamentally different qualities from clinical pain, in that the former is somewhat artificial, transient, and controllable. It would therefore be intriguing to see whether similar, or perhaps even stronger, attention-interference effects would be found for real-world pain. Although our data are based on an experimental pain model, there are potential clinical implications if these results are replicated in real-world pain, both acute and chronic, and they may shed light on AB management or intervention in future pain treatment.
Electrode Effect Validates Attention Alerting to Pain Words.
Prior studies have indicated a general dominance of the right hemisphere for all emotions [30,31]. In our study, although not statistically tested, the amplitudes of all ERP components (N1, P2, and N3) elicited in the right hemisphere were greater than those elicited in the left hemisphere, a finding consistent with those of previous studies. In addition, we found electrode site effects among the frontal, central, and parietal brain sites: N1 had the largest negative amplitude at the frontal sites in the control group, and P2 had the largest amplitude at the parietal sites in both groups.
N1 amplitude in the control group peaked at the frontal sites, a region related to higher-order neural processing such as planning, memorizing, and decision-making. Semantic differences were quickly identified in this region. P2 peaked at the parietal regions, which may be highly related to attention alerting. As has been reported, the alerting network for attention is associated with areas in the parietal lobes, especially in the right hemisphere of the brain [32,33]. Analysis of lateralization in patients with brain injury has indicated the right hemisphere's superiority for the alerting system [34], and the brain areas associated with innate vigilance are mainly in the parietal regions of this hemisphere [35]. In both the EP and control groups in our study, P2 was prominent at the parietal sites, with a differentiation effect mainly for pain materials. Therefore, we propose that the electrode site effect on P2 was most likely related to both groups' attention alerting priority for pain words, which is also consistent with the warning nature of pain materials.
Study Limitations
Some limitations of this study call for further exploration in future research. First, we used a small sample size. Despite this, significant results emerged, demonstrating attentional avoidance of and facilitated attention to pain words relative to other words in the EP group. A larger sample size may yield more findings. Second, the pain bias that we found may be affected by the participants' intensity of pain sensing and psychological traits (e.g., anxiety, depression, and pain catastrophizing). Further research should consider these issues to obtain more detailed information. Third, subjects in our study underwent experimental pain, which differs from clinical pain. Further research with real pain patients is suggested for future clinical application.
Conclusions
Our study provided additional evidence for AB to pain words in participants with experimental pain. The control and EP groups showed different neural processing of different word types. The EP group had an early pain bias, with later semantic differentiation. Cognitive load was an influencing factor in word differentiation; it also affected AB, in that as cognitive load increased, AB to pain disappeared. Future research on clinical pain may inform better treatment for pain.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Additional Points
A preprint of our manuscript is available at https://www.researchsquare.com/article/rs-66816/v1.
Conflicts of Interest
No conflicts of interest, financial or otherwise, are declared by the authors.
T follicular helper cells in cancer
Nicolás Gutiérrez-Melo and Dirk Baumjohann

T follicular helper (Tfh) cells provide essential help to B cells for effective antibody-mediated immune responses. Although the crucial function of these CD4+ T cells in infection and vaccination is well established, their involvement in cancer is only beginning to emerge. Increased numbers of Tfh cells in Tfh cell-derived or B cell-associated malignancies are often associated with an unfavorable outcome, whereas in various solid organ tumor types of non-lymphocytic origin, their presence frequently coincides with a better prognosis. We discuss recent advances in understanding how Tfh cell crosstalk with B cells and CD8+ T cells in secondary and tertiary lymphoid structures (TLS) enhances antitumor immunity, but may also exacerbate immune-related adverse events (irAEs) such as autoimmunity during immune checkpoint blockade (ICB) and cancer immunotherapy.
Tfh cells in immunity and autoimmunity
Tfh cells (see Glossary and Box 1) are the specialized CD4+ T cell subset that provides essential help to B lymphocytes for effective antibody responses against various pathogens, including viruses, bacteria, fungi, and helminths [1,2]. These functions of Tfh cells are also being utilized during vaccination through their ability to drive immunoglobulin class-switching and affinity maturation in germinal centers (GCs), resulting in the generation of long-lived plasma cells and memory B cells [1,2] (Figure 1, Key Figure). In contrast, Tfh cell dysregulation may be associated with the development of autoimmunity [3]. Although most attention in cancer and immuno-oncology research has been given to effector CD4+ and in particular CD8+ T cells [4], Tfh cells are now emerging as another highly relevant immune cell population in various cancer types. However, the participation of Tfh cells in oncological settings differs significantly across different cancer entities such as Tfh cell-derived neoplasms, B cell lymphomas, and solid organ tumors of non-lymphocytic origin. In this review we discuss all three scenarios, with a focus on the recently fast-paced discoveries describing their involvement in solid tumors. In this regard, the role of Tfh cells in TLS, in ICB therapies, and as a potential link for the development of irAEs following ICB are of particular importance.
Tfh cells in cancer: the good, the bad, and the ugly
One of the main challenges in assessing the involvement of Tfh cells in cancer has been the broad biological diversity of cancer entities, because each of them presents unique genetic and phenotypic characteristics, as well as different disease progression and clinical outcomes. It is therefore not surprising that the role of Tfh cells varies depending on the type of cancer. To approach such diversity, it is useful to make a distinction between those instances in which Tfh cells (or Tfh-like cells) are the cancer entities themselves, and those in which Tfh cells participate in the disease pathobiology of other cancer entities derived from a different cell of origin (Table 1).
Highlights
CD4+ T follicular helper (Tfh) cells provide efficient help to B cells and can be found in tertiary lymphoid structures of tumors.
Increased frequencies of Tfh cells in Tfh cell-derived or B cell malignancies are often associated with an unfavorable outcome.
Increased frequencies of Tfh cells in solid organ tumors of non-lymphocytic origin are often associated with a better prognosis.
The beneficial function of Tfh cells in solid tumors may extend beyond their primary role of providing help to B cells, for example, by fueling cytotoxic CD8+ T cell responses.
Tfh cells may contribute to the development of immune-related adverse events (irAEs) following immune checkpoint blockade (ICB)-based cancer immunotherapy.

Tfh cell-derived lymphocytic neoplasms
These malignancies contain T cell neoplasms in which the cells are derived from and/or have a Tfh cell phenotype and genetic signature, usually defined by the expression of Tfh cell-related markers such as CD279/PD-1, CD10, BCL6, CXCL13, or ICOS [5]. This category includes different entities previously known as angioimmunoblastic T cell lymphoma (AITL), follicular T cell lymphoma, and nodal peripheral T cell lymphomas, which have now been grouped as subtypes under the umbrella designation of nodal T follicular helper cell lymphoma (nTFHL) in the most recent World Health Organization (WHO) classification of lymphoid neoplasms [5]. Mutations in TET2 and DNMT3A are regularly found in nTFHL patients, although they can also be present in other types of lymphoma [5,6]. Efforts to discover early diagnostic markers specific for nTFHL led to the identification of the RHOA Gly17Val (c.50G>T) mutation, which can be detected on average 7.87 months earlier than the lymphoma diagnosis; nonetheless, this detection method is not able to discriminate among the three nTFHL subtypes [6]. Differential diagnosis among the subtypes remains challenging because it has thus far relied largely on histopathological characteristics rather than on robust biological markers [5,7]. Some even argue that the significant clinical and molecular overlap between the three nTFHL subtypes suggests that these lymphomas are not separate entities but rather a continuum or differential manifestations of the same disease [8]. Using genetic multivariate analyses, distinct molecular profiles have been identified that support the current WHO classification [9]. However, these profiles were only identified under supervised analyses, whereas unsupervised analyses did not show any clear subtype-defining clustering, suggesting that genetic differences might not be as strong as expected [9].
Tfh cell-associated genetic markers [10] or immunohistological findings [11] have also been reported in histiocyte-rich large B cell lymphoma or in nodal marginal zone B cell lymphoma, calling for extra caution when interpreting results using such markers. Despite this biomarker heterogeneity, Tfh cell-defining markers remain crucial for nTFHL progression, as shown in an AITL-like mouse model in which ablation of BCL6 or loss of SLAM-associated protein (SAP) led to a significant reduction of tumor growth [12]. Studies addressing nTFHL with specific mutations have also revealed a strong association between Tfh cell markers and this type of cancer. For instance, IDH2 R172 tumors present higher expression of CXCL13 and CD10 [13], whereas RHOA Gly17Val tumors were more likely to present two or more Tfh cell markers, including PD-1, ICOS, CD10, or CXCL13, compared with wild-type RHOA (RHOA WT) [14]. Although no difference in proliferation was detected between RHOA Gly17Val and RHOA WT tumors, it is interesting that the former had significantly more frequent B symptoms and splenomegaly [14]. Altogether, the evidence suggests that Tfh cell markers in nTFHL not only serve classification purposes but also play an important role in disease pathogenesis. Investigating how normal Tfh cell function is dysregulated in each of these entities will be essential for the design of clinical strategies and the identification of novel therapeutic targets.
Tfh cells that provide help to malignant B cells
As participants in other types of cancer, the contribution of Tfh cells to disease progression is widely variable: they can either promote malignancies owing to their intrinsic B cell helper activity or participate in the immune response against solid tumors. Therefore, it is again useful to differentiate between those instances in which Tfh cells have been shown to be detrimental by promoting disease progression, and those in which they have been associated with positive clinical outcomes or improved immune control (Table 1).

Box 1. What are T follicular helper (Tfh) cells?
For many years type 2 T helper (Th2) cells were regarded as the main provider of help to B cells, and it was not until the turn of the millennium that CXCR5+CD4+ T cells residing in CXCL13+ follicles of SLOs were named Tfh cells [89,90]. Over the years it was then clarified that, after their priming by dendritic cells (DCs), Th2 cells emigrate into the periphery to coordinate type 2 immune responses in peripheral tissues and are not in close proximity to B cells after all [91], and that Tfh cell differentiation requires the transcriptional repressor Bcl6 [92-94]. Tfh cells also express high levels of costimulatory (e.g., ICOS, CD40L) and coinhibitory (e.g., PD-1) receptors on their cell surface and secrete important cytokines (e.g., IL-21, IL-4), all of which are tailored to their B cell helper activity [1,2,95]. Tfh cell differentiation is characterized by stepwise differentiation that requires continuous interactions with antigen-presenting cells in different microanatomical locations within SLOs, namely priming by DCs in the T zone, and subsequent interactions with B cells at the T-B zone border and within GCs [96].
Glossary
Germinal center (GC): a T cell-dependent anatomical structure in which B cells mutate their antigen receptors to generate high-affinity antibodies.
Immune checkpoint blockade (ICB): inhibitors that interfere with immune checkpoint molecules (e.g., PD-1) to reinvigorate antitumor immune responses.
Immune-related adverse events (irAEs): a complication of cancer immunotherapy (e.g., induced by ICB) in which serious side effects can arise (e.g., the development of autoimmunity).
T peripheral helper (Tph) cells: CD4+ T cells first described in rheumatoid arthritis patients that share several characteristics with Tfh cells but do not reside in SLOs.
Secondary lymphoid organ (SLO): complex organized immune cell-containing structures in which T and B cells are activated and differentiate into effector cells.
T follicular helper (Tfh) cells: the primary CD4+ T cell subset that provides essential help to B cells for the generation of potent antibody responses.

Figure 1. Key Figure. Although some cellular and molecular mechanisms have been elucidated (unbroken arrows), the ontogeny of these cell subpopulations in tumors and tertiary lymphoid structures (TLS) remains largely unknown (broken arrows with question marks). (Bottom left) After priming by dendritic cells (DCs) in tumor-draining lymph nodes (dLNs), activated CD4+ and CD8+ T cells differentiate into effector T cells such as type 1/2/17 T helper (Th1/Th2/Th17) cells and cytotoxic T lymphocytes (CTLs), respectively, which then leave the dLN and migrate to peripheral tissues such as the tumor. Activated CD4+ T cells differentiate into early Tfh cells that interact with B cells to initiate the early extrafollicular antibody response. Some of these Tfh cells leave the dLN to become circulating Tfh (cTfh) cells, and others join antigen-specific B cells and enter the follicle to form germinal centers (GCs). In these microanatomical structures, high-affinity antibodies, long-lived plasma cells, and memory B cells are formed. GCs also harbor follicular dendritic cells (FDCs) that present native antigen to B cells. (Top right) Interestingly, similar GC-like structures consisting of B cells, FDCs, and Tfh-like cells can be found in TLS that form adjacent to or within tumor tissues. TLS-resident Tfh cells may be differentially polarized, depending on the environmental context of the tumor, into Tfh1, Tfh2, or Tfh17 cell types. IL-21 produced by Tfh cells supports B cells but also CTL function. Other Tfh-like cells are TfhX13 cells and microbiota-specific Tfh cells that produce CXCL13 but lack CXCR5 expression and may contribute to TLS formation. Upon ICB with anti-PD-1, the function of Tfh cells is boosted but may also contribute to the development of immune-related adverse events (irAEs). The precise kinetics and dynamics of this phenomenon are still unknown, but current evidence indicates a positive correlation between antitumor activity and the development of irAEs (bottom right).

Trends in Cancer OPEN ACCESS

Although this review mostly focuses on the involvement of Tfh cells in solid organ tumors of non-lymphocytic origin, it is relevant to briefly discuss the role of Tfh cells in B cell lymphomas, because this is perhaps the most prominent case in which these cells are associated with a negative impact (Table 1). These malignancies, such as follicular lymphoma, present an increased number of Tfh cells that overexpress genes such as TNF, LTA, IL4, or CD40L, which alter the tumor microenvironment (TME) and promote malignant B cell survival [15]. Interactions between B cells and Tfh cells are mediated through a diverse set of membrane receptors, including ICOS, CD40L, SLAM, BTLA, PD-1, and FASL, whose impact on follicular lymphoma has been thoroughly reviewed elsewhere [16]. Pathological B cell-Tfh cell interactions are also mediated by epigenetic mechanisms that lead to repression of Tfh cell-associated genes such as Bcl6 [17]. Interestingly, single-cell analysis has recently shown that different follicular lymphoma sites from the same patient present site-restricted B cell receptor (BCR) clonotypes, and found a positive correlation between site heterogeneity and Tfh cell abundance [18]. In chronic lymphocytic leukemia (CLL) there is also a significant increase in the number of Tfh cells, particularly of the Tfh1 subset, whose frequency is associated with disease burden and which are phenotypically different from those in healthy controls in that they display higher levels of CD40L, IL-21, IFN-γ, and TIGIT [19]. Histological analysis of active CLL lymph nodes revealed a tight spatial association between CLL and Tfh cells that was not seen for any other CD4+ T cell subset, and whose expansion correlated with CLL proliferation [20]. Targeting CLL cells with the tyrosine kinase inhibitor ibrutinib led to a reduction of Tfh cell frequencies as well as a change in their phenotype, and partially restored the frequencies of Tfh subpopulations (Tfh1, Tfh2, Tfh17) to those of healthy controls [19]. Together, these findings support the idea that Tfh cells play a major role in the development and progression of B cell malignancies, suggesting that Tfh cells could be important targets for diagnosis and treatment.
Tfh cells as markers of positive clinical outcome in solid organ tumors
In most solid organ tumors of non-lymphocytic origin, Tfh cells correlate with a better immune response against cancer, improved clinical outcomes, and increased therapy responsiveness. Although most evidence in humans is limited to observed associations, recent studies in animal models have started to elucidate the causal links underlying such observations (Table 1). Tfh cell gene signatures in tumor tissue are tightly correlated with immune activation and infiltration, tumor burden score, and overall survival, and have proved to be useful for patient stratification and clinical outcome prediction [21-23]. Research in the past decade has unraveled a broad number of mechanisms through which Tfh cells can have a positive impact on the immune response against cancer. For instance, Tfh cells have been identified as the main producers of IL-21 in the TME of different human cancers and mouse models [24,25] (Figure 1). This cytokine has a major role in mediating humoral responses by promoting B cell activation, class-switch recombination, and antitumor IgG1 and IgG3 secretion [24,26,27]. IL-21 blockade alone was able to drastically reduce B cell activation induced by coadministration of anti-PD-1 and anti-CTLA-4 therapy, highlighting the importance of Tfh cell-secreted effector molecules in cancer immunotherapy [26]. IL-21 also regulates CD8+ T cells derived from a MC38 cancer model by enhancing the expression of IFN-γ and granzyme B (GzmB), as well as the expression of surface markers such as Tigit and Lag3 [27]. Inhibition of IL-21 not only induced a reduction in CD8+ T cells with this phenotype but also abrogated the antitumor effect seen upon adoptive transfer of Tfh cells [27]. Likewise, deletion of Il21r caused a drastic drop in the frequency of mouse PD-1hi GzmBhi CD8+ tumor-infiltrating lymphocytes (TILs) [25].
Surprisingly, a study with non-small cell lung carcinoma (NSCLC) samples found decreased levels of IL-21 production despite elevated numbers of Tfh cells, particularly in patients with advanced stages of the disease [28]. This anomaly was explained by an expansion of abnormal PD-1+ Tfh2 and PD-1+ Tfh17 subtypes which induced the differentiation of regulatory B cells [28]. By contrast, infiltrating type 1 T helper cell (Th1)-polarized PD-1hi ICOSint Tfh cells were associated with higher expression of IL-21 in the TME of breast cancer and were able to promote immunoglobulin and IFN-γ production by providing help to B cells and CD8+ TILs, respectively [29]. These findings highlight the need for robust phenotypic characterization of each Tfh cell subtype when assessing their impact in cancer, because each might lead to a different clinical outcome (Box 2). Manipulating the balance between these subtypes could be an interesting therapeutic target, although the exact mechanisms that determine the frequencies of each population in the TME remain largely unknown.
Another key Tfh cell-derived molecule that orchestrates the immune response to cancer is the chemokine CXCL13. Although several cell types such as CD8+ T cells [30,31], follicular dendritic cells (FDCs) [32], and even cancer-associated fibroblasts [33] secrete CXCL13 in the TME, Tfh cells have been shown to be a major source of this chemokine in different cancer types [22-24,29,32,34-36] (Figure 1). CXCL13 is essential for the recruitment of different cell types to the TME, including B cells [24] and CD8+ T cells [32], as well as for the proper formation of tumor-associated TLS [37]. In breast cancer, CXCL13 is mainly produced by a very particular PD-1hi ICOSint CD4+ T cell subset named TfhX13 [35]. These cells display an overall Tfh-like phenotype but are characterized by a lack of the canonical Tfh cell marker CXCR5 [35]. TfhX13 cells have the capacity to recruit B cells and promote their maturation as well as functional GC formation [35]. Similar findings were reported in muscle-invasive bladder cancer, where CXCL13 was produced by central memory Tfh cells and was significantly expressed after administration of anti-PD-1 therapy [34]. These observations are consistent with another study in an ovarian cancer mouse model in which blockade of CXCL13 disrupted TLS formation and led to higher tumor volume [38]. It is important to note that CXCL13 expression by Tfh or Tfh-like cells might not be the same in all types of cancer, and therefore caution must be taken in generalizing different findings.
For instance, although Tfh-like cells were identified as the main source of CXCL13 in breast cancer [35], an inverse correlation was described between this chemokine and Tfh cell enrichment in a clear cell renal carcinoma study [39]. Whether this divergence is due to differences in activation states or phenotypic subtypes remains to be determined. High levels of CXCL13 in the TME are not always associated with a better prognosis. In different types of cancer such as breast cancer, NSCLC, colorectal cancer (CRC), and prostate cancer, high levels of CXCL13 can be correlated with more advanced disease stages and poorer prognosis [40]. Whether CXCL13 is the cause or the consequence of tumor progression remains an open question. CXCL13 has also been associated with metastases by promoting the migration, proliferation, and invasion of CXCR5+ cancer cells [40]. Nevertheless, recent data in ovarian cancer showed that CXCL13 expression shifted from CD4+ T cells (presumably Tfh or Tfh-like cells) in early stages of the disease to CD21+ FDCs in mature TLS [32]. This opens new questions regarding how Tfh cell function in the TME might vary over time, and how this change might impact disease progression. It should be pointed out that, although CXCL13 is strongly produced by tumor-associated Tfh cells in humans, mouse Tfh cells do not express large amounts of CXCL13 [2].

Box 2. Tfh cell nomenclature in cancer
One particularity regarding Tfh cells in cancer is that there is great variety in the definitions of Tfh and Tfh-like cells used in various studies. Usually determined by a combination of markers including CXCR5, BCL6, PD-1, CD200, CD38, and/or ICOS, a consistent approach to define these cells in the TME is still lacking (see Table 1 in main text). A previous proposal for a uniform nomenclature for canonical Tfh cells is compatible with most of the variation observed in tumor-associated Tfh cells, and is therefore a useful system to implement [1]. Although most studies agree on defining bona fide Tfh cells as CD4+CXCR5+PD-1+ICOS+BCL6+IL-21+, classification of Tfh cells into groups (or subtypes) requires assessment of at least one of the following: surface expression of CXCR3 and CCR6; cytokine expression of IFN-γ, IL-4, and IL-17; or expression of the master transcription factors T-BET, GATA3, and RORγt [1,95]. In addition to systematically capturing and classifying Tfh cell plasticity, committing to a more robust characterization will facilitate the assessment of each group in different cancer entities. For instance, Tfh1 cells, but not all Tfh or CD4+PD-1+ cells, have been reported to be predictive of better disease-free survival in NSCLC [28]. By contrast, Tfh2 and Tfh17 cells, but not all Tfh cells, were associated with increased overall survival in clear cell renal cell carcinoma [39]. These observations also highlight how a lack of robust characterization may lead to the loss of relevant biological information. Another interesting case is the classification of CXCR5− Tfh-like cells in cancer, such as TfhX13 cells in breast cancer [22]. CXCR5− Tfh-like cells have been shown to be crucial for tumor control and TLS formation; however, their origin remains uncertain [81]. Lastly, moving towards a more systematic characterization and classification of Tfh and Tfh-like cells will also allow better comparison with results in other fields, particularly in autoimmunity, where Tfh-like cells such as Tph cells have also been described to participate in TLS formation and B cell response stimulation [97]. To conclude, shifting towards a more uniform classification system for Tfh cells in cancer will not only simplify the interpretation of results from different types of cancer or immune conditions but will also unveil relevant biological mechanisms that would otherwise remain undiscovered.
Cell-to-cell interactions of Tfh cells with other immune cell subsets play an important role in mounting a coordinated response against cancer. Deficiencies in ICOS or CD40L, hallmark molecules of Tfh cells and key mediators of interactions with B cells, have been shown to severely compromise immune control, leading to greater tumor volume over time [25]. Coculture of Tfh cells with nurse-like cells, a subset of blood mononuclear cells that are characteristic of CLL patients, induced downregulation of CD40L on the former, contributing to disease progression [20]. Studies in non-human primates reported a significant drop in the number of circulating Tfh (cTfh) cells upon administration of the anti-ICOS monoclonal antibody KY1044 [41]. Although such therapy resulted in tumor regression, it remains unclear whether local Tfh cells in the TME were also affected by it and how [41]. By contrast, silencing of the SATB1 regulator has recently been shown to increase the expression of ICOS and to induce greater Tfh cell differentiation, thereby driving enhanced infiltration and activation of B cells at the tumor site, as well as promoting TLS formation and tumor control [38]. Moreover, shorter distances between CD20 + B cells and Tfh cells in TLS have been demonstrated to correlate with longer patient survival in oral squamous cell carcinoma [42].
Altogether, participation of Tfh cells in the immune response to cancer is highly diverse and context-dependent. It relies on the secretion of different effectors as well as on surface interactions with other cells. Despite significant advances in our knowledge of the mechanisms involved, inconsistent definitions of Tfh cells across studies (Table 1) remain a significant difficulty when generalizing or comparing results from different studies (Box 2). This complication may also derive from the limited number of markers that could be simultaneously assessed until recently with conventional flow cytometry and immunohistochemistry (IHC) techniques. New high-dimensional technologies such as codetection by indexing (CODEX) have now been adapted to study the TME, thus improving the definition of different tissue cell types and introducing the concept of cell neighborhoods to TLS [43]. Robust in situ characterization of Tfh cells with such technologies will allow important gaps to be filled, particularly those regarding how different subpopulations localize and develop, and how they are regulated at the molecular level.
Tfh cells and the tumor-associated TLS architecture
One of the most important features of Tfh cells in cancer is their role in the formation and function of tumor-associated TLS, which are ectopic lymphoid aggregates that form in the context of chronic inflammation, and which resemble secondary lymphoid organs (SLOs) in function and structure to varying degrees [44] (Box 3). Current research efforts aim to determine what drives TLS formation, how it happens, and which cells and molecules are implicated. Enhanced Tfh cell differentiation in an ovarian cancer mouse model has been shown to induce larger TLS and B cell recruitment accompanied by increased FAS and GL7 expression, suggesting an increase in GC B cells [38]. In fact, transfer of Tfh cells from tumor-bearing mice alone was sufficient to induce enhanced TLS formation [38]. This is consistent with findings in a CRC mouse model in which ablation of Tfh cells led to a loss of TLS, decreased immune infiltration, and loss of tumor control [45]. These features were restored upon adoptive transfer of pathogen-specific CD4 + T cells, indicating that cells other than tumor-specific T cells can participate in the immune response against cancer [45]. In human breast cancer, activated Tfh cells were found to be indicative of overall TLS activity, characterized by B cell proliferation, immunoglobulin production, and Th1-skewed cytotoxic activity by CD8 + T cells [29]. Moreover, weaker interactions between Tfh cells and B cells through the CXCL13-CXCR5 axis were recently shown to lead to smaller TLS in NSCLC [46]. Since they are critical for TLS formation and function, Tfh cells and B cells are often correlated with positive outcomes in multiple cancer settings [44]. In this context, Tfh cells have been found to be associated with GC B cells [29,42,47,48] and memory B cells [35,49], and across different B cell maturation stages [26,50,51]. 
Nevertheless, conclusions in this regard remain limited given that the depth of characterization of both Tfh and B cells across studies remains highly heterogeneous (Box 2).
Numerous studies have assessed the value of B cells and CD8 + T cells within TLS as predictive biomarkers of clinical prognosis or therapy responsiveness [37,52,53]; however, it is intriguing that less is known about which CD4 + T subsets within TLS may be associated with the positive correlation between TLS and antitumor immune responses. For instance, high BCL6 expression in tumor-associated tissue is associated with TLS development in a CRC mouse model; nonetheless, it is unclear whether this master regulator is being expressed by B cells, Tfh cells, or both [54]. The prognostic value of TLS in lung squamous cell carcinoma has been shown to be determined by GCs within those TLS; however, which cells mediate GC formation and function within these structures remains to be further studied [55]. Furthermore, Tfh cell-associated gene signatures have long been recognized as indicators of TLS activity and predictors of clinical outcomes [22]. Better characterization of the CD4 + subsets in TLS and their interactions within these structures remains a pressing issue to be further addressed. Of particular importance is the interplay between TLS-Tfh cells and T follicular regulatory (Tfr) cells. The latter, which can be broadly defined as CXCR5 + FOXP3 + CD4 + T lymphocytes, are well-known regulators of Tfh cell activity (reviewed in more detail in [56]). In the context of cancer, Tfr cells can inhibit TLS Tfh cell function [29] and can curtail cancer immunotherapy efficacy [57]. In fact, the ratio between Tfr and Tfh cells, rather than the presence or absence of these subsets in TLS, has been shown to negatively correlate with CD8 + T cells in TLS [58]. The tight relationship between Tfh cell function and location has been one of the biggest challenges in characterizing this subset within TLS, given that conventional approaches largely rely on disruption of tissue structures for cell isolation. 
New technologies are now emerging that allow such complications to be circumvented. Spatial transcriptomics has been recently used to determine B cell gene expression and BCR repertoires within TLS in human clear cell renal cell carcinoma [51]. Applying such technologies to CD4 + T cells will also reveal a clearer picture of T helper cell networks in TLS.
Expanding ICB targets
One of the most significant advances in the treatment of cancer has been the development of ICB therapies that aim at blocking particular inhibitory receptors on T lymphocytes to overcome tumor-induced cell exhaustion and to promote effective antitumor immune responses [59,60]. Numerous blocking antibody therapies have been developed that target receptors such as TIM-3, CTLA-4, LAG-3, and, in particular, PD-1 or its ligands PD-L1/PD-L2 [60]. Owing to their crucial role in the antitumor response, infiltrating CD8 + T cells have been the focus of most of the research assessing the effects of ICB on the immune system [34,61]. Paradoxically, despite their characteristic high expression of PD-1 and other inhibitory surface ligands, the effects of ICB on Tfh cells have remained only poorly covered. Over recent years, however, compelling evidence has started to point to Tfh cells as a key determinant and predictor of ICB success. For instance, cTfh cells from a syngeneic NSCLC mouse model treated with anti-PD-1 displayed an enhanced helper capacity defined by a higher expression of CD38, as well as by increased secretion of IL-21 and IL-4 [62]. Likewise, upon coadministration of anti-PD-1/anti-CTLA4 in a breast cancer mouse model, Tfh cell-associated transcriptional signatures were elevated in ICB-sensitive tumors compared to resistant tumors, and increased expression of IL-21 was identified in cells exhibiting such expression profiles [26]. Depletion of IL-21 in this experimental setting caused a dramatic decrease in the number of tumor-associated IgG + cells [26].
Box 3. B cells in TLS: partners in crime
TLS consist of different immune cell types, including GC and memory B cells, plasma cells, CD8 + T cells, and CD4 + T cells such as Tfh and Tfr cells [37,44,98]. Mesenchymal stromal cells, particularly FDCs, are also present in TLS [99], as well as a complex network of endothelial cells and cancer-associated fibroblasts that are required for TLS formation [33]. Among the cells that make up TLS, B lymphocytes have been particularly widely studied and, similar to Tfh cells, have also been strongly correlated with a better prognosis and response to ICB in various solid tumor entities [44,53,98,100]. The mechanisms underlying the B cell effector function in cancer include cytokine production, antigen presentation, antibody-dependent cell-mediated phagocytosis and cytotoxicity, as well as antibody-mediated signaling interference [53,101]. Expression of Ki67 and activation-induced cytidine deaminase (AID) by B lymphocytes in TLS supports the idea that BCR affinity maturation and immunoglobulin class-switching also occur within these structures [102]. However, the spatial organization of these cell populations varies widely, and is believed to be related to the maturity and function of a given TLS [103]. In fact, TLS do not always exhibit GC-like follicle structures, nor clearly defined dark and light zones, suggesting that Tfh-B cell interactions in these structures might not necessarily be the same as those in GCs in SLOs [101]. A closer look at these interactions will reveal how the development and maturation of antitumor responses in TLS differs from that in SLOs.
In humans, the frequencies of infiltrating Tfh, B, and CD8 + T cells were significantly increased in different types of cancer after ICB administration [26,[62][63][64][65][66], and the overall cTfh cell concentration was shown to be higher in those patients that respond to therapy compared to those who do not [39,62]. Similar results were observed in melanoma samples in which patients with high Tfh cell infiltration were found to be more likely to positively respond to anti-PD-1 treatment [21]. Using this and other validating cohorts, it was demonstrated that the Tfh cell score, rather than an overall immune score, was able to predict favorable clinical outcomes of anti-PD-1 therapy and a higher proportion of complete response or partial response (CR/PR) in patients [21]. By contrast, Zappasodi et al. described a CD4 + Foxp3 − PD-1 hi population that presents a Tfh-like phenotype and which actively limits the antitumor immune response in melanoma and NSCLC models [67]. The identity of these cells and their relationship to bona fide Tfh cells remains to be determined; nonetheless, these results highlight how distinct Tfh and/or Tfh-like cell subsets might behave differently. Moreover, the authors found opposing effects of anti-CTLA-4 and anti-PD-1 therapies for this specific cell population. While anti-PD-1 therapy had a positive effect on the antitumor response, anti-CTLA-4 therapy had an overall negative effect that was able to abrogate that of anti-PD-1 when administered in combination [67]. Binding of the anti-PD-1 antibody pembrolizumab was shown to be particularly intense in CD38 + Tfh-like cells among all PD-1 + CD4 + T cells, especially for those residing in TLS, indicating that Tfh cells are arguably the main target of ICB within the CD4 + compartment [34]. 
Strikingly, Escherichia coli-specific memory Tfh and B cell responses, but not those for other resident bacteria, were shown to be predictive of clinical responses to neoadjuvant anti-PD-1 therapy [34]. In the past few years it has been demonstrated that the gut microbiota can have a significant effect on ICB outcomes [68]. The exact mechanisms that define this association remain unclear, although it is believed that the gut microbiome can either produce metabolites that stimulate T cells or promote pre-existing immunity that would then be amplified by ICB [68] (Figure 1). The presence of E. coli in bladder cancer is unsurprising given that colonization of this tissue by gut-derived bacteria is frequently observed in urinary tract infections [34]. However, how bacteria-specific Tfh cells are able to promote antitumor responses remains unknown. IgA transcytosis has been previously demonstrated to be a key process that can explain antigen-independent responses against ovarian cancer [69]. Nonetheless, it remains puzzling that, in the study by Goubet et al., a clinical response was seen for IgG but not for IgA antibodies [34], suggesting that other unknown mechanisms might trigger antitumor responses in a pathogen-specific fashion (Figure 1).
ICB also has a strong impact on tumor-associated TLS. Anti-PD-1 monotherapy or in combination with anti-CTLA-4 has shown a significant increase in the size and number of TLS in mouse models, and also promoted a more classical microanatomical organization characterized by distinct T cell and B cell/FDC regions [33]. Although both treatments led to an increase in the number of intratumoral T cells, a reduction in tumor size was only seen in intraperitoneal tumors, whereas subcutaneous tumors remained unchanged [33], suggesting that both anatomical location and TME composition are crucial for ICB effectiveness. The specific cell types that increased in TLS upon ICB administration remain to be defined. TLS also act as a robust predictor of therapy outcome through the creation of a TLS gene signature that is associated with increased survival in melanoma patients treated with anti-CTLA4 [52]. Of note, this TLS signature was independent of tumor mutational burden, and validation with previously published datasets demonstrated that it outperformed other immune-related signatures [52]. Switched B cells are also enriched in responders versus non-responders, and they localize within TLS [70]. These cells present higher clonal expansion, greater BCR diversity, and a more activated phenotype than those found in non-responders [70]. It is tempting to speculate that Tfh cells within TLS provide the necessary help to B cells for their activation during ICB; however, further investigation is needed. Regarding anti-PD-L1 treatment (atezolizumab), single-cell analysis from a triple-negative breast cancer cohort showed that CXCL13 + CD4 + T cells were expanded in ICB responders, and had positive predictive value for therapy responsiveness [65].
These cells presented features of both Th1 and exhausted Tfh cells, and were reduced upon administration of a chemotherapy regimen (paclitaxel), highlighting the fact that coadministration of chemotherapy regimens might blunt the positive effects of ICB [65]. Taken together, the current evidence supports the idea that Tfh cells are a major target of ICB therapy, and shows that these cells can be predictive of clinical responses. Further investigation will be necessary to assess how different therapies affect specific Tfh cell subsets in different cancer entities.
irAEs in cancer immunotherapy: the dark side of Tfh cells?
Despite the uncontestable success of ICB in the treatment of cancer, recurrent identification of irAEs associated with this type of therapy has become a major concern in the medical community [71][72][73]. The precise causes, drivers, and mechanisms of ICB-induced irAEs remain somewhat elusive [74], and the development of clinical management guidelines has been challenging [71,[75][76][77]. Although further work will be necessary to clearly define the specific role of the different Tfh cell subtypes and phenotypes in cancer (Box 2), it is very intriguing that cells with similar characteristics and functions have been extensively studied in the context of autoimmune disorders [78,79]. These cells, which have been named T peripheral helper (Tph) cells, are strongly associated with a proinflammatory milieu and the secretion of autoantibodies, and can be broadly defined as PD-1 hi CXCR5 − CD4 + T cells with a robust capacity to secrete CXCL13 [79,80]. It is remarkable how these cells resemble Tfh-like cell populations that can be found in multiple types of cancer (Table 1 and Figure 2), not only in their phenotype but also in their function [81], as Tph cells have been shown to mediate B cell help through production of IL-21, and are believed to participate in the formation of TLS in rheumatoid arthritis [80].
Considering their high expression of inhibitory receptors such as PD-1 and their resemblance to Tph cells, it is possible that administration of ICB therapies could promote a dysregulated Tfh cell response, leading to the development of irAEs, as previously hypothesized [82] (Figure 1). Importantly, more evidence supporting this proposition is starting to emerge. Vaccination studies in melanoma patients showed that subjects treated with anti-PD-1 had a significant increase in cTfh cell and plasmablast responses, as well as increased CXCL13 secretion, compared to the control group [83]. Transcriptomic analysis also revealed increased expression of cell proliferation and activation-related pathways in cTfh cells from anti-PD-1-treated patients. Strikingly, cell-cycle and proliferation pathways were also identified in cTfh cells from patients who developed irAEs compared to those from healthy subjects [83], suggesting that anti-PD-1-induced dysregulation of cTfh cells might contribute to the development of irAEs. Moreover, despite similar antibody titers at later time-points after vaccination, reduced antibody galactosylation, sialylation, and overall affinity at baseline were found in the ICB-treated group only, indicating that the quality of the immune response might be compromised upon anti-PD-1 treatment, despite higher activation of Tfh cells [83]. A multi-omic approach on >18 000 patient samples also seems to suggest a correlation between Tfh cells and irAEs [84]. This correlation did not reach the threshold of statistical significance; however, this could have been due to the high heterogeneity of the samples given that they were derived from 26 different types of cancer. When considered independently, the relationship between Tfh cells and ICB-induced irAEs might differ across different cancer types.
For instance, a strong correlation between Tfh cells and irAEs was found in renal cell and urothelial carcinomas [83], whereas in two independent cohorts of melanoma patients such an association was absent [85]. Considering that tumor patients often undergo preconditioning chemotherapy or radiation treatment, the observation that T cell lymphopenia may promote impaired antigen-specific antibody responses, hypergammaglobulinemia, and autoantibody production [86] could provide another link for a potential predisposition to developing irAEs following ICB treatment. Another factor that seems to be a determinant of the development of ICB-induced irAEs is age. Aged mice in an anti-PD-1-treated melanoma model showed a significant increase in multi-organ pathology, mostly due to excessive IgG accumulation [87]. Paradoxically, aged mice presented features characteristic of good prognosis such as larger T and B cell infiltration, an increased number of TLS, and higher production of IL-21 and CXCL13 [87]. Depletion of CD4 + T cells or blockade of IL-21 was sufficient to prevent IgG deposition and organ damage, which is intriguing considering that most of the CD4 + TILs were BCL6 + CXCR5 − IL-21 + cells [87]. IgG transfer from aged mice was sufficient to induce multi-organ pathology in aged recipients but not in young mice, suggesting that age-associated changes in the immunological milieu (also known as immunosenescence) have a pivotal role in the occurrence of irAEs [87]. Finally, no difference in CXCL13 levels before anti-PD-1 therapy was found between patients and healthy donors, although a strong correlation with irAEs was observed after treatment administration [87].
Circulating Tfh (cTfh) cells derive from early Tfh cell precursors that are generated before GC entry and possess several features of memory T cells. They express CXCR5, but mostly lack BCL6 expression and only express higher amounts of PD-1 and ICOS upon (re)activation. TfhX13 cells were first described in human breast cancer, and similar cells have now also been described in other human solid tumors. They produce large amounts of CXCL13 and share several other features of Tfh cells (IL-21 production, PD-1 expression, etc.), but do not express CXCR5. TfhX13 cells share striking similarities with T peripheral helper (Tph) cells that were first described in the inflamed joints of rheumatoid arthritis patients. In addition, Tph cells are characterized by elevated levels of BLIMP-1, a transcriptional repressor that counteracts BCL6, and they express chemokine receptors that mediate migration to inflamed sites (e.g., CCR2).
Regarding anti-CTLA-4 therapy, the number of T cells with a Tfh cell phenotype appears to increase upon its administration [3]. Direct proof that this type of therapy induces Tfh-mediated irAEs is still lacking; however, deficiencies in the CTLA4 gene have been widely associated with several autoimmune manifestations, and antibody-mediated blockade of this receptor promotes spontaneous Tfh cell differentiation as well as GC formation [3]. Together, these observations hint that dysregulation of Tfh cells during anti-CTLA-4 immunotherapy could in turn promote autoimmune manifestations. Further studies will be necessary to determine the causal links that may exist between Tfh cells and immunotherapy-induced irAEs [82], as well as the impact of other factors such as age or tumor type.
Concluding remarks and future perspectives
Tfh cells represent a promising target in human diseases and vaccination [88]. Reducing Tfh cell numbers or their function may be beneficial in settings in which Tfh cells are either the origin of T cell malignancies or in cases where Tfh cells provide help to malignant B cells. By contrast, in many solid organ tumor entities that exhibit TLS, promoting Tfh cell numbers or their function may help to boost the antitumor immune response. While new possibilities in personalized medicine hold promise to enable the development and fine-tuning of individual therapy strategies, for example, through concerted combination therapies consisting of various biologics and/or small-molecule drugs, it will be important to balance the positive and negative effects of these treatments to avoid the development of irAEs. To facilitate the applicability of targeting tumor-associated Tfh cells or boosting their function in cancer immunotherapy, it will be particularly important to further elucidate the identity and ontogeny of these cells as well as their functions beyond the classical help to B cells (see Outstanding questions). In summary, a better understanding of the cellular and molecular mechanisms driving Tfh cell responses in cancer and cancer immunotherapy will be necessary to improve the efficacy and safety of existing therapies and to determine the full potential of this subset as a novel therapeutic target.
Outstanding questions
What is the origin and function of different Tfh-like cell populations in tumor tissues, in particular in TLS? An answer to this question should provide a foundation for reaching a consensus on what is and what is not a tumor-associated Tfh cell subset.
What are the precise functions of Tfh-like cells in tumors? Most studies on Tfh cells in cancer have so far been conducted directly in cancer patients, thus remaining largely descriptive. Novel sophisticated genetic in vivo models will provide complementary mechanistic insights into the role of Tfh cells in cancer.
How can the beneficial effects of ICB therapy on Tfh cell function be isolated from the detrimental effects that may contribute to the development of irAEs, particularly autoimmunity?
Can personalized omics or other approaches be used to predict therapy outcome, thus facilitating the development of tailored therapy regimens that could enable increased response rates?
Recent evidence has shown that tissue-resident, pathogen-specific Tfh cells can take part in the antitumor immune response upon ICB. Could the tissue-resident microbiota be modulated to boost ICB-induced responses? Further research in this field could open up new avenues for clinical interventions that improve immunotherapy outcomes.
261822790 | pes2o/s2orc | v3-fos-license | Spectrum-Aware Adjustment: A New Debiasing Framework with Applications to Principal Component Regression
We introduce a new debiasing framework for high-dimensional linear regression that bypasses the restrictions on covariate distributions imposed by modern debiasing technology. We study the prevalent setting where the number of features and samples are both large and comparable. In this context, state-of-the-art debiasing technology uses a degrees-of-freedom correction to remove the shrinkage bias of regularized estimators and conduct inference. However, this method requires that the observed samples are i.i.d., the covariates follow a mean zero Gaussian distribution, and reliable covariance matrix estimates for observed features are available. This approach struggles when (i) covariates are non-Gaussian with heavy tails or asymmetric distributions, (ii) rows of the design exhibit heterogeneity or dependencies, and (iii) reliable feature covariance estimates are lacking. To address these, we develop a new strategy where the debiasing correction is a rescaled gradient descent step (suitably initialized) with step size determined by the spectrum of the sample covariance matrix. Unlike prior work, we assume that eigenvectors of this matrix are uniform draws from the orthogonal group. We show this assumption remains valid in diverse situations where traditional debiasing fails, including designs with complex row-column dependencies, heavy tails, asymmetric properties, and latent low-rank structures. We establish asymptotic normality of our proposed estimator (centered and scaled) under various convergence notions. Moreover, we develop a consistent estimator for its asymptotic variance. Lastly, we introduce a debiased Principal Components Regression (PCR) technique using our Spectrum-Aware approach. In varied simulations and real data experiments, we observe that our method outperforms degrees-of-freedom debiasing by a margin.
1. Introduction. Regularized estimators constitute a basic staple of high-dimensional regression. These estimators incur a regularization bias, and characterizing this bias is imperative for accurate uncertainty quantification. This motivated debiased versions of these estimators [98,55,88] that remain unbiased asymptotically around the signal of interest. To describe debiasing, consider the setting of a canonical linear model where one observes a sample of size n satisfying y = Xβ ⋆ + ε.
Here y ∈ R^n denotes the vector of outcomes, X ∈ R^{n×p} the design matrix, β^⋆ ∈ R^p the unknown coefficient vector, and ε the unknown noise vector. Suppose β̂ denotes the estimator obtained by minimizing L(· ; X, y) : R^p → R_+ given by

(1) L(β; X, y) := (1/2) ∥y − Xβ∥^2 + h(β),

where h denotes the penalty function.

FIG 1. Histograms of (τ comparing [11] with ours, where β̂^u is the debiased Elastic-Net estimator with tuning parameters λ_1 = 1, λ_2 = 0.1. The first row uses the Gaussian-based formula [11] with their suggested choice of τ̂*. This is also known as degrees-of-freedom correction (DF). The second row uses our Spectrum-Aware formula (24) with our proposal for τ̂*. The signal entries are iid draws from N(−20, 1) + 0.06 · N(10, 1) + 0.7 · δ_0, where δ_0 is the Dirac-delta function at 0. Thereafter, the signal is fixed. The columns correspond to five right-rotationally invariant designs of size n = 500, p = 1000: (i) MatrixNormal: X ∼ N(0, Σ^(col) ⊗ Σ^(row)) where Σ^(col)_{ij} = 0.5^{|i−j|}, ∀i, j ∈ [n], and Σ^(row) ∼ InverseWishart(I_p, 1.1 · p) (see Remark F.1 for notation); (ii) Spiked: X = α · VW^⊤ + n^{−1} N(0, I_n ⊗ I_p) where α = 10 and V, W are drawn randomly from Haar matrices of dimensions n, p, and then we retain m = 50 columns; (iii) LNN: X = X_1 · X_2 · X_3 · X_4 where the X_i ∈ R^{n_i×p_i} have iid entries from N(0, 1). Here, n_1 = n and p_4 = p, whereas p_1 = n_2, p_2 = n_3, p_3 = n_4 are sampled uniformly from n, ..., p; (iv) VAR: X_{i,•} = Σ_{k=1}^{τ∨i} α_k X_{i−k,•} + ε_i, where X_{i,•} denotes the i-th row of X and ε_i ∼ N(0, Σ) with Σ ∼ InverseWishart(I_p, 1.1 · p). We set τ = 3, α = (0.4, 0.08, 0.04), X_{1,•} = 0; (v) Mult-t: rows of X are sampled iid from the multivariate-t distribution Mult-t(3, I_p) (see Remark F.1 for notation). All designs are re-scaled so that the average of the eigenvalues of X^⊤X is 1. The solid black curve indicates a normal density fitted to the blue histograms, whereas the dotted black line indicates the empirical mean corresponding to the histogram. See the corresponding QQ plot in Figure 8.
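As a minimal sketch (ours, not the paper's implementation), the debiased estimators discussed here share a common template: a base regularized estimator plus one rescaled gradient step. The ridge penalty and the value of `adj` below are placeholder assumptions; the paper's Spectrum-Aware choice of `adj` instead depends on the spectrum of X^⊤X.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance of the canonical linear model y = X @ beta_star + eps.
n, p = 200, 100
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta_star = np.concatenate([3.0 * np.ones(10), np.zeros(p - 10)])
y = X @ beta_star + 0.5 * rng.standard_normal(n)

# Base regularized estimator: ridge, i.e. h(beta) = (lam / 2) * ||beta||^2.
lam = 0.1
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def debias(beta_hat, X, y, adj, M=None):
    """One rescaled gradient step: beta_hat + adj * M @ X.T @ (y - X @ beta_hat).

    X.T @ (y - X @ beta_hat) is the negative gradient of the least-squares
    loss at beta_hat; `adj` is a scalar adjustment (a placeholder here) and
    M a direction matrix (the identity in the Spectrum-Aware proposal).
    """
    M = np.eye(X.shape[1]) if M is None else M
    return beta_hat + adj * (M @ (X.T @ (y - X @ beta_hat)))

beta_u = debias(beta_hat, X, y, adj=1.0)  # adj = 1.0 is an arbitrary choice
```

Setting `adj = 0` recovers the base estimator unchanged; the whole question addressed by the paper is how to pick `adj` so that the correction removes the shrinkage bias.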
we require that the singular value decomposition of X possesses a certain natural structure that allows for dependent samples and potentially heavy-tailed distributions. Specifically, we assume that X is right-rotationally invariant. We present the formal assumption in Definition 2.1 and discuss why it is natural for debiasing in Section 1.2. However, we emphasize that this assumption covers a broad class of designs, many of which fall outside the purview of the Gaussian designs considered in the prior literature; in particular, it covers designs (i)-(v) discussed in the preceding paragraph. We propose that under this assumption, one should choose M = I_p and obtain adj by solving a scalar equation, (3), that involves the eigenvalues {d_i²}_{1≤i≤p} of X^⊤X and the second derivative h″ of the penalty (extended by +∞ at non-differentiable points, cf. Lemma 2.7). We call the solution adj of (3) the Spectrum-Aware adjustment, since it depends on the eigenvalues of X^⊤X, and we name the corresponding debiasing procedure Spectrum-Aware debiasing (SA debiasing, in short). Figure 1 demonstrates the power of our approach: it provides accurate debiasing across rather diverse settings. We remark that, unlike degrees-of-freedom debiasing, we do not require an estimate of Σ, yet we can tackle several correlation structures among features (cf. Figure 1).
Despite the strengths of SA debiasing, we observe that it falls short when X^⊤X contains outlier eigenvalues and/or the signal aligns with some eigenvectors of X^⊤X. Since these issues occur commonly in practice (cf. Figure 5), we introduce an enhanced procedure that integrates classical Principal Components Regression (PCR) ideas with SA debiasing. In this approach, we employ PCR to handle the outlier eigenvalues, while using a combination of PCR and SA debiasing to estimate the part of the signal that does not align with these eigenvectors. We observe that this hybrid PCR-Spectrum-Aware approach works exceptionally well in challenging settings where these issues are present.
We next summarize our main contributions below.
(i) We establish that our proposed debiasing formula is well-defined, that is, (3) admits a unique solution (Proposition 3.1). We then establish that β̂^u − β⋆, with this choice of adj, converges to a mean-zero Gaussian with some variance τ_* in a Wasserstein-2 sense (Theorem 3.1; the Wasserstein-2 convergence notion is introduced in Definition 2.2). Under an exchangeability assumption on β⋆, we strengthen this result to convergence guarantees on finite-dimensional marginals of β̂^u − β⋆ (Corollary 3.7).
(ii) We develop a consistent estimator for τ_* (Theorem 3.1) by developing new algorithmic insights and new proof techniques that can be of independent interest in the context of vector approximate message passing algorithms [73,76,36] (details in Section 5.2).
(iii) To establish the aforementioned points, we impose two strong assumptions: (a) the signal β⋆ is independent of X and cannot align with any subspace spanned by a small number of eigenvectors of X^⊤X; (b) X^⊤X does not contain outlier eigenvalues. To mitigate these, we develop a PCR-Spectrum-Aware debiasing approach (Section 4) that applies when these assumptions are violated. We prove asymptotic normality for this approach in Theorem 4.1.
(iv) We demonstrate the utility of our debiasing formula in the context of hypothesis testing and confidence interval construction, with explicit guarantees on quantities such as the false positive rate, the false coverage proportion, etc. (Sections 3.4 and 4.4).
(v) As a by-product, our PCR-Spectrum-Aware approach introduces the first methodology for debiasing the classical PCR estimator (Theorem 4.1), which would otherwise exhibit a shrinkage bias due to the omission of low-variance principal components. We view this as a contribution in and of itself to the PCR literature, since inference following PCR is under-explored despite the widespread usage of PCR.
(vi) As a further by-product, we rigorously establish the risk of regularized estimators under right-rotationally invariant designs, a class much larger than Gaussian designs (Theorem 5.1). One should compare this theorem to [7], which developed a risk characterization of the Lasso in high dimensions under Gaussian design matrices.
(vii) Finally, we demonstrate the applicability of our Spectrum-Aware approach across a wide variety of covariate distributions, ranging from settings with heightened levels of correlation or heterogeneity among the rows, or a combination thereof (Figure 4), to diverse real data (Figure 5). We observe that PCR-Spectrum-Aware debiasing demonstrates superior performance across the board.
In the remaining Introduction, we walk the readers through some important discussion points, before we delve into our main results. In Section 1.1, we provide some intuition for our Spectrum-Aware construction using the example of the ridge estimator, since it admits a closed form and is simple to study. In Section 1.2, we discuss right-rotationally invariant designs and related literature, with the goal of motivating why this assumption is natural for debiasing, and emphasizing its degree of generality compared to prior Gaussian assumptions. In Section 1.3, we discuss the importance of PCR-Spectrum-Aware debiasing in the context of the PCR literature.
1.1. Intuition via ridge estimator.
To motivate SA debiasing, let us focus on the simple instance of the ridge estimator, which admits the closed form

(4) β̂ = (X^⊤X + λ₂ I_p)^{−1} X^⊤ y,  λ₂ > 0.
Recall that we seek a debiased estimator of the form β̂^u = β̂ + adj^{−1} X^⊤(y − Xβ̂). Suppose we plug in (4), leaving adj unspecified for the moment. If we denote the singular value decomposition of X by Q^⊤DO, we obtain

(5) β̂^u = V β⋆ + (1 + λ₂/adj)(X^⊤X + λ₂ I_p)^{−1} X^⊤ ε,  with V := Σ_{i=1}^p (1 + λ₂/adj) · d_i²/(d_i² + λ₂) · o_i o_i^⊤,

where o_i^⊤ ∈ R^p denotes the i-th row of O, and recall that the d_i²'s denote the eigenvalues of X^⊤X. For β̂^u to be unbiased, it appears necessary to choose adj so that it centers V around the identity matrix I_p. We thus choose adj to be the solution of the equation

(6) p^{−1} Σ_{i=1}^p (1 + λ₂/adj) · d_i²/(d_i² + λ₂) = 1.

This choice guarantees that the average of the eigenvalues of V equals 1. Solving for adj, we obtain

(7) adj = [ p^{−1} Σ_{i=1}^p (d_i² + λ₂)^{−1} ]^{−1} − λ₂.

This is precisely our Spectrum-Aware adjustment formula for the ridge estimator! However, it is not hard to see that centering V does not guarantee debiasing in general: for instance, β̂^u would have an inflation bias if β⋆ completely aligns with the top eigenvector o₁. To ensure suitable debiasing, one requires X and β⋆ to satisfy additional structure. To this end, if we further assume that O is random, independent of β⋆, and satisfies

(⋆) E[o_i o_i^⊤] = p^{−1} I_p for every i ∈ [p],

we would obtain, after choosing adj following (7), that E(β̂^u | β⋆) = β⋆. This motivates us to impose the following assumption on O.
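To make the derivation concrete, here is a small numerical sketch (my own illustration, not code from the paper). It computes the ridge adjustment from the centering requirement that the eigenvalues of V average to 1, which in closed form gives adj = (p^{−1} Σ_i (d_i² + λ₂)^{−1})^{−1} − λ₂, and then forms the debiased estimator β̂^u = β̂ + adj^{−1}X^⊤(y − Xβ̂):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam2 = 50, 30, 0.5

# Toy data y = X beta* + noise (purely illustrative).
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta_star = rng.standard_normal(p)
y = X @ beta_star + 0.1 * rng.standard_normal(n)

d2 = np.linalg.eigvalsh(X.T @ X)                 # eigenvalues d_i^2 of X^T X
adj = 1.0 / np.mean(1.0 / (d2 + lam2)) - lam2    # closed-form ridge adjustment

beta_hat = np.linalg.solve(X.T @ X + lam2 * np.eye(p), X.T @ y)  # ridge (4)
beta_u = beta_hat + X.T @ (y - X @ beta_hat) / adj               # debiased estimator

# Centering check: the eigenvalues (1 + lam2/adj) d_i^2/(d_i^2 + lam2) of V
# average exactly to 1 with this choice of adj.
v = (1.0 + lam2 / adj) * d2 / (d2 + lam2)
assert abs(v.mean() - 1.0) < 1e-10
```

The final assertion verifies the centering property that motivates the adjustment.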
ASSUMPTION 1. O is drawn uniformly at random from the set of all p × p orthogonal matrices (the orthogonal group of dimension p, which we denote by O(p)), independently of β⋆; in other words, O is drawn from the Haar measure on O(p).
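Sampling O from the Haar measure on O(p), as required by Assumption 1, can be done with the standard QR construction (a sketch for illustration; `haar_orthogonal` is my own helper, not from the paper):

```python
import numpy as np

def haar_orthogonal(p, rng):
    """Draw O uniformly (Haar) from the orthogonal group O(p):
    QR-factorize a Gaussian matrix and fix the signs on R's diagonal."""
    Z = rng.standard_normal((p, p))
    Q, R = np.linalg.qr(Z)
    # Without the sign fix, plain QR is not exactly Haar-distributed.
    return Q * np.sign(np.diag(R))

rng = np.random.default_rng(1)
O = haar_orthogonal(4, rng)
assert np.allclose(O @ O.T, np.eye(4))  # O is orthogonal
```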
We operate under this assumption since it ensures (⋆) holds and our Spectrum-Aware adjustment turns out to be the correct debiasing strategy in this setting.
Meanwhile, the degrees-of-freedom adjustment [11] yields the correction factor

ǎdj := 1 − n^{−1} Σ_{i=1}^p d_i²/(d_i² + λ₂).

Notably, adj and ǎdj may be quite different. Unlike adj, ǎdj may not center the spectrum of V, and it does not yield E(β̂^u | β⋆) = β⋆ in general. However, it is important to note that they coincide asymptotically, and ǎdj would provide accurate debiasing, if one assumes that the empirical distribution of {d_i²}_{i=1}^p converges weakly to the Marchenko-Pastur law (cf. Appendix A.4), a property that many design matrices do not satisfy. In other words, DF debiasing is sub-optimal in the sense that it implicitly assumes that the spectrum of X^⊤X converges to the Marchenko-Pastur law, rather than using the actual spectrum. We provide examples of designs where DF debiasing fails in Figure 1. In contrast, adj is applicable in much broader settings, as it accounts for the actual spectrum of X^⊤X. Figure 1 shows the clear strengths of our approach over DF debiasing.
1.2. Right-rotationally invariant designs. Roughly speaking, Assumption 1 places X in the class of right-rotationally invariant designs (Definition 2.1). Our arguments in the preceding section indicate that right-rotational invariance is a more fundamental assumption for debiasing than the prior Gaussian/sub-Gaussian assumptions in the literature. Since it preserves the spectral information of X^⊤X, we expect methods developed under this assumption to exhibit improved robustness when applied to designs that may not satisfy the right-rotational invariance property or designs observed in real data. We demonstrate this via Figure 5, where we conduct experiments with our PCR-Spectrum-Aware debiasing on designs arising from six real datasets spanning image data, financial data, socio-economic data, and so forth.
Indeed, various research communities have realized the strength of such designs, as demonstrated by the recent surge of literature in this space [83,87,84,26,73,101,76,6,70,71,63,72,85,62,90,33,60,75]. In particular, [27] established that properties of high-dimensional systems proven under such designs continue to hold for a broad class of designs (including nearly deterministic designs as observed in compressed sensing [23]) as long as they satisfy certain spectral properties. In fact, the universality class for such designs is far broader than that for Gaussians, suggesting that they may serve as a solid prototype for modeling high-dimensional phenomena arising in non-Gaussian data. Despite such exciting developments, there are hardly any results when it comes to debiasing or inference under such designs (with the exception of [82], which we discuss later). This paper develops this important theory and methodology.
Despite the generality of right-rotationally invariant designs, studying them presents new challenges. For starters, analogs of the leave-one-out approach [66,65,8,29,97,28,80,79,20,56] and Stein's method [78,19,11,9,10,3], both of which form fundamental proof techniques for Gaussian designs, are nonexistent or under-developed for this more general class. To mitigate this issue, we resort to an algorithmic proof strategy that the senior authors' earlier work and that of others have used in the context of Gaussian designs. To study β̂^u, we observe that it depends on the regularized estimator β̂. However, β̂ does not admit a closed form in general, so studying it directly turns out to be difficult. To circumvent this, we introduce a surrogate estimator that approximates β̂ in a suitable high-dimensional sense. We establish refined properties of these surrogates and use their closeness to our estimator of interest to infer properties of the latter. Prior literature has invoked this algorithmic proof strategy for Gaussian designs under the name of approximate message passing theory [24,7,80,79,100]. In the case of right-rotationally invariant designs, we create the surrogate estimators using vector approximate message passing (VAMP) algorithms [73] (see details in Appendices A.6 and B.1).
However, unlike the Gaussian case, proving that these surrogates approximateβ presents deep challenges. We overcome this via developing novel properties of VAMP algorithms that can be of independent interest to the signal processing [83], probability [87], statistical physics [84], information and coding theory [73,72,90] communities that seek to study models with right-rotationally invariant designs in the context of myriad other problems.
Among the literature related to right-rotationally invariant designs, two prior works are the most relevant for us. Of these, [40] initiated a study of the risk ofβ under right-rotationally invariant designs using the VAMP machinery. However, their characterization is partially heuristic, meaning that they assume certain critical exchange of limits is allowed and that limits of certain fundamental quantities exist. The former assumption may often not hold, and the latter is unverifiable without proof (see Remark 5.1 for further details). As a by-product of our work on debiasing, we provide a complete rigorous characterization of the risk of regularized estimators under right-rotationally invariant designs (Theorem 5.1) without these unverifiable assumptions. The second relevant work is [82], which conjectures a population version of a debiasing formula for the Lasso using non-rigorous statistical physics tools. To be specific, they conjecture a debiasing formula that involves unknown parameters related to the underlying limiting spectral distribution of the sample covariance matrix. This formula does not provide an estimator that can be calculated from the observed data. In contrast, we develop a complete data-driven pipeline for debiasing and develop a consistent estimator for its asymptotic variance.
1.3. Debiased Principal Components Regression (PCR).
After describing SA debiasing, we identify two common scenarios that remain challenging: (i) a small subset of the eigenvectors of X^⊤X align strongly with the true signal (referred to as the alignment PCs); (ii) the top few eigenvalues are significantly separated from the bulk of the spectrum (referred to as the outlier PCs). The presence of these two types of PCs breaks crucial assumptions of degrees-of-freedom and Spectrum-Aware debiasing and has important practical implications. Many real-world designs contain a latent structure where a small number of PCs dominate the rest. These dominant PCs can distort the normality of the debiased estimator significantly, depending on how they align with the signal vector. Motivated by these issues, we develop debiased Principal Components Regression, leveraging SA debiasing as a sub-routine.
The classical PCR estimator, when transformed back to the original basis, exhibits a shrinkage bias [34,37,38,13,25,58]. This bias arises from the loss of the portion of the signal that aligns with the subspace spanned by the discarded PCs. Our approach involves "re-purposing" the discarded PCs to form a new regression problem, where the new signal corresponds to the lost segment. We then form a debiased estimator for this lost segment using SA debiasing. We combine the resulting estimator ("complement PCR estimator") with the classical PCR estimator ("alignment PCR estimator") to form a debiased PCR estimator for the original signal.
This combination of PCR ideas with SA debiasing proves effective in handling scenarios (i) and (ii). As a result, one observes substantial improvement compared to prior debiasing strategies across an array of real-data designs, in addition to challenging designs that exhibit extremely strong correlations and heterogeneities.
Beyond addressing the two specific challenges, our work also contributes to an extensive and growing body of work on PCR [57,49,5,30,2,47,1,46,99,50,77,81]. Most recent works focus on improving or characterizing the predictive performance of PCR. Although the shrinkage bias of PCR estimators is well understood, limited prior work explores how one should remove this bias in high dimensions and the potential benefits of such debiasing for statistical inference. To the best of our knowledge, our work provides the first approach with formal high-dimensional guarantees for debiasing the classical PCR estimator.
We organize the rest of the paper as follows. In Section 2, we introduce our assumptions and preliminaries. In Sections 3 and 4, we introduce our Spectrum-Aware and PCR-Spectrum-Aware methods with formal guarantees. In Section 5, we present our proof outline and technical novelties. Finally in Section 6, we conclude with potential directions for future work.
2. Assumptions and Preliminaries.
In this section, we introduce our assumptions and preliminaries that we require for the sequel.
2.1. Design matrix, signal and noise. We first formally define right-rotationally invariant designs. DEFINITION 2.1 (Right-rotationally invariant designs). Consider the singular value decomposition X = Q^⊤DO, where Q ∈ R^{n×n} and O ∈ R^{p×p} are orthogonal and D ∈ R^{n×p} is diagonal with D ≠ 0. We say a design matrix X ∈ R^{n×p} is right-rotationally invariant if Q, D are deterministic and O is uniformly distributed on the orthogonal group.
We work in a high-dimensional regime where p and n(p) both diverge and n(p)/p → δ ∈ (0, +∞). Known as proportional asymptotics, this regime has gained increasing popularity in recent times owing to the fact that asymptotic results derived under this assumption demonstrate remarkable finite-sample performance (cf. extensive experiments in [80,79,15,100,61,56] and the references cited therein). In this setting, we consider a sequence of problem instances {y(p), X(p), β⋆(p), ε(p)}_{p≥1} such that y(p), ε(p) ∈ R^{n(p)}, X(p) ∈ R^{n(p)×p}, β⋆(p) ∈ R^p and y(p) = X(p)β⋆(p) + ε(p). In the sequel, we drop the dependence on p whenever it is clear from context. For a vector v ∈ R^p, we call its empirical distribution the probability distribution that puts equal mass 1/p on each coordinate of the vector. Some of our convergence results will be in terms of empirical distributions of sequences of random vectors. Specifically, we will use the notion of Wasserstein-2 convergence frequently, so we introduce it next.

DEFINITION 2.2 (Convergence of empirical distribution under Wasserstein-2 distance). For a matrix (v₁, ..., v_k) = (v_{i,1}, ..., v_{i,k})_{i=1}^n ∈ R^{n×k} and a random vector (V₁, ..., V_k), we write (v₁, ..., v_k) W₂→ (V₁, ..., V_k) to mean that the empirical distribution of the rows of (v₁, ..., v_k) converges to the law of (V₁, ..., V_k) in Wasserstein-2 distance. Equivalently, for any continuous function f : R^k → R satisfying the quadratic growth condition |f(x)| ≤ C(1 + ‖x‖²) for some constant C, the empirical averages n^{−1} Σ_{i=1}^n f(v_{i,1}, ..., v_{i,k}) converge to E f(V₁, ..., V_k). See the appendix for a review of the properties of Wasserstein-2 convergence.

ASSUMPTION 2 (Measurement matrix). We assume that X ∈ R^{n×p} is right-rotationally invariant (Definition 2.1) and independent of ε. For the eigenvalues, we assume that as n, p → ∞,

(11) (d₁², ..., d_p²) W₂→ D²,

where D² has non-zero mean with compact support supp(D²) ⊆ [0, ∞). We denote d_− := min(x : x ∈ supp(D²)). Furthermore, we assume that as p → ∞,

(12) limsup_p max_{i∈[p]} d_i² < ∞.

REMARK 2.3. The constraint (12) states that X^⊤X has bounded operator norm. It has important practical implications.
It prevents the occurrence of outlier eigenvalues, where a few prominent eigenvalues of X ⊤ X deviate significantly from the main bulk of the spectrum.
We work with Assumption 2 for part of the sequel, in particular, Section 3. But later in Section 4, we relax restriction (12).
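For intuition, Wasserstein-2 convergence of one-dimensional empirical distributions (the k = 1 case of Definition 2.2) can be monitored numerically: in one dimension the optimal coupling pairs sorted values with quantiles. A small illustrative sketch (mine, not from the paper):

```python
import numpy as np
from scipy.stats import norm

def w2_to_normal(v):
    """W2 distance between the empirical distribution of v and N(0,1),
    via the optimal quantile coupling available in one dimension."""
    n = len(v)
    q = norm.ppf((np.arange(1, n + 1) - 0.5) / n)  # N(0,1) quantile grid
    return np.sqrt(np.mean((np.sort(v) - q) ** 2))

rng = np.random.default_rng(2)
d_small = w2_to_normal(rng.standard_normal(200))
d_large = w2_to_normal(rng.standard_normal(200_000))
# The distance shrinks as the sample size grows: the empirical
# distribution of a Gaussian sample converges to N(0,1) in W2.
```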
Since our debiasing procedure relies on the spectrum of X^⊤X, analyzing its properties requires a thorough understanding of D² (from (11)), the limit of the empirical spectral distribution of X^⊤X. Often these properties can be expressed using two important quantities: the Cauchy transform and the R-transform. We define these next. For technical reasons, we will define these transforms for the law of −D².
DEFINITION 2.4 (Cauchy and R-transforms). For the law of −D², define the Cauchy transform G(z) := E[(z + D²)^{−1}] for real z > −d_−, and the R-transform R(z) := G^{−1}(z) − 1/z, where G^{−1}(·) is the inverse function of G(·). See properties and well-definedness of these in Lemma A.8. We set G(−d_−) := lim_{z→−d_−} G(z).
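Assuming the standard free-probability conventions for the law of −D², namely G(z) = E[(z + D²)^{−1}] and R(s) = G^{−1}(s) − 1/s (my reading of the definitions referenced here), both transforms can be approximated from an empirical spectrum; a hypothetical sketch:

```python
import numpy as np
from scipy.optimize import brentq

d2 = np.array([0.5, 1.0, 1.5, 2.0, 3.0])  # toy eigenvalues of X^T X
d_minus = d2.min()

def G(z):
    """Empirical Cauchy transform of the law of -D^2, for z > -d_minus."""
    return np.mean(1.0 / (z + d2))

def R(s):
    """Empirical R-transform R(s) = G^{-1}(s) - 1/s; G is strictly
    decreasing on (-d_minus, +inf), so invert it by root-finding."""
    z = brentq(lambda z: G(z) - s, -d_minus + 1e-9, 1e6)
    return z - 1.0 / s

s = G(1.0)
# By construction G^{-1}(G(1.0)) = 1.0, so R(s) equals 1.0 - 1/s.
assert abs(R(s) - (1.0 - 1.0 / s)) < 1e-6
```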
We next move to discussing our assumptions on the signal and noise. ASSUMPTION 3 (Signal and noise). We assume throughout that ε ∼ N(0, σ²·I_n) for a potentially unknown noise level σ² > 0. We require that β⋆ is either deterministic or independent of O, ε. In the former case, we assume that β⋆ W₂→ B⋆, where B⋆ is a random variable with finite variance; in the latter case, we assume the same convergence holds almost surely. REMARK 2.5. The independence condition between β⋆ and O, along with the condition that O is drawn uniformly from the orthogonal group, enforces that β⋆ cannot align with a small number of the eigenvectors of X^⊤X. Once again, we require these assumptions in Section 3, but we relax them later in Section 4. REMARK 2.6. We believe the assumption on the noise can be relaxed in many settings. For instance, if we assume Q (Definition 2.1) to be uniformly distributed on the orthogonal group, independent of O and β⋆, one may work with the relaxed assumption that ε W₂→ E for any random variable E with mean 0 and variance σ². This encompasses many noise distributions beyond Gaussians. Even without such an assumption on Q, allowing for sub-Gaussian noise distributions should be feasible by invoking universality results. However, in this paper, we prefer to focus on fundamentally breaking the i.i.d. Gaussian assumptions on X in prior works. In this light, we work with the simpler Gaussian assumption on the noise.
In the next segment, we describe the penalty functions that we work with.
2.2. Penalty function. As observed in the vast majority of the literature on high-dimensional regularized regression, the proximal map of the penalty function plays a crucial role in understanding the properties of β̂. We introduce this map next.
Let the proximal map associated with h be

(13) Prox_{vh}(x) := argmin_{z∈R} { (x − z)²/2 + v·h(z) },  v > 0.

ASSUMPTION 4 (Penalty function). We assume that h : R → [0, +∞) is proper, closed, and satisfies, for some c₀ ≥ 0 and all x, y ∈ R, t ∈ [0, 1],

(14) h(tx + (1−t)y) ≤ t·h(x) + (1−t)·h(y) − (c₀/2)·t(1−t)(x − y)².

Here, c₀ = 0 is equivalent to assuming h is convex, and c₀ > 0 to assuming h is c₀-strongly convex. Furthermore, we assume that h(x) is twice continuously differentiable except on a finite set D of points, and that h″(x) and Prox′_{vh}(x) have been extended at their respective undefined points using Lemma 2.7 below.

LEMMA 2.7 (Extension at non-differentiable points). Fix any v > 0. Under Assumption 4, x → Prox_{vh}(x) is continuously differentiable at all but a finite set C of points. Extending the functions x → h″(x) and x → Prox′_{vh}(x) on D and C by +∞ and 0, respectively, we have that for all x ∈ R,

(15) Prox′_{vh}(x) = 1 / (1 + v·h″(Prox_{vh}(x))).

After the extension, for any w > 0, x → 1/(w + h″(Prox_{vh}(x))) is piecewise continuous with finitely many discontinuity points, on which it takes the value 0.
We defer the proof to Appendix A.2. We consider this extension since our debiasing formula involves the second derivative of h(·); the extension allows us to handle cases where the second derivative may not exist everywhere. As an example, we compute the extension for the Elastic Net penalty and demonstrate the form our debiasing formula takes after plugging in this extended version of h(·).

EXAMPLE 2.8 (Elastic Net penalty). Consider the Elastic Net penalty

(16) h(x) = λ₁|x| + (λ₂/2)x².

This is twice continuously differentiable except at x = 0 (i.e., D = {0}). Fix any v > 0. Then Prox_{vh}(x) = sign(x)·(|x| − λ₁v)_+ / (1 + λ₂v), and we extend h″ and Prox′_{vh} at their undefined points by +∞ and 0, respectively, so that (15) holds for all x ∈ R. Note also that for any w > 0, x → 1/(1 + w·h″(Prox_{vh}(x))) = I(|x| > λ₁v)/(1 + λ₂w) is piecewise continuous and takes the value 0 at both of its discontinuity points. It follows that our adjustment equation (3) simplifies accordingly. As a sanity check, if one sets λ₂ = 0 and solves the population version of the resulting equation with D² drawn from the Marchenko-Pastur law, then one recovers the well-known degrees-of-freedom adjustment for the Lasso: adj = 1 − ŝ/n, where ŝ denotes the number of nonzero coordinates of β̂.
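The Elastic Net formulas of Example 2.8 can be sketched directly (soft-thresholding at λ₁v followed by shrinkage by 1/(1 + λ₂v); helper names are mine):

```python
import numpy as np

def prox_enet(x, v, lam1, lam2):
    """Proximal map of v*h for h(x) = lam1*|x| + (lam2/2)*x^2:
    soft-threshold at lam1*v, then shrink by 1/(1 + lam2*v)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam1 * v, 0.0) / (1.0 + lam2 * v)

def prox_enet_deriv(x, v, lam1, lam2):
    """Extended derivative from Lemma 2.7: 1/(1 + lam2*v) where |x| > lam1*v,
    and 0 elsewhere (matching the extension of h'' by +inf at 0)."""
    return (np.abs(x) > lam1 * v) / (1.0 + lam2 * v)

x = np.array([-2.0, -0.3, 0.0, 0.3, 2.0])
p = prox_enet(x, v=1.0, lam1=0.5, lam2=0.1)
# Entries with |x| <= 0.5 map exactly to zero; larger entries are shrunk.
```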
The following assumption is analogous to [11, Assumption 3.1] for the Gaussian design. ASSUMPTION 5. We require that either h is strongly convex (c₀ > 0 in (14)) or X^⊤X is non-singular with smallest eigenvalue bounded away from 0.
2.3. Fixed-point equation. Recall our discussion from Section 1.2. We study the regularized estimator β̂ by introducing a more tractable surrogate β̂_t. Later we will see that we construct this surrogate using an iterative algorithmic scheme known as the Vector Approximate Message Passing (VAMP) algorithm [73]. Thus, to study the surrogate, one needs to study the VAMP algorithm carefully. One can describe the properties of this algorithm using a system of fixed-point equations, labeled (19), in four variables; we use γ_*, η_*, τ_*, τ_{**} ∈ (0, +∞) to denote these variables. In (19), Z ∼ N(0, 1) is independent of B⋆. We remind the reader that x → Prox′_{γ_*^{−1}h}(x) is well-defined on R by the extension described in Lemma 2.7.
The following assumption ensures that at least one solution exists.
ASSUMPTION 6 (Existence of fixed points). There exists a solution γ_*, η_*, τ_*, τ_{**} ∈ (0, +∞) such that (19) holds. The following proposition shows that Assumption 6 holds for any Elastic Net penalty with λ₂ > 0 (i.e., excluding the pure Lasso). See the proof in Appendix A.5.3.

PROPOSITION 2.10 (Assumption 6 holds for the Elastic Net). Suppose that D² and B⋆ are as in Assumptions 2 and 3. For the Elastic Net penalty h defined in Example 2.8 with λ₂ > 0, Assumption 6 holds for any σ² > 0.

REMARK 2.11 (Verifying Assumption 6). Proving Assumption 6 is difficult for general penalties and D². One therefore hopes to verify it on a case-by-case basis. For instance, when D² follows the Marchenko-Pastur law and h is the Lasso penalty, [7, Propositions 1.3, 1.4] proved that (19) admits a unique solution. Similarly, [80] proved existence and uniqueness of the solution to an analogous fixed-point equation in the context of logistic and probit regression. For general D² in the context of linear regression, there are no rigorous results establishing that the system admits a solution. In Proposition 2.10, we take the first step and show for the first time that Assumption 6 indeed holds for the Elastic Net for general D² satisfying our assumptions. This result may be of independent interest: we anticipate that our proof can be adapted to penalties beyond the Elastic Net; however, since our arguments require an explicit expression for the proximal operator, we limit our presentation to the Elastic Net. That said, one expects Assumption 6 to always hold under Assumptions 2-5 (see e.g. [39,40]).

ASSUMPTION 7 (Feasibility of noise-level estimation). When the noise level σ² is unknown, we require that γ_*, η_* defined in (19) and D² defined in Assumption 2 satisfy a non-degeneracy condition, (20), whose left-hand side involves δ, γ_*, η_*, and D².

REMARK 2.12. Assumption 7 serves as a technical condition to rule out degenerate scenarios where estimating σ² is impossible.
For example, this condition is not satisfied when n = p and X = I_p: in this case, our sole observation is y = β⋆ + ε, and it is indeed impossible to estimate σ². We provide a consistent estimator for the left-hand side of (20) in (52), facilitating the verification of Assumption 7.
3. Debiasing with Spectrum-Aware adjustment. Recall that our debiasing formula involved adj obtained by solving (3). To ensure our estimator is well-defined, we need to establish that this equation has a unique solution. In this section, we address this issue, establish asymptotic normality of our debiased estimator (suitably centered and scaled), and present a consistent estimator for its asymptotic variance.
3.1. Well-definedness of our debiasing formula. To show that (3) admits a unique solution, we define a function g_p : (0, +∞) → R in (21), where h″(·) refers to the extended version we defined using Lemma 2.7; solving (3) amounts to solving g_p(γ) = 1.
The following Proposition is restated from Proposition B.10.
PROPOSITION 3.1. Fix p ≥ 1 and suppose that Assumption 4 holds. Then the function γ → g_p(γ) is well-defined and strictly increasing for γ > 0, and the equation g_p(γ) = 1 admits a unique solution in (0, +∞) if and only if there exists some i ∈ [p] such that h″(β̂_i) ≠ +∞ and one of two additional conditions (stated in Proposition B.10) holds. As a concrete example, the assumptions of Proposition 3.1 hold for the Elastic Net with λ₂ > 0.

REMARK 3.3. To find the unique solution of g_p(γ) = 1, we recommend using Newton's method initialized at γ = 1. In rare cases where Newton's method fails to converge, we suggest using a bisection-based method, such as Brent's method, to solve (3) on the interval (0, max_{i∈[p]} d_i²], where convergence is guaranteed (by Jensen's inequality, the solution must be upper bounded by max_{i∈[p]} d_i²). For numerical stability, we suggest re-scaling the design matrix X so that the average of the eigenvalues of X^⊤X equals 1.
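The bracketing strategy of Remark 3.3 can be illustrated on the ridge specialization from Section 1.1, where the centering condition (average eigenvalue of V equal to 1) is explicit and admits a closed-form solution (my derivation; names are illustrative):

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(3)
d2 = rng.uniform(0.2, 2.0, size=200)  # toy spectrum of X^T X
lam2 = 0.5                            # ridge penalty level

def avg_eig_V(gamma):
    """Average eigenvalue of V in the ridge case, at adjustment gamma."""
    return np.mean((1.0 + lam2 / gamma) * d2 / (d2 + lam2))

# Bisection-type solve on (0, max_i d_i^2], as suggested in Remark 3.3.
adj = brentq(lambda t: avg_eig_V(t) - 1.0, 1e-8, d2.max())

# Agrees with the closed form available in the ridge case.
adj_closed = 1.0 / np.mean(1.0 / (d2 + lam2)) - lam2
assert abs(adj - adj_closed) < 1e-8
```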
3.2. The procedure. In this section, we introduce our Spectrum-Aware debiasing algorithm (Algorithm 1). We also include, as part of Algorithm 1, our methodology for constructing a consistent estimator of the asymptotic variance of this estimator (centered at the truth). To introduce these, we require some intermediate quantities that depend on the observed data and the choice of penalty; we define them next. Later, in Section 5, we will provide intuition as to why these intermediate quantities are important and how we construct the variance estimator. DEFINITION 3.4 (Scalar statistics). Let adj(X, y, h) ∈ (0, +∞) be the unique solution to (3). We define scalar statistics, collectively labeled (23), among them the variance estimators τ̂_* and τ̂_{**}, in terms of adj, the spectrum of X^⊤X, and an estimator σ̂² of the noise level σ² (see Remark 3.5 below). The quantities in (23) are well-defined for any p (i.e., there are no zero-valued denominators) if there exists some i ∈ [p] such that h″(β̂_i) ≠ +∞. Going forward, we suppress the dependence on X, y, h for convenience.
REMARK 3.5. The computation ofτ * andτ * * in (23) requires an estimatorσ 2 for the noise level σ 2 when it is not already known. We provide a consistent estimator in (51) that we use in all our numerical experiments.
We state the Spectrum-Aware debiasing procedure in Algorithm 1. On a first look, the definitions (23) might appear opaque. However, we construct them from crucial algorithmic insights that we will explain later in Section 5.
3.3. Asymptotic normality. Theorem 3.1 below states that the empirical distribution of (τ̂_*)^{−1/2}(β̂^u − β⋆) converges to a standard Gaussian.
Algorithm 1 Spectrum-Aware debiasing procedure
Input: response and design (y, X), and a penalty function h.
1: Find the minimizer β̂ of (1).
2: Compute the eigenvalues (d_i²)_{i=1}^p of X^⊤X and solve (3) for adj(X, y, h).
3: Output the debiased estimator β̂^u = β̂ + adj^{−1}X^⊤(y − Xβ̂) and the associated variance estimator τ̂_*(X, y, h) from (23).

THEOREM 3.1 (Asymptotic normality of β̂^u). Suppose that Assumptions 2-7 hold. Then we have that, almost surely as p → ∞,

(τ̂_*)^{−1/2}(β̂^u − β⋆) W₂→ N(0, 1).

We illustrate Theorem 3.1 in Figure 1 under five different right-rotationally invariant designs (cf. Remark F.2) with non-trivial correlation structures, and compare with DF debiasing with M = I_p. The corresponding QQ plot can be found in Figure 8. We observe that our method outperforms DF debiasing by a margin.
We next develop a different result that characterizes the asymptotic behavior of finite-dimensional marginals of β̂^u. Corollary 3.7 below establishes this under an additional exchangeability assumption on β⋆. To state the corollary, we recall the standard definition of exchangeability for a sequence of random variables. DEFINITION 3.6 (Exchangeability). We call a sequence of random variables (V_i)_{i=1}^p exchangeable if, for any permutation π of the indices 1, ..., p, the joint distribution of the permuted sequence (V_{π(i)})_{i=1}^p is the same as that of the original sequence. Corollary 3.7 below is a consequence of Theorem 3.1; we defer its proof to Appendix C.2. COROLLARY 3.7. Fix any finite index set I ⊂ [p]. Suppose that Assumptions 2-7 hold, and that (β⋆_j)_{j=1}^p is exchangeable and independent of X, ε. Then, as p → ∞, we have ((τ̂_*)^{−1/2}(β̂^u_i − β⋆_i))_{i∈I} ⇒ N(0, I_{|I|}), where ⇒ denotes weak convergence. Observe that we once again outperform degrees-of-freedom debiasing. Corollary 3.7 is naturally useful for constructing confidence intervals for finite-dimensional marginals of β⋆ with associated false coverage proportion guarantees. We discuss this in detail in the next section.
3.4. Inference. In this section, we discuss applications of our Spectrum-Aware debiasing approach to hypothesis testing and the construction of confidence intervals. Consider the null hypothesis H_{0,i} : β⋆_i = 0. We define p-values P_i and a decision rule T_i (T_i = 1 means rejecting H_{0,i}) for the test of H_{0,i} via

P_i := 2(1 − Φ(|β̂^u_i| / (τ̂_*)^{1/2})),  T_i := 1{P_i ≤ α},

where Φ denotes the standard Gaussian CDF and α ∈ [0, 1] is the significance level. We define the false positive rate (FPR) and true positive rate (TPR), whenever their respective denominators are non-zero, as

FPR(p) := #{i : T_i = 1, β⋆_i = 0} / #{i : β⋆_i = 0},  TPR(p) := #{i : T_i = 1, β⋆_i ≠ 0} / #{i : β⋆_i ≠ 0}.

Fix α ∈ [0, 1]. We can construct confidence intervals CI_i := [β̂^u_i + a·(τ̂_*)^{1/2}, β̂^u_i + b·(τ̂_*)^{1/2}] for any a, b ∈ R such that Φ(b) − Φ(a) = 1 − α, and define the associated false coverage proportion FCP(p) := p^{−1}·#{i : β⋆_i ∉ CI_i} for any p ≥ 1. Theorem 3.1 directly yields guarantees on the FPR, TPR and FCP, as shown in Corollary 3.8 below. We defer the proof to Appendix C.1.
COROLLARY 3.8. Suppose that Assumptions 2-6 hold. We have the following.
(a) Suppose that P(B⋆ = 0) > 0 and there exists some µ₀ ∈ (0, +∞) for which the nonzero part of B⋆ satisfies a separation condition at level µ₀. Then, for any fixed i such that β⋆_i = 0, we have lim_{p→∞} P(T_i = 1) = α, and the false positive rate satisfies, almost surely, lim_{p→∞} FPR(p) = α. Refer also to Remark C.1 for the exact asymptotic limit of the TPR. (b) The false coverage proportion satisfies, almost surely, lim_{p→∞} FCP(p) = α.
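The testing pipeline of this section can be sketched as follows, assuming a vector of debiased estimates beta_u and a variance estimate tau are already available (variable names are mine, and the synthetic input below stands in for a pure-null setting):

```python
import numpy as np
from scipy.stats import norm

def sa_tests(beta_u, tau, alpha=0.05):
    """Two-sided p-values and rejections for H_{0,i}: beta*_i = 0, treating
    each tau^{-1/2} * beta_u_i as approximately N(0,1) under the null."""
    z = beta_u / np.sqrt(tau)
    pvals = 2.0 * (1.0 - norm.cdf(np.abs(z)))
    return pvals, pvals <= alpha

def sa_ci(beta_u, tau, alpha=0.05):
    """Symmetric level-(1-alpha) confidence intervals per coordinate."""
    half = norm.ppf(1.0 - alpha / 2.0) * np.sqrt(tau)
    return beta_u - half, beta_u + half

# Under a pure null, the false positive rate should be close to alpha.
rng = np.random.default_rng(4)
tau = 1.0
beta_u = np.sqrt(tau) * rng.standard_normal(100_000)  # null coordinates
_, reject = sa_tests(beta_u, tau)
fpr = reject.mean()  # approximately 0.05
```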
We demonstrate Corollary 3.8 in Figure 3. We note that the FPR and FCP values obtained from DF debiasing diverge from the intended α values, showing a clear misalignment with the 45-degree line. In contrast, the Spectrum-Aware debiasing method aligns rather well with the specified α values, and this occurs without much compromise on the TPR level.
4. PCR-Spectrum-Aware debiasing.
4.1.
Challenges of alignment and outlier eigenvalues. We revisit our discussion in Section 1.1 on debiasing the ridge regression. Recall that we chose adj to center the spectrum of V below at 1: where d 2 i and o i denote, respectively, the eigenvalues and eigenvectors of X ⊤ X. Our motivation was that with this choice of adj, (29) V ≈ I p + unbiased component, and β u would be centered around β ⋆ . However, a simple calculation reveals a potential issue with this approach: if β ⋆ perfectly aligns with the top eigenvector o 1 , we incur an inflation bias. One can generalize this to the alignment of β ⋆ with any small subset J 1 of [p] of the eigenvectors, where the bias could be an inflation or a shrinkage depending on the set of eigenvectors β ⋆ aligns with. We refer to this as the alignment issue. Formally, such alignment violates the independence assumption between β ⋆ and X that we required earlier in Assumption 3 (cf. Remark 2.5).
Another common issue arises: often the top few eigenvalues of X ⊤ X, indexed by J 2 := {1, ..., J 2 }, are significantly separated from the bulk of the spectrum. In this case, after centering the spectrum of V, the variance of the unbiased component of V in (29) will be large, thus making the debiasing procedure unstable. We refer to these eigenvalues as outlier eigenvalues. Formally, the existence of such eigenvalues violates (12) in Assumption 2 (cf. Remark 2.3).
We formalize both these issues by relaxing Assumptions 2 and 3 to Assumption 8 below. To this end, denote N := {i ∈ [p] : d 2 i > 0} and N := |N |. We let J be a user-chosen, finite index set (30) J ⊆ N that contains J 1 ∪ J 2 as a subset (see Remark 4.2). We denote its size as J := |J |.
ASSUMPTION 8. We assume that J defined in (30) is of finite size and that, for some real-valued υ ⋆ and ζ ⋆ , the decomposition (31) holds, where we use J (i) to denote the i-th index in J . Both υ ⋆ and ζ ⋆ are unknown, and they can be either deterministic or random and independent of O, ε. If ζ ⋆ is deterministic, we assume that ζ ⋆ W2 → C ⋆ as n, p → ∞, where C ⋆ is a random variable with finite variance. If ζ ⋆ is random, we assume the same convergence holds almost surely. Furthermore, we assume that Assumption 2 holds except that, instead of (12), we only require Note that when J = ∅, Assumption 8 reduces to Assumptions 2 and 3 precisely.
REMARK 4.1. Assumption 8 does not impose any constraints on υ ⋆ ∈ R J . For example, it is permitted that υ ⋆ = 0 or that p −1 ∥υ ⋆ ∥ 2 diverges as p → ∞. Note that Assumption 8 also permits ζ ⋆ = 0 but p −1 ∥ζ ⋆ ∥ 2 cannot diverge. REMARK 4.2. J needs to be a finite index set that contains both J 1 (as in (30)) and J 2 (as in (32)) as subsets. The index set J 2 can be determined by observing the spectrum of X ⊤ X. J 1 is generally not observed and requires prior information. However, we remark that eigenvectors indexed by J 1 ∩ J 2 tend to distort the debiasing procedure most severely. So often just including J 2 in J can significantly improve inference.
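As a concrete illustration of Remark 4.2, the sketch below flags outlier eigenvalues of X ⊤ X by the largest multiplicative gap among the top eigenvalues. This gap heuristic is our own illustrative stand-in for "significantly separated from the bulk", not a criterion prescribed by the paper:

```python
import numpy as np

def outlier_pc_indices(eigvals, k=20, ratio=3.0):
    """Heuristic choice of J_2 (0-based): eigenvalues sitting before the largest
    multiplicative gap among the top-k, flagged only if the gap exceeds `ratio`."""
    vals = np.sort(eigvals)[::-1]          # descending
    gaps = vals[:k] / vals[1:k + 1]        # consecutive ratios among top-k
    j = int(np.argmax(gaps))
    return list(range(j + 1)) if gaps[j] > ratio else []

rng = np.random.default_rng(1)
n, p = 300, 200
X = rng.standard_normal((n, p))
X[:, 0] *= 8.0                             # plant one strong direction -> one outlier eigenvalue
d2 = np.linalg.eigvalsh(X.T @ X)[::-1]     # eigenvalues of X^T X, descending
J2 = outlier_pc_indices(d2)
```

On this toy design, the planted direction produces a single eigenvalue far above the Marchenko-Pastur-type bulk, and the heuristic recovers exactly that index.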
Under Assumption 8, we develop a debiasing approach that recovers both components of β ⋆ in (31). Since we utilize ideas from principal components regression, we first describe the classical PCR algorithm. As the reader will soon discover, our debiasing approach also produces a debiased version of this classical PCR estimator, and we thereby contribute independently to the PCR literature.
The PCR algorithms.
4.2.1. Classical PCR. Algorithm 1 below describes the classical PCR procedure with respect to the principal components indexed by K. It will be used as a sub-routine in the algorithms we present in the sequel. ALGORITHM 1 (Classical PCR). Given input (X, y) and a user-chosen size-K index set K satisfying the PCR procedure computes the singular value decomposition X = Q ⊤ DO and outputs
where W K := XO ⊤ K ∈ R n×K denotes the basis-transformed design matrix with orthogonal columns and O K ∈ R K×p denotes the rows of O indexed by K.
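A minimal numpy rendering of Algorithm 1. The index set K is 0-based here, and numpy's SVD convention X = U diag(s) Vt is mapped onto the paper's X = Q ⊤ DO with Vt playing the role of O; all names are illustrative:

```python
import numpy as np

def classical_pcr(X, y, K):
    """Sketch of classical PCR: regress y on the basis-transformed design W_K = X O_K^T."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    O_K = Vt[K, :]                 # rows of O indexed by K, shape (|K|, p)
    W_K = X @ O_K.T                # transformed design; its columns are orthogonal
    theta, *_ = np.linalg.lstsq(W_K, y, rcond=None)
    return theta, O_K

rng = np.random.default_rng(2)
n, p = 120, 60
X = rng.standard_normal((n, p))
y = X @ np.concatenate([np.ones(3), np.zeros(p - 3)]) + 0.1 * rng.standard_normal(n)
theta, O_K = classical_pcr(X, y, list(range(5)))   # top 5 PCs
G = (X @ O_K.T).T @ (X @ O_K.T)    # Gram matrix of W_K: diagonal up to rounding
```

The diagonal Gram matrix confirms the claim in the text that W K has orthogonal columns (W K = Q ⊤ restricted to K, scaled by the singular values).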
Alignment PCR.
In this section, we introduce a procedure to recover β ⋆ al , the component of β ⋆ that aligns with the J 1 -indexed PCs.
ALGORITHM 2 (Alignment PCR). Given input (X, y) and J defined in (30), the alignment PCR procedure runs Algorithm 1 with K ← J and obtains θ pcr (J ). It then outputs the alignment PCR estimator It is easy to show that β al (J ) is a consistent estimator of β ⋆ al (we formalize this in Theorem 4.1, (a)). Left multiplication of θ pcr by O ⊤ J in (35) is interpreted as transforming θ pcr back to the standard basis. However, the resulting estimator β al incurs a shrinkage bias since it only recovers the β ⋆ al portion of the signal (see Theorem 4.1, (a)). Our discussion so far is standard in current PCR pipelines. However, to obtain asymptotically unbiased estimators for the entire signal β ⋆ , more work is required; in particular, debiasing this classical PCR estimator is critical. We achieve this below.
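The back-transformation in (35) and the resulting consistency for β ⋆ al can be seen on a toy instance where β ⋆ is deliberately planted along the top eigenvector (all names hypothetical; this is a sketch, not the paper's implementation):

```python
import numpy as np

def alignment_pcr(X, y, J):
    # Algorithm 2 sketch: classical PCR on J, mapped back to the standard basis
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    O_J = Vt[J, :]
    theta, *_ = np.linalg.lstsq(X @ O_J.T, y, rcond=None)
    return O_J.T @ theta           # beta_al(J): lies in the span of the J-indexed PCs

rng = np.random.default_rng(3)
n, p = 400, 100
X = rng.standard_normal((n, p))
_, _, Vt = np.linalg.svd(X, full_matrices=False)
beta = 5.0 * Vt[0, :]              # plant the signal along the top eigenvector o_1
y = X @ beta + 0.1 * rng.standard_normal(n)
beta_al = alignment_pcr(X, y, [0])
rel_err = np.linalg.norm(beta_al - beta) / np.linalg.norm(beta)
```

Because the signal here lies entirely in the J-indexed span, β al recovers it to within the noise level; any component of β ⋆ outside that span would be shrunk to zero, which is the shrinkage bias discussed above.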
Complement PCR.
In the last section, we developed β al (J ) to estimate the alignment component β ⋆ al . We now leverage our Spectrum-Aware debiasing theory to devise a modified PCR procedure, which we call the complement PCR, to estimate the remaining component ζ ⋆ . Let us denote byJ the complement index set in (36). Note thatJ indexes all "discarded" PCs that are not used to constructβ al (J ). The complement PCR procedure is described below.
ALGORITHM 3 (Complement PCR). Given (X, y), a penalty function h, andJ from (36), the complement PCR runs Algorithm 1 with K ←J and obtainsθ pcr (J ). It then constructs the new data (In practice, one would choose K after the SVD. Note also that Q ∈ R n×n , O ∈ R p×p and D ∈ R n×p .)
Algorithm 2 PCR-Spectrum-Aware debiasing
Input: Response and design (y, X), a penalty function h, and an index set of PCs J ⊂ N (see (30)). 1: Conduct the eigen-decomposition (22) and compute the complement PCR estimator (23), where DJ ∈ R n×(N −J) consists of the columns of D indexed byJ and OJ ∈ R (N −J)×p consists of the rows of O indexed byJ . It then runs the Spectrum-Aware debiasing procedure of Algorithm 1 with respect to the new data and outputs the complement PCR estimator and the associated variance estimatorτ * ←τ * (X new , y new , h), as well as the noise level estimatorσ 2 ←σ 2 (X new , y new , h) from (51).
We establish the asymptotic behavior ofβ co (J ) in Theorem 4.1, (b). In particular,β co (J ) is a debiased estimator of ζ ⋆ in a suitable asymptotic sense.
Debiased PCR.
In the last two sections, we developed the alignment PCR procedure to estimate β ⋆ al and the complement PCR procedure to estimate ζ ⋆ . It is then natural to combine them to form a debiased estimator for β ⋆ . ALGORITHM 4 (Debiased PCR). Given input (X, y), a penalty function h, and J in (30), the debiased PCR runs Algorithm 2 to obtainθ pcr (J ) andβ al (J ). It then runs Algorithm 3 to obtainβ co (J ) and the variance estimatorτ * . The procedure then outputs the debiased PCR estimator (38)β u pcr ←β al (J ) +β co (J ) along withτ * . We summarize the entire procedure in Algorithm 2 for clarity.
4.3. Asymptotic normality. We now state the asymptotic properties of the debiased PCR procedure. The proof of the theorem below is deferred to Appendix D.
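The combination in (38) can be sketched as a skeleton. The Spectrum-Aware debiasing routine itself is not reproduced here, so the sketch treats the debiaser as a pluggable function (a plain least-squares stand-in is used below, emphatically not the paper's method), and it adopts one plausible reading of the elided new-data display of Algorithm 3: X new = Q ⊤ DJ OJ and y new with the fitted J -component removed.

```python
import numpy as np

def debiased_pcr_sketch(X, y, J, debias):
    """Algorithm 4 skeleton: beta_u_pcr = beta_al(J) + beta_co(J)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    J_bar = [i for i in range(len(s)) if i not in J]   # discarded PCs
    O_J, O_Jbar = Vt[J, :], Vt[J_bar, :]
    theta, *_ = np.linalg.lstsq(X @ O_J.T, y, rcond=None)
    beta_al = O_J.T @ theta                            # alignment PCR estimator
    X_new = (U[:, J_bar] * s[J_bar]) @ O_Jbar          # data built from discarded PCs
    y_new = y - (X @ O_J.T) @ theta                    # remove the fitted J-component
    beta_co = debias(X_new, y_new)                     # stands in for Spectrum-Aware debiasing
    return beta_al + beta_co

def ols_debias(Xn, yn):
    # toy stand-in for the debiaser: minimum-norm least squares on the new data
    b, *_ = np.linalg.lstsq(Xn, yn, rcond=None)
    return b

rng = np.random.default_rng(4)
n, p = 500, 50
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
y = X @ beta + 0.1 * rng.standard_normal(n)
beta_hat = debiased_pcr_sketch(X, y, [0, 1], ols_debias)
```

In this well-conditioned toy regime the two components add back up to the full signal; in the high-dimensional regime of the paper the least-squares stand-in must be replaced by the Spectrum-Aware debiaser for the asymptotic normality of Section 4.3 to hold.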
The proof of Corollary 4.3 below is deferred to Appendix D.
We test our claim (42) under two sets of design matrices. The first set, presented in Figure 4 (see associated QQ plot in Figure 9), represents more challenging variants of the designs showcased in Figure 1. These designs contain heightened levels of correlation, heterogeneity, or a combination of both (see Remark F.3 for a detailed comparison). Furthermore, they contain outlier eigenvalues and we construct these experiments so that β ⋆ aligns with the top eigenvectors. The second set of design matrices, presented in Figure 5 (associated QQ plot in Figure 10), presents debiasing results for designs taken from real data from five different domains-audio features, genetic data, image data, financial returns, and socio-economic data.
Observe that across the board, PCR-Spectrum-Aware debiasing outperforms other debiasing methods, with histograms of the empirical distribution of pivotal quantities aligning with the standard Gaussian pdf exceptionally well. Also observe that enhancing degrees-of-freedom debiasing with the PCR methodology (this amounts to substituting Spectrum-Aware debiasing in Algorithm 3 with degrees-of-freedom debiasing) yields improvements over the vanilla degrees-of-freedom debiasing. However, the empirical distributions deviate significantly from a standard Gaussian even with this version of degrees-of-freedom debiasing, whereas our PCR-Spectrum-Aware approach continues to work well.
4.4.
Inference. In this section, we discuss inference questions surrounding υ ⋆ , ζ ⋆ and β ⋆ . For any a, b ∈ R such that Φ(b) − Φ(a) = 1 − α, define the confidence intervals for β ⋆ i to be (44) CI and those for υ ⋆ i , ζ ⋆ i respectively to be For the null hypotheses H υ ⋆ i,0 : υ ⋆ i = 0 and H ζ ⋆ i,0 : ζ ⋆ i = 0, we define the p-values and decision rules respectively as (46) where P i (·, ·) and T i (·, ·) are defined as in (26). Recall from the definition of υ ⋆ in (31) that if υ ⋆ = 0, then β ⋆ does not align with any eigenvector of the sample covariance matrix. Thus, rejection of the null hypothesis H υ ⋆ i,0 for any i indicates alignment of β ⋆ with the PC indexed by J (i), and the aforementioned hypothesis testing procedure provides an educated guess for the PCs in J that align with β ⋆ . Since the p-values P i (θ pcr , ·) are asymptotically independent (second result in (40)), the Benjamini-Hochberg procedure [12] may be used to control the False Discovery Rate (FDR), i.e. the ratio of PCs falsely identified to align with β ⋆ out of all PCs identified to align with β ⋆ . We will showcase an application of this idea to real data designs later in this section. To the authors' knowledge, this is the first such test of alignment in the context of high-dimensional linear regression.
[Figure 4 caption (fragment): V, W are drawn randomly from Haar matrices of dimensions n, p respectively with 3 columns retained, and R = diag(500, 250, 50); the penalty h used in the complement PCR steps is identical to that used in Figure 1. When applying PCR-Spectrum-Aware debiasing, we set J to be the top 10, 10, 20, 40, 40 PCs for the five designs, respectively. See the corresponding QQ plot in Figure 9.]
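The Benjamini-Hochberg step referenced above, applied to a toy vector of alignment p-values; this is the standard BH step-up rule, shown only to make the FDR-control step concrete:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Standard BH step-up procedure: returns a boolean rejection mask.

    Intended use here: applied to the (asymptotically independent) alignment
    p-values to control the FDR of the set of PCs declared to align with beta*."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m          # BH thresholds alpha*k/m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = int(np.max(np.nonzero(below)[0]))         # largest index passing step-up
        reject[order[:k + 1]] = True
    return reject

# toy example: 3 strongly "aligned" PCs among 10 tested
pvals = np.array([1e-6, 2e-5, 1e-4, 0.2, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
rej = benjamini_hochberg(pvals, alpha=0.05)
```

Here the three small p-values pass the step-up thresholds 0.005, 0.01 and 0.015 and are rejected, while the remaining seven are retained.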
The following is a straightforward Corollary of Theorem 4.1.
COROLLARY 4.5. Suppose that Assumptions 4-8 hold. The asymptotic FCP of CI i β u pcr,i ,τ * and CI i β co,i ,τ * converges to α almost surely as p → ∞. If ζ ⋆ is exchangeable, we also have that converges to α. Without requiring exchangeability of ζ ⋆ , for any fixed i such that υ ⋆ i = 0, we have that almost surely Meanwhile, for the hypothesis tests of (ζ ⋆ i ) p i=1 , Corollary 3.8 (a) holds for T i β co,i ,τ * with C ⋆ in place of B ⋆ .
We now show numerical experiments demonstrating the results above. Figures 6 and 7 plot the false coverage proportion (FCP) of the confidence intervals for β ⋆ i under the designs from Figures 4 and 5 respectively. These figures demonstrate that the FCP of the PCR-Spectrum-Aware debiasing method matches the intended α values across all the designs exceptionally well, while other methods fall short. We refer the reader to Appendix F.3.2 for inference results on ζ ⋆ . We report the adjusted p-values of the alignment tests for the designs of Figure 5 in Table 1 (see also Remark 4.4). Observe that the tests are not rejected at either level 0.01 or 0.05 for the first two columns, suggesting the absence of strong alignment between the signal and the top 5 eigenvectors of X ⊤ X for the first two designs (Speech and DNA). But this is not the case for the last three designs (SP500, FaceImage and Crime). This matches our observation that the vanilla Spectrum-Aware debiasing performs well under the first two designs but fails when applied to the last three designs. We refer the reader to Appendix F.3.1 for the adjusted p-values under the setting of Figure 4.
Proof outline and novelties.
Our main result relies on three main steps: (i) a characterization of the empirical distribution of a population version ofβ; (ii) connecting this population version with our data-driven Spectrum-Aware estimator; (iii) developing a consistent estimator of the asymptotic variance. We next describe our main technical novelties for step (i) in Section 5.1, and those for steps (ii) and (iii) in Section 5.2.
Here, r * can be interpreted as the population version of the debiased estimatorβ u and r * * as an auxiliary quantity that arises in the intermediate steps in our proof. The following theorem characterizes the empirical distribution of the entries ofβ and r * . We prove it in Appendix B.1.
where Z ∼ N (0, 1) is independent of B ⋆ . Furthermore, almost surely as p → ∞
Result B: Consistent estimation of fixed points. Note that the population debiased estimator r * cannot be used to conduct inference since γ * is unknown. Furthermore, the previous theorem says, roughly, that r * − β ⋆ behaves as a centered Gaussian with variance τ * , without providing any estimator for τ * . We address these two points here. In particular, we will see that addressing these points amounts to establishing consistent estimators for the solutions of the fixed point equations defined in (19). The theorem below shows that ( adj,η * ,τ * ,τ * * ) from (23) serve as consistent estimators of the fixed points (γ * , η * , τ * , τ * * ), andβ u ,r * * as consistent estimators of r * and r * * , wherer * * is defined as in (50) below. For the purpose of the discussion below, we note thatτ * * from (23) can be written as follows. Furthermore, recall that when the noise level σ 2 is unknown, one requires an estimator of σ 2 to calculateτ * ,τ * * in (23). We define such an estimator below and show that it estimates σ 2 consistently.
Note this is well-defined when In particular, the LHS of (52) consistently estimates the LHS of (20) in Assumption 7.
THEOREM 5.2 (Consistent estimation of fixed points). Suppose that Assumptions 2-7 hold. Then, the estimators in (23) and (50) are well-defined for any p, and we have that almost surely as p → ∞, We prove this theorem in Appendix B.2. It is not hard to see that Theorem 5.1 combined with Theorem 5.2 proves our main result, Theorem 3.1.
Proof of result A.
In this section, we discuss our proof novelties for Theorem 5.1. Appendix B.1 contains this proof.
We base our proof on the approximate message passing (AMP) machinery (cf. [24,97,79,35,67] for a non-exhaustive list of references). In this approach, one constructs an AMP algorithm in terms of the fixed points (η * , γ * , τ * , τ * * in our case) and shows that its iteratesv t converge to the object of interestv (v can beβ or r * in our case) in the following sense: almost surely AMP theory provides a precise characterization of the following limit involving the algorithmic iterates for any fixed t: where v 0 is usually a suitable function of β ⋆ around which one expectsv to be centered. Plugging this into (53) then yields properties of the object of interestv. Within this theory, the framework that characterizes (54) is known as state evolution [7,52]. Despite the existence of this solid machinery, (53) requires a case-by-case proof, and for many settings this presents deep challenges. We use the above algorithmic proof strategy, but in the case of our right-rotationally invariant designs, the original AMP algorithms fail to apply. To alleviate this, [73] proposed vector approximate message passing (VAMP) algorithms. We use these algorithms to create ourv t 's.
Subsequently, proving (53) presents the main challenge. To this end, one is required to show the following Cauchy convergence property of the VAMP iterates: almost surely, We prove this using a Banach contraction argument. Such an argument saw prior usage in the context of Bayes optimal learning in [60]. However, that work studied a "matched" problem where the signal prior (analogous to B ⋆ in our setting) is known to the statistician, who uses this exact prior during the estimation process. Arguments under such matched Bayes optimal problems do not translate to our case, and proving the contraction presents novel difficulties in our setting. To mitigate this, we leverage a fundamental property of the R-transform, specifically that −zR ′ (z)/R(z) < 1 for all z, and discover and utilize a crucial interplay of this property with the non-expansiveness of the proximal map (Proposition A.7 (b)).
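The non-expansiveness of the proximal map that the contraction argument relies on can be checked numerically. The sketch below does so for the separable penalty h = | · |, whose proximal map is soft thresholding; this is an illustrative special case, not the general statement of Proposition A.7 (b):

```python
import numpy as np

def prox_l1(x, v):
    # proximal map of x -> v * |x| : soft thresholding with level v
    return np.sign(x) * np.maximum(np.abs(x) - v, 0.0)

rng = np.random.default_rng(5)
x = rng.standard_normal(1000)
y = rng.standard_normal(1000)
v = 0.7
# non-expansiveness for a separable penalty: |prox(x) - prox(y)| <= |x - y| entrywise
violation = np.max(np.abs(prox_l1(x, v) - prox_l1(y, v)) - np.abs(x - y))
```

The maximal entrywise violation is nonpositive up to floating-point rounding, reflecting that soft thresholding is 1-Lipschitz.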
REMARK 5.1 (Comparison with [39,40]). In their seminal works, [39,40] initiated the first study of the risk of regularized estimators under right-rotationally invariant designs. They stated a version of Theorem 5.1 with a partially non-rigorous argument. In their approach, an auxiliary ℓ 2 penalty of sufficient magnitude is introduced to ensure contraction of the AMP iterates. Later, they remove this penalty through an analytic continuation argument. However, this proof suffers from two limitations. The first relates to the non-rigorous application of the AMP state evolution results. For instance, [40, Lemma 3] shows that for each fixed value of p, lim t→∞ ∥x t −x∥ 2 /p = 0. However, in [40, Proof of Lemma 4], the authors claim that this would imply (53) upon exchanging limits with respect to t and p. Such an exchange of limits is non-rigorous since the correctness of AMP state evolution is established for a finite number of iterations (t < T , T fixed) as p → ∞; the limit in T is taken after that in p. The other limitation lies in the analytic continuation approach, which requires multiple exchanges of limit operations [40, Appendix H] that seem difficult to justify and incurs intractable assumptions [40, Assumption 1 (c), (e)] (in particular, it is unclear how to verify the existence claim in Assumption 1 (c) beyond Gaussian designs). Our alternative approach establishes contraction without the need for a sufficiently large ℓ 2 -regularization component, as in [39,40], and thereby avoids the challenges associated with the analytic continuation argument.
Proof of result B.
In this section, we discuss the proof of Theorem 5.2. See Appendix B.2 for the proof details.
Recall that we have established Theorem 5.1, which shows that as p → ∞, almost surely, Combining (55) and (56), we expect that .
Using the definition of the R-transform, we can rewrite (19c) as η −1 * = E[1/(D 2 + η * − γ * )], which, along with (11), implies that Combining (57), (58) and eliminating η * , we obtain that Setting the ≈ above to an equality, we obtain our exact equation for the Spectrum-Aware adjustment factor, i.e. (22). One thus expects intuitively that adj consistently estimates γ * . To establish this consistency rigorously, we recognize and establish the monotonicity of the LHS of (59) as a function of γ * , and study its point-wise limit. We direct the reader to Lemma B.11 and Proposition B.13 for more details.
Once we have established the consistency of adj as an estimator for γ * , we substitute adj back into (57) to obtain a consistent estimatorη * for η * . It is important to note that the definition of r * * , as given in (47), only involves the fixed points η * and γ * . As a result, we can utilize adj andη * to produce a consistent estimatorr * * for r * * . Now, note that (49) gives us a system of linear equations The estimators (τ * * ,σ 2 ) in (23) for (τ * * , σ 2 ) are obtained by solving the two linear equations above, with the 2-by-2 matrix on the RHS replaced by its sample version. Note that (20) is required to ensure that the 2-by-2 matrix is non-singular. Now, with estimators for γ * , η * , σ 2 and τ * * , we can construct the estimatorτ * for τ * using (19d) and (11).
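The consistency argument hinges on the strict monotonicity of the left-hand side of the adjustment equation in γ, which in practice makes the equation solvable by bisection. The sketch below illustrates this on a stand-in increasing function built from a toy spectrum; it mimics the monotone structure exploited in the proof, not the paper's exact g p :

```python
import numpy as np

def solve_increasing(g, target, lo, hi, tol=1e-10):
    """Bisection for g(gamma) = target, assuming g is strictly increasing on (lo, hi)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# stand-in strictly increasing map built from a spectrum d_i^2 (illustrative only)
d2 = np.linspace(0.5, 4.0, 200)
g = lambda gamma: float(np.mean(d2 / (d2 + 1.0 / gamma)))
gamma_hat = solve_increasing(g, 0.5, 1e-6, 1e6)
```

Because g is strictly increasing, the bisection bracket shrinks to the unique root in roughly 50 iterations regardless of the spectrum.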
6. Discussion. We conclude our paper with a discussion of two main points. First, we clarify that our setting covers designs with i.i.d. Gaussian entries. On the other hand, although our method captures various kinds of dependence through the right-rotational invariance assumption, anisotropic Gaussian designs, where each row of X comes from N (0, Σ) for an arbitrary non-singular Σ, fall outside our purview (unless Σ is right-rotationally invariant). Moreover, in contrast with [11], our Spectrum-Aware adjustment (3) does not apply directly to non-separable penalties, e.g. SLOPE, the group Lasso, etc. Nonetheless, we note that the current framework can be expanded to address both these issues. In Appendix E, we suggest a debiased estimator for "ellipsoidal designs" X = Q ⊤ DOΣ 1/2 and non-separable convex penalties. We also conjecture its asymptotic normality using the non-separable VAMP formalism [36]. We leave a detailed study of this extensive class of estimators to future work.
We discuss another potential direction of extension: relaxing the exchangeability assumption in Corollaries 3.7 and 4.3, which establish inference guarantees on finite-dimensional marginals. One may raise a related question, that of constructing confidence intervals for a ⊤ β ⋆ for a given choice of a. Under Gaussian design assumptions, such guarantees were obtained using the leave-one-out method as in [17, Section 4.6] or Stein's method as in [11], without requiring the exchangeability assumption (at the cost of other assumptions on β ⋆ and/or Σ). Unfortunately, these arguments no longer apply under right-rotationally invariant designs owing to the presence of a global dependence structure. Thus, establishing such guarantees without exchangeability can serve as an exciting direction for future research.
APPENDIX A: PRELIMINARY
A.1. Empirical Wasserstein-2 convergence. We will use the following fact below. See [31, Appendix E] and the references therein for its justification.
holds for every function f : R k → R satisfying, for some constant C > 0, the pseudo-Lipschitz condition where g(·) is applied row-wise to V.
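Empirical Wasserstein-2 convergence can be illustrated numerically: on the real line, the optimal W 2 coupling between two equal-size empirical distributions matches sorted samples, and the distance between two samples of the same law shrinks as the sample size grows. This is an illustrative check, separate from the formal statement of Proposition A.1:

```python
import numpy as np

def empirical_w2(u, v):
    """W2 distance between two equal-size empirical distributions on the line:
    the optimal coupling pairs the order statistics."""
    return np.sqrt(np.mean((np.sort(u) - np.sort(v)) ** 2))

rng = np.random.default_rng(6)
d_small = empirical_w2(rng.standard_normal(100), rng.standard_normal(100))
d_large = empirical_w2(rng.standard_normal(100000), rng.standard_normal(100000))
```

The distance for the larger samples is an order of magnitude smaller, consistent with the empirical laws converging to the common population law.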
PROOF OF LEMMA 2.7. Under Assumption 4, for any v > 0, x → Prox vh (x) is continuous, monotone increasing in x, and continuously differentiable at any x such that This follows from the assumption that h(x) is twice continuously differentiable on D c and the implicit differentiation calculation shown in [39, Appendix B1]. For x ∈ {x : Prox vh (x) ∈ D}, Prox vh (x) is differentiable with derivative equal to 0 except on a finite set of points. To see this, note that the preimage Prox −1 vh (y) for y ∈ D is either a singleton set or a closed interval of the form [x 1 , x 2 ] for x 1 ∈ R ∪ {−∞}, x 2 ∈ R ∪ {+∞} and x 1 < x 2 , using the continuity and monotonicity of x → Prox vh (x). This implies that {x : Prox vh (x) ∈ D} is a union of a finite number of singleton sets and a finite number of closed intervals. Furthermore, Prox vh (x) is constant on each of the closed intervals. It follows that Prox vh (x) is differentiable with derivative equal to 0 on the interiors of the closed intervals, and that C is the union of some of the singleton sets and all of the finite-valued endpoints of the closed intervals.
We extend the functions h ′′ (x) and Prox ′ vh (x) to D and C respectively in the following way: We show that it is impossible to have some y 0 ∈ D such that Prox −1 vh (y 0 ) is a singleton set {x 0 } and x → Prox vh (x) is differentiable at x 0 with non-zero derivative. This means that all y ∈ D belong to cases (i), (ii) and (iii) above. Suppose to the contrary that such y 0 and x 0 exist. We know from the above discussion that there exists some e > 0 such that Prox ′ vh (x) is continuous on (x 0 , x 0 + e) and (x 0 − e, x 0 ). We claim that x → Prox ′ vh (x) is continuous at x 0 . To see this, note that for any ∆ > 0, we can find ε ∈ (0, e) such that • there exists some x + ∈ (x 0 , x 0 + ε) such that for any x ∈ (x 0 , x 0 + ε), Then for any x ∈ (x 0 − ε, x 0 + ε), we have |Prox ′ vh (x 0 ) − Prox ′ vh (x)| < ∆ by the triangle inequality. This proves the claim. Now, since x → Prox vh (x) is continuously differentiable on (x 0 − e, x 0 + e) and Prox ′ vh (x 0 ) ̸ = 0, the inverse function theorem implies that y → Prox −1 vh (y) is a well-defined, real-valued function that is continuously differentiable on some open interval U containing y 0 . This implies that h is differentiable at any y ∈ U and that y → Prox −1 vh (y) = y + vh ′ (y) is continuously differentiable. But this would imply that h is twice continuously differentiable on U , which contradicts the assumption that y 0 ∈ D.
Note that we have assigned +∞ to h ′′ on D and 0 to Prox ′ vh on C. The piecewise continuity of x → 1/(w + h ′′ (Prox vh (x))) for any w > 0 follows from the discussion above.
A.3. Properties of the R- and Cauchy transforms. The following shows that the Cauchy and R-transforms of −D 2 are well-defined by (13), and reviews their properties. LEMMA A.8. Let G(·) and R(·) be the Cauchy and R-transforms of −D 2 under Assumption 2.
(f) For all sufficiently small z ∈ (0, G(−d − )), the R-transform admits a convergent series expansion given by LEMMA A.9. If the empirical distribution of the eigenvalues of X ⊤ X converges weakly to the Marchenko-Pastur law, then adj −ȃ dj → 0.
PROOF OF LEMMA A.9. By weak convergence, where z → G(z) is the Cauchy transform of the Marchenko-Pastur law. Then we have that Observe that the limiting values of adj andȃ dj above are equal if and only if the following holds Here, (64) indeed holds true since G(−λ 2 ) is one of the roots of the quadratic equation (64). This follows by referencing the explicit expression of the Cauchy transform of the Marchenko-Pastur law. which also violates Assumption 6. The inequalities in the second line follow immediately from (15) and the fact that γ * > 0.
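The Cauchy transform computations above can be checked numerically. The display (13) is not reproduced in this extraction, so we take the standard form G(z) = E[1/(−D 2 − z)] for the law of −D 2 as an assumption; on z > 0, outside the support of −D 2 , G is then negative and strictly increasing:

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 600, 400
X = rng.standard_normal((n, p)) / np.sqrt(n)
d2 = np.linalg.eigvalsh(X.T @ X)       # eigenvalues d_i^2 >= 0 (Marchenko-Pastur-type bulk)

def cauchy_neg_d2(z):
    # empirical Cauchy transform of the law of -D^2: G(z) = mean(1 / (-d^2 - z))
    return float(np.mean(1.0 / (-d2 - z)))

zs = np.linspace(0.1, 5.0, 50)
G = np.array([cauchy_neg_d2(z) for z in zs])
```

Each summand −1/(d 2 + z) is negative and increasing in z, which is what the assertions below verify on the grid.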
A.5.2. Uniqueness of fixed points given existence. Suppose that Assumptions 2-6 hold. Our proofs of Theorem 5.1 and Theorem 5.2 do not require (γ * , η * , τ * , τ * * ) to be the unique solution of (19), only that it is one of the solutions. However, if there were two different solutions of (19), this would lead to a contradiction with Theorem 5.2. More concretely, suppose that there exist two different solutions x (1) and x (2) of (19). By Theorem 5.2, ( adj,η * ,τ * ,τ * * ) would converge almost surely to both x (1) and x (2) , hence the contradiction.
APPENDIX B: PROOF OF ASYMPTOTIC NORMALITY
B.1. Proof of result A: distribution characterization. In this section, we prove Theorem 5.1 using the VAMP algorithm as a proof device. We define the version of the VAMP algorithm we will use in Appendix B.1.1, prove Cauchy convergence of its iterates in Appendix B.1.2, and prove Theorem 5.1 in Appendix B.1.3. To streamline the presentation, proofs of intermediate claims are collected in Appendix B.3. We also assume without loss of generality that σ 2 = 1 for the remainder of this section. The general case of arbitrary σ 2 > 0 follows from a simple rescaling argument.
B.1.1. The oracle VAMP algorithm. We review the oracle VAMP algorithm defined in [39] and present an extended state evolution result for the algorithm. This algorithm is obtained by initializing the VAMP algorithm introduced in [73] at stationarity: r 10 = β ⋆ + N (0, τ * I p ), γ −1 10 = γ −1 * . See Appendix A.6 for a review. Then for t ≥ 1, we have the iterates Note that for any fixed x, F ′ (q, x) equals the derivative of q → F (q, x) whenever the derivative exists, and at the finitely many points where q → F (q, x) is not differentiable, F ′ (q, x) equals 0 (cf. Lemma 2.7). We also define the quantities (77) We note some important properties of these quantities, which are essentially consequences of Assumption 2 and (19). We defer the proof to Appendix B.3. One can show that, by eliminatingx 1t ,x 2t and introducing the change of variables (79) x t = r 2t − β ⋆ , y t = r 1t − β ⋆ − e, s t = Ox t , (75) is equivalent to the following iterations: with initialization q 0 ∼ N (0, τ * · I p ). The following proposition will be needed later. Its proof is deferred to Appendix B.3.1.
REMARK B.4. In the case where Prox ′ γ −1 * h (x) is constant in x (e.g. for the ridge penalty), the iterates converge in one iteration and the above result holds for t ≤ 1.
The proof of the following corollary is deferred to Appendix B.3.1. COROLLARY B.5. Under Assumptions 2-6, almost surely as p, n → ∞ Furthermore, almost surely as p, n → ∞, We can then obtain the convergence of the vector iterates of the oracle VAMP algorithm. We defer the proof to Appendix B.3.1. where the inner limits exist almost surely for each fixed t.
Combining Proposition B.8 and Corollary B.5 yields the proof of Theorem 5.1.
PROOF OF THEOREM 5.1. We prove (48) first. Fix a function ψ : R 3 → R satisfying, for some constant C > 0, the pseudo-Lipschitz condition For any fixed t, we have where (⋆) is by the Cauchy-Schwarz inequality. This, along with Proposition B.8, Assumption 3 and Corollary B.5, implies that Using Corollary B.5 and Proposition A.1, we have that By the triangle inequality, we also have Taking p and then t to infinity on both sides of the above, by (84) and (85), where we used the fact that the LHS does not depend on t. An application of Proposition A.1 with p = 2, k = 3 completes the proof of (48).
To see the first result in (49), note that (c) Given that ∥h ′′ (β)∥ 0 = p or that d i ̸ = 0 for all i, by which g p is well-defined from (a), (87) has a unique solution if and only if there exists some j ∈ [p] such that h ′′ (β j ) ̸ = +∞. The following assumption is made to simplify the conditions outlined in Lemma B.9.
ASSUMPTION 9. Fix p ≥ 1 and suppose that Assumption 4 holds. If ∥h ′′ (β)∥ 0 = p or X ⊤ X is non-singular, we require only that there exists some i ∈ [p] such that h ′′ (β i ) ̸ = +∞. Otherwise, we require in addition that The following is a direct consequence of Lemma B.9, which in turn has Proposition 3.1 as a special case. PROPOSITION B.10. Fix p ≥ 1 and suppose that Assumption 4 holds. Then, Assumption 9 holds if and only if the function γ → g p (γ) is well-defined for any γ > 0, strictly increasing, and the equation (87) admits a unique solution contained in (0, +∞).
B.2.2. Population limit of the adjustment equation.
From now on, we use the notation U for the following random variable Define g ∞ : (0, +∞) → R by which is well-defined under Assumptions 4 and 6, as shown in Lemma B.11 below. We defer its proof to Appendix B.4.2.
We can show that the LHS of the sample adjustment equation converges to the LHS of the population adjustment equation. We defer its proof to Appendix B.4.2.
B.2.3. Consistent estimation of fixed points.
We are now ready to prove Theorem 5.2, which shows that the quantities defined in (23) indeed converge to their population counterparts.
Noting that ∆ t is the upper-left submatrix of ∆ t+1 , let us write ∆ t+1 = [∆ t , δ t ; δ ⊤ t , δ * ]. We now show by induction on t the following three statements: where U t , U ′ t are Gaussian variables with strictly positive variance, independent of H, (Y 1 , . . . , Y t−1 ), and (S 1 , . . . , S t−1 ).
We take as the base case t = 0, where the first two statements are vacuous, and the third statement requires (H, x 1 ) W2 → (H, X 1 ) almost surely as p → ∞. Recall that x 1 = F (p 0 , β ⋆ ), and that F (p, β) is Lipschitz by Proposition B.1. Then this third statement follows from Propositions B.2 and A.3.
Supposing that these statements hold for some t ≥ 0, we now show that they hold for t + 1. To show the first statement ∆ t+1 ≻ 0, note that for t = 0 this follows from ∆ 1 = δ * > 0 by Assumption 6. For t ≥ 1, given that ∆ t ≻ 0, ∆ t+1 is singular if and only if there exist constants α 1 , . . . , α t ∈ R such that . . , Y t−1 and hence also of E, B ⋆ , X 1 , ..., X t . We now show that for any realized values (e 0 , x 0 , w 0 ) of This would imply that ∆ t+1 ≻ 0. Suppose to the contrary; we then have that Since U t is Gaussian with strictly positive variance, the above implies that the function is constant almost everywhere. This in turn is equivalent to Prox γ −1 * h (u) = C + γ * η * u almost everywhere for some constant C ∈ R, by a change of variables. Noting that u → Prox γ −1 * h (u) is continuous, we thus have that Prox γ −1 * h (u) = C + γ * η * u for all u ∈ R. This implies that Prox γ −1 * h (u) is continuously differentiable with constant derivative γ * η * , which contradicts the assumption that x → Prox ′ γ −1 * h (x) is non-constant. We have thus proved the first inductive statement that ∆ t+1 ≻ 0.
PROOF OF PROPOSITION B.8. We assume that c 0 > 0; the proof for the case where d − > 0 is completely analogous. From strong convexity of the penalty function, almost surely, for all sufficiently large p, . By the Cauchy-Schwarz inequality, we have that Note that the denominators of both terms in (111) are non-zero (and thus g ∞ is well-defined) if γ+U . Meanwhile we have that Note that P(U ̸ = 0) > 0 if and only if P(U ̸ = 0 and U ̸ = +∞) > 0 or P(U = +∞) > 0. This shows that (112) holds and thus g ∞ is well-defined, since P(U ̸ = 0) > 0 by Lemma A.10. It follows from (19a), (19c) and (15) that γ * is a solution of the equation g ∞ (γ) = 1. We prove that γ * is the unique solution by showing that g ∞ is strictly increasing. Applying [86, Proposition A.2.1], we obtain that g ∞ is differentiable and can be differentiated inside the expectation as follows and the above will be positive. Also note that if P(U = 0) > 0, then I D 2 > 0 D 2 1 γ 2 P(U = 0) > 0 with positive probability and the above will be positive. Note that P(U ̸ = +∞) > 0 if and only if P(U ̸ = 0 and U ̸ = +∞) > 0 or P(U = 0) > 0. Therefore, the positivity of g ′ ∞ (γ) follows from P(U ̸ = +∞) > 0, which holds by Lemma A.10. The proof is now complete.
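The uniqueness argument above — a continuous, strictly increasing g∞ crosses the level 1 exactly once — also suggests how such a fixed point can be located in practice. The sketch below uses a toy strictly increasing function `g_toy` as a stand-in for g∞ (an assumption for illustration, not the paper's expectation formula) and solves g(γ) = 1 by bisection.

```python
def solve_fixed_point(g, lo=1e-6, hi=1e6, tol=1e-10):
    """Bisection for the unique root of g(gamma) = 1 on (lo, hi),
    assuming g is continuous and strictly increasing."""
    assert g(lo) < 1.0 < g(hi), "root must be bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy strictly increasing surrogate for g_infinity (assumption for illustration).
g_toy = lambda gamma: gamma / (1.0 + gamma) + 0.6 * gamma

gamma_star = solve_fixed_point(g_toy)
```

Strict monotonicity is what makes the bracketing assertion and the bisection valid: any two roots of g(γ) = 1 would contradict g being strictly increasing.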
(25) then follows from consistency of τ * and adj (cf. Theorem 5.2) and Slutsky's theorem.

Let U ∈ R p×p denote a permutation operator drawn uniformly at random, independent of β ⋆ , X, ε. We have that where we use L = to denote equality in law. Note that β = argmin where h applies entry-wise to its argument. The above then implies (116) Uβ, XU ⊤ , Uβ ⋆ , ε = argmin

APPENDIX F: NUMERICAL EXPERIMENTS

F.1. Details of the design matrices used in numerical experiments. Throughout the paper, we have illustrated our findings using different design matrices. We provide additional details in this section.

REMARK F.1 (Notations used in captions). We use InverseWishart(Ψ, ν) to denote the inverse-Wishart distribution [91] with scale matrix Ψ and degrees-of-freedom ν, and Multt(ν, Ψ) to denote the multivariate-t distribution [92] with location 0, scale matrix Ψ, and degrees-of-freedom ν.
REMARK F.2 (Right-rotationally invariant). All design matrices in Figures 1 and 4 satisfy X L = XO for O ∼ Haar(O(p)) independent of X. It is easy to verify that this is equivalent to right-rotational invariance as defined in Definition 2.1.

REMARK F.3 (Comparison between designs in Figure 1 and Figure 4). The designs featured in Figure 4 can be seen as more challenging variants of the designs in Figure 1, characterized by heightened levels of correlation, heterogeneity, or both.
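The right-rotational invariance of Remark F.2 can be simulated directly: a Haar-distributed orthogonal matrix is obtained from the QR decomposition of a Gaussian matrix, with a sign correction on the columns so the distribution is exactly Haar. The dimensions below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
p = 30

def haar_orthogonal(p):
    """Sample O ~ Haar(O(p)) via QR of a standard Gaussian matrix."""
    G = rng.standard_normal((p, p))
    Q, R = np.linalg.qr(G)
    return Q * np.sign(np.diag(R))   # fix column signs to get the Haar measure

O = haar_orthogonal(p)
X = rng.standard_normal((10, p))
X_rot = X @ O   # under right-rotational invariance, same law as X
```

Right-multiplication by an orthogonal O preserves, e.g., the Frobenius norm and singular values of X, consistent with X and XO having the same law for these designs.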
Specifically, Σ (col) under MatrixNormal-B has a higher correlation coefficient (0.9) compared to the correlation coefficient (0.5) in MatrixNormal. This results in a stronger dependence among the rows of the matrix X. Concurrently, the Σ (row) in MatrixNormal-B is sampled from an inverse-Wishart distribution with fewer degrees of freedom, leading to a more significant deviation from the identity matrix compared to the MatrixNormal design presented in Figure 1.
In Spiked-B, there are three significantly larger spikes when compared to Spiked in Figure 1, which contains 50 spikes of smaller magnitudes. Consequently, issues related to alignment and outlier eigenvalues are much more pronounced in the case of Spiked-B.
The design under LNN is a product of four independent isotropic Gaussian matrices, whereas LNN-B contains the 20th power of the same X 1 . The latter scenario presents a greater challenge for DF or SA debiasing, primarily because the exponentiation step leads to the emergence of eigenvalue outliers.
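The spectral effect of powering versus multiplying independent factors can be seen in a small simulation. The construction below is a hedged sketch of our reading of the two designs (sizes and normalization are assumptions, not the paper's exact recipe): it contrasts the singular-value spread of a product of four independent Gaussian factors with that of the 20th power of a single factor.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
factors = [rng.standard_normal((n, n)) / np.sqrt(n) for _ in range(4)]

X_prod = factors[0] @ factors[1] @ factors[2] @ factors[3]   # LNN-style product
X_pow = np.linalg.matrix_power(factors[0], 20)               # LNN-B-style power

def spread(X):
    """Ratio of largest to median singular value: a crude outlier measure."""
    s = np.linalg.svd(X, compute_uv=False)
    return s.max() / np.median(s)
```

Powering the same factor inflates the dynamic range of the spectrum far more than multiplying independent factors does, which is the mechanism behind the eigenvalue outliers mentioned above.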
Larger auto-regressive coefficients are used in VAR-B, leading to stronger dependence across rows.
Sampling designs from MultiCauchy is equivalent to scaling each row of an isotropic Gaussian matrix by a Cauchy-distributed scalar. This results in substantial heterogeneity across rows, with some rows exhibiting significantly larger magnitudes than others.

DEFINITION F.4 (Designs from the real dataset). Without loss of generality, all designs below are re-scaled so that the average of the eigenvalues of X ⊤ X is 1.
(i) Speech: 200 × 400, with each row being the i-vector (see e.g. [51]) of a speech segment of an English speaker. We imported this dataset from the OpenML repository [93] (ID: 40910) and retained only the last 200 rows of the original design matrix. The original dataset is published in [41].
(ii) DNA: 100 × 180 entries, with each row being a one-hot representation of primate splice-junction gene sequences (DNA). We imported this dataset from the OpenML repository [94] (ID: 40670) and retained only the last 100 rows of the original design matrix. The original dataset is published in [69].
(iii) SP500: 300 × 496 entries, where each column represents a time series of daily stock returns (percentage change) for a company listed in the S&P 500 index. These time series span 300 trading days, ending on January 1, 2023. We imported this dataset from the Yahoo finance API [95].
(iv) FaceImage: 1348 × 2914 entries, where each row corresponds to a JPEG image of a single face. We imported this dataset from the scikit-learn package, using the handle sklearn.datasets.fetch_lfw_people [96]. The original dataset is published in [48].
(v) Crime: 50 × 99 entries, where each column corresponds to a socio-economic metric in the UCI communities and crime dataset [74]. Only the last 50 rows of the dataset are retained.
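Two of the conventions above can be combined in one short sketch (our reading, with made-up sizes): generate a MultiCauchy-style design by scaling each row of an isotropic Gaussian matrix by an independent Cauchy scalar, then apply the rescaling of Definition F.4 so the eigenvalues of X⊤X average to 1. Since trace(X⊤X)/p = ‖X‖²_F/p, dividing X by ‖X‖_F/√p achieves this exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 100

# MultiCauchy-style design: heavy-tailed per-row scalars times Gaussian rows.
G = rng.standard_normal((n, p))
scales = rng.standard_cauchy(n)
X = scales[:, None] * G

# Rescaling convention: average eigenvalue of X^T X equals trace/p = ||X||_F^2 / p,
# so dividing by ||X||_F / sqrt(p) sets the average eigenvalue to 1.
X = X / (np.linalg.norm(X) / np.sqrt(p))
mean_eig = np.trace(X.T @ X) / p

row_norms = np.linalg.norm(X, axis=1)   # row heterogeneity survives the rescaling
```

The rescaling fixes the overall spectral scale, while the Cauchy scaling still leaves some rows with far larger norms than others, which is the heterogeneity the MultiCauchy design is meant to exhibit.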
F.2. QQ plots.

F.3. Inference for debiased PCR. We include here plots and tables for the inference procedures described in Section 4.4.
MatrixNormal-B Spiked-B LNN-B VAR-B MultiCauchy
υ ⋆ 1  0.00 **  0.00 **  0.00 **  0.00 **  0.00 **
υ ⋆

Figure 11 depicts the True Positive Rate (TPR) and False Positive Rate (FPR) of the hypothesis tests as outlined in (46), and the False Coverage Proportion (FCP) of confidence intervals as defined in (45) for ζ ⋆ i . These plots illustrate the changes in TPR, FPR, and FCP as we systematically vary the targeted FPR/FCP level α from 0 to 1.

FIG 11. Under the setting of Figure 4, we plot the True Positive Rate (TPR) and False Positive Rate (FPR) corresponding to hypothesis tests outlined in (46), and the False Coverage Proportion (FCP) of confidence intervals defined in (45). The x-axis spans α values from 0 to 1, while the y-axis ranges between 0 and 1. The dotted black line represents the 45-degree reference line. | 2023-09-15T06:42:25.108Z | 2023-09-14T00:00:00.000 | {
"year": 2023,
"sha1": "b1533c084ddc3afc38b206469d21b7dd7420be77",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "4907d6146409974742379f8cdea6319adeba2adf",
"s2fieldsofstudy": [
"Mathematics",
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
49208239 | pes2o/s2orc | v3-fos-license | MMP-3 in the peripheral serum as a biomarker of knee osteoarthritis, 40 years after open total knee meniscectomy
Background To explore potential biomarkers in a meniscectomy-induced knee osteoarthritis model, at forty years after meniscectomy. Methods We carried out a forty-year study of 53 patients who, as adolescents, underwent open total meniscectomy and assessed two potential synovial and serum biomarkers, namely glycosaminoglycan (GAG) and matrix metalloproteinase-3 (MMP-3). Of the 30 patients available for review, 8 had contralateral knee operations and were excluded. Of the remaining 22 patients, 17 had successful operated knee synovial fluid aspirations and 8 also had successful contralateral control knee aspirations. GAG and MMP-3 levels in the synovial fluid and peripheral serum were measured using Alcian blue precipitation and ELISA quantification, respectively. Patients also had their knee radiographs assessed and their radiographic osteoarthritis classified as per the Kellgren-Lawrence and Ahlbäck systems. Results At forty years after meniscectomy, synovial MMP-3 levels remain increased (p = 0.0132) while GAG levels were reduced (p = 0.0487) when compared to controls, and these two levels correlate inversely. Furthermore, levels of synovial MMP-3 significantly correlated (p = 0.0032, r = 0.7734; p = 0.0256, r = 0.5552) and GAG levels significantly inversely correlated (p = 0.0308, r = − 0.6220; p = 0.0135, r = − 0.6024), respectively, with both radiological scoring systems. Interestingly, we found that the levels of serum MMP-3 correlated only with the synovial fluid levels of MMP-3 in the operated knee and not with the non-operated joint (p = 0.0252, r = 0.7706 vs. p = 0.0764, r = 0.6576). Multiple regression analysis for patients' quality of life based on these biomarkers revealed an almost perfect result with an R2 of 0.9998 and a p value = 0.0087. Conclusion Our results suggest that serum levels of MMP-3 could be used as a potential biomarker for knee osteoarthritis, using a simple blood test.
Larger cohorts are desirable in order to prove or disprove this finding. Electronic supplementary material The online version of this article (10.1186/s40634-018-0132-x) contains supplementary material, which is available to authorized users.
Background
Osteoarthritis (OA), along with Alzheimer's disease, have been characterised as "high burden diseases with no curative treatments" by the World Health Organization (WHO) as they both lack an identifiable biomarker (Kaplan, 2004).
OA is a severely disabling disease which affects a third of the population over 50 years worldwide, resulting in major socio-economic burdens (Goldring et al., 2006;Kinds et al., 2011). Aging and mechanical injury to the joint are the primary causes of OA; however the symptoms associated with OA are often not displayed until months or years following a physical insult (Swiontkowski et al., 2000;Vincent & Saklatvala, 2008). Common knee injuries include direct post-traumatic injury to the articular cartilage, altered knee biomechanics from cruciate ligament rupture or meniscal loss or limb malalignment; resulting in disruption of joint biomechanics and the joints homeostatic pathways within the articular cartilage extracellular matrix (ECM), which in turn leads to irreversible cartilage destruction (Sherwood et al., 2014).
Despite the short-medium term relief from joint instability and discomfort, it has been suggested that partial or total meniscectomy (affecting 61/100,000 of the population per year) (Baker et al., 1985) has detrimental effects that eventually lead to symptomatic and radiological knee OA in up to 50% of patients within 5-15 years and increased risk of requiring knee arthroplasty in the future (Pengas et al., 2012a;Appel, 1970;Jorgensen et al., 1987;Hede et al., 1992).
Radiological assessment of joint osteoarthritis relies upon plain films as the gold standard investigation with high resolution Magnetic Resonance Imaging (MRI) assessment as a potential alternative (Eckstein et al., 2006). It is evident that these investigations demonstrate established disease, inadequate for early identification of the condition, which does not always correlate with patients' perception of symptoms or progression of the condition (Lohmander, 2004;Lawrence et al., 1966;Hannan et al., 2000). This may be attributed to the avascular and aneural nature of articular cartilage, suggesting that patients presenting with symptomatic osteoarthritis often have significant and irreparable cartilage damage, ultimately resulting in knee arthroplasty as a definitive solution.
In order to achieve an early and timely identification of the disease, patient reported outcome measures (PROMs) and biomarkers in isolation or in combination with radiographs have been suggested as possible diagnostic adjuncts. It would be invaluable to patients and healthcare systems if it were possible to screen high risk patients with a biomarker investigation. The success of biomarkers used in this context would be dependent on their ability to indicate the onset of this biological process and in theory respond to interventions in a timely manner (De Gruttola et al., 2001). Several molecules have been proposed as possible biomarkers of OA two of these being glycosaminoglycans (GAGs) and matrix metalloproteinases (MMPs).
Proteoglycans (Nagase & Matrix Metalloproteinases, 1999) are produced by chondrocytes as large hydrophilic negatively charged polysaccharides responsible for the compressive strength of articular cartilage. Increased levels of MMPs have been detected following knee injury and osteoarthritis, as these degradative enzymes play a key role in the proteolysis of cartilage matrix molecules, including proteoglycans (Nagase & Matrix Metalloproteinases, 1999;Lohmander et al., 1994;Lohmander et al., 1999). Proteolysis of proteoglycans, releasing glycosaminoglycan-containing fragments (GAGs), is an early and critical feature of cartilage breakdown seen following injury such as one involving the meniscus (Liu et al., 2017) or primary OA (Lohmander et al., 1989) and it has been demonstrated that both GAGs and MMPs are elevated in the synovial fluid of osteoarthritic knees (Lohmander et al., 1999;Lohmander et al., 1989;Chu et al., 2015).
In this study, the serum and synovial fluid levels of two biomarkers are reported and compared to the non-operated knee at a mean 40-year follow-up of a cohort of patients who underwent open total meniscectomies as adolescents with otherwise pristine knees, under the care of a single surgeon. Abundant evidence in the literature indicates that total meniscectomy leads to joint degeneration, an increased incidence of knee OA and a need for arthroplasty procedures (Pengas et al., 2012b;Hoser et al., 2001;Englund & Lohmander, 2004;Rockborn & Gillquist, 1996;Lee et al., 2006). Previous analysis of this cohort revealed a > 4 fold increase in knee OA and a 132 fold increase in knee arthroplasty. As an association has also been established between the cohort's OA radiographic findings and recorded range of motion and PROMs (Pengas et al., 2012b), it follows that an attempt to evaluate biomarkers for OA in this cohort is logical.
Our hypothesis is that there is a significant difference in MMP-3 and GAG levels between the operated and non-operated knees which correlates with their serum levels, potentially allowing their use as biomarkers in tracking disease progression in this model of knee osteoarthritis.
Patient selection criteria
Under the care of the late Professor Ian Smillie, 313 adolescent patients underwent open total meniscectomy. One hundred of these patients that were identified as not having any other intra-articular knee pathology at the time of the operation were reviewed at 17 and 30 years post-operatively (Abdon et al., 1990;McNicholas et al., 2000). At the 30-year follow-up, both knees were evaluated radiographically in 53 patients for whom an ethical approval to be assessed at a mean of 40 years post operatively was obtained.
At the time of review, 7 patients had undergone a total knee replacement, 5 had passed away, 6 were lost to follow-up and a further 5 were unable to attend for clinical evaluation. Of the 30 patients available for clinical review, 8 had contralateral knee operations and were excluded as that knee would not be able to be used as a control. This resulted in 22 suitable patients for our study with no other knee intervention other than the removed meniscus.
All of the suitable patients (n = 22) had blood collected into 3 plain 6 mL EDTA vacutainer tubes for serum MMP-3 level quantification. This was inverted 8 times and allowed to clot at room temperature.
Patients also underwent operated and non-operated knee aspirations. Seventeen (n = 17) yielded a successful operated knee synovial fluid aspirate and, out of those, eight (n = 8) yielded a successful contralateral control knee aspirate (please see the addendum tables depicting the cohort's demographics and radiological, clinical, PROMs and biomarker raw values). We were not able to increase our non-operated knee aspirate samples, as knee washout-aspiration was not granted ethical approval and therefore was not performed. Aspirate samples were collected undiluted and transferred in plain tubes.
Samples were transferred to the on-site biochemistry laboratory and were centrifuged. They were labelled and stored at − 70°C and safeguarded against repetitive freeze-thawing cycles whilst awaiting transfer for analysis to the biochemical laboratory at Lund University, Sweden.
MMP-3 quantification
MMP-3 values were determined by a stromelysin-trapping, enzyme-linked immunosorbent assay (Walakovits et al., 1992). MaxiSorp surface (96-well) plates (Nunc, Roskilde, Denmark) were coated overnight at 4°C with 100 µl/well of a 1.0 µg/ml solution of the murine stromelysin anti-dog monoclonal antibody MAC085 (Celltech, Slough, UK) in phosphate buffered saline (PBS) coating solution (KPL, Gaithersburg, MD). The plates were then washed 4 times in a solution of 2 mM imidazole-buffered saline and 0.02% Tween 20 (KPL, Gaithersburg, MD) and incubated for 20 min at room temperature with 1% bovine serum albumin (BSA) in PBS (BSA blocking solution; KPL) to block nonspecific protein binding to the wells.
Subsequently the BSA was washed from the plates and standard recombinant human prostromelysin (Merck, Rahway, NJ) or samples of plasma were added (100 µl/well) for 1 h at room temperature. The samples were then diluted with 0.67% BSA in PBS (BSA diluent solution; KPL). The plates were washed and then incubated with a 10 µg/ml solution of rabbit polyclonal anti-human stromelysin IgG (Merck) in BSA diluent solution. The plates were washed again, after which they were incubated for 1 h at room temperature with peroxidase-labelled goat anti-rabbit IgG (KPL) diluted to 125 ng/ml in BSA solution (100 µl/well). The plates were washed again and were incubated for 5 min with a tetramethylbenzidine (TMB)-hydrogen peroxide (H 2 O 2 ) solution (0.2 g/litre TMB, 0.01% H 2 O 2 ), after which the reaction was stopped by the addition of 1 M phosphoric acid. Absorbance at 450 nm was measured spectrophotometrically using a Multiscan Multisoft plate reader (Labsystems, Helsinki, Finland) and the software Ascent 2.4.2 (Thermo Electron, Waltham, MA).
GAG quantification
Concentration of GAG was measured by a modified Björnsson Alcian blue precipitation (Bjornsson, 1993). This measures Alcian blue dye binding in proportion to the number of negative charges on GAG chains. Samples and chondroitin sulfate standards (25 μl) were precipitated for two hours at 4°C with 0.04% w/v Alcian blue, 0.72 M guanidinium hydrochloride, 0.25% w/v Triton X-100, and 0.1% v/v H 2 SO 4 (0.45 ml). The precipitates were collected after centrifugation (16,000 g, 15 min, 4°C), then dissolved in 4 M guanidinium hydrochloride, 33% v/v 1-propanol (0.25 ml), and transferred to 96-well micro-titer plates prior to absorbance measurement at 600 nm (Larsson et al., 2009).
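Absorbance-based assays like the one above implicitly rely on a standard curve: a line fitted to readings of known standards, inverted to estimate sample concentrations. The sketch below illustrates that step with made-up absorbance values, not measurements from this study.

```python
# Hedged sketch of a standard-curve calibration; all numbers are illustrative.
import numpy as np

std_conc = np.array([0.0, 12.5, 25.0, 50.0, 100.0])   # known standards (ug/ml)
std_abs = np.array([0.02, 0.11, 0.21, 0.40, 0.79])    # A600 readings (made up)

# Fit absorbance = slope * concentration + intercept.
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def estimate_conc(absorbance):
    """Invert the fitted line to estimate a sample's concentration."""
    return (absorbance - intercept) / slope

sample_conc = estimate_conc(0.30)   # a hypothetical sample reading
```

With these illustrative standards, a sample absorbance of 0.30 maps to roughly 35–40 ug/ml; in practice the curve, units, and dilution factors come from the specific assay protocol.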
Radiographic analysis
Radiographic analysis included bilateral knee antero-posterior weight-bearing radiographs, with the knee flexed at 15°. These radiographs were blindly and independently scored by two investigators (IP, AA) from a distance of 60 cm without magnifying equipment, using the Kellgren-Lawrence (Abdon et al., 1990) and the Ahlbäck (McNicholas et al., 2000) scoring systems for knee tibiofemoral joint (TFJ) OA. These are two of the most widely accepted radiographic scoring systems for knee OA.
Statistical analysis
Data were analysed using GraphPad Prism 6 (GraphPad Software, Inc., California, USA). The Shapiro-Wilk test was used to test data for normality. Parametric data were compared using a paired t-test. Correlations were analysed using Pearson's correlation coefficient. P-values < 0.05 were considered statistically significant (* denoting P < 0.05, ** denoting P < 0.005, *** denoting P < 0.0005).
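The analysis pipeline above (normality check, paired comparison, correlation) can be sketched in a few lines. The data below are synthetic and purely illustrative — not the study's measurements — and the scipy routines are used as generic stand-ins for the Prism analyses.

```python
from scipy import stats

# Hypothetical paired measurements (e.g., operated vs. non-operated knees).
operated = [41.0, 35.5, 52.3, 48.1, 39.9, 44.2, 50.7, 46.4]
non_operated = [30.2, 28.9, 33.1, 31.5, 29.8, 32.4, 34.0, 31.1]

# Shapiro-Wilk normality check on the paired differences,
# a prerequisite for the parametric paired t-test.
diffs = [a - b for a, b in zip(operated, non_operated)]
shapiro_p = stats.shapiro(diffs).pvalue

# Paired t-test between the two knees.
t_stat, t_p = stats.ttest_rel(operated, non_operated)

# Pearson correlation between the two series.
r, r_p = stats.pearsonr(operated, non_operated)
```

With paired data, testing the differences (rather than two independent samples) matches the study design of comparing each patient's operated knee against their own contralateral knee.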
Serum MMP-3 levels can predict MMP-3 levels in the synovial fluid of post-meniscectomy patients
A positive correlation was observed between the serum MMP-3 and synovial MMP-3 concentrations of the operated joints (p = 0.0252, r = 0.7706) (Fig. 1a) but not of the non-operated joints (p = 0.0764, r = 0.6576) (Fig. 1b). Serum MMP-3 levels could therefore potentially be utilised as a surrogate marker of synovial MMP-3 levels in these operated knees.
MMP-3 is chronically upregulated and correlates with decreased GAG at 40 years post-meniscectomy
We compared levels of MMP-3 and GAGs in the operated and non-operated knees of the 8 patients who had successful bilateral knee synovial fluid aspirations. Paired analysis of the levels of MMP-3 in the synovial fluid revealed that, even at this late time point after surgery, the levels of MMP-3 are significantly higher in the operated knee (p = 0.0132) (Fig. 2a), with synovial GAG levels in the operated joint significantly reduced (p = 0.0487) (Fig. 2b). There was an inverse correlation between the MMP-3 and GAG levels within the synovial fluid when comparing all 16 samples from both operated and non-operated knees (p = 0.0458, r = − 0.73810) (Fig. 2c).
The mean values and standard deviations of GAG and MMP-3 concentrations are presented in Table 1.
GAG levels, patient age and MMP-3 levels predict quality of life in a multiple regression model
To understand how these parameters affect quality of life 40 years after meniscectomy, we built a mathematical model in which, using clinical and biochemical parameters, we could predict the quality of life using a stepwise regression (AIC function in R) (Sakamoto et al., 1986). The final model included age at surgery, GAG 40 years after surgery, the change in GAGs from baseline and the MMP-3 levels 40 years after surgery. The coefficients are displayed in (Fig. 4) and (Table 2), and the R script and diagnostic plots are shown in Additional file 1. Remarkably, this model predicted the quality of life (QOL) almost perfectly, with an R 2 of 0.9998 and a p value = 0.0087. Taken together, these data suggest that these factors all contribute to quality of life.
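The forward-stepwise-by-AIC procedure mentioned above can be sketched as follows. This is a hedged illustration in the spirit of R's AIC-driven selection, not the study's script: the predictor names and synthetic outcome (which depends on `age` and `gag` only) are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
data = {name: rng.standard_normal(n) for name in ["age", "gag", "mmp3"]}
# Synthetic outcome depending on age and gag only (assumption for illustration).
y = 2.0 * data["age"] - 3.0 * data["gag"] + 0.5 * rng.standard_normal(n)

def aic(columns):
    """Gaussian-likelihood AIC = n*log(RSS/n) + 2k for an OLS fit
    with intercept on the given columns (k = number of coefficients)."""
    X = np.column_stack([np.ones(n)] + [data[c] for c in columns])
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + 2 * X.shape[1]

# Forward selection: keep adding the predictor that lowers AIC the most.
selected, remaining = [], ["age", "gag", "mmp3"]
current = aic(selected)
while remaining:
    scores = {c: aic(selected + [c]) for c in remaining}
    best = min(scores, key=scores.get)
    if scores[best] >= current:
        break
    selected.append(best)
    remaining.remove(best)
    current = scores[best]
```

On this synthetic data the informative predictors are retained because each delivers a large AIC drop, while an uninformative predictor only enters if its fit improvement outweighs the 2-point complexity penalty.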
Discussion
The most interesting finding of this study is that synovial MMP-3 levels are upregulated in operated knees, correlating with radiographic scores of OA and inversely correlating to the drop in synovial GAG concentration. The fact that MMP-3 levels were raised so long after the cartilage insult indicates the presence of a chronic response and not a short lived inflammatory one as previously suggested. Interestingly, operated knee synovial MMP-3 levels correlated with serum MMP-3 levels, whereas non-operated knee synovial MMP-3 levels did not. This is an important finding, as it presents serum MMP-3 levels as a potential and easily measurable biomarker for knee OA.
In addition, multiple regression analysis demonstrated that we can measure the patient's quality of life based on these biomarkers, which unlike PROMs cannot be "manipulated or exaggerated" by the patients, providing a much more robust measurement which can also allow us to estimate the QoL a patient would likely have after surgery.
This study also demonstrated that synovial fluid GAG levels were reduced in operated knees compared to non-operated knees and were inversely correlated to both the Kellgren-Lawrence and Ahlbäck radiographic scores of OA. This study confirms that the greater the interval between meniscectomy and synovial fluid sampling, the lower the concentration of GAG observed.
In OA, a number of proteases generated by the synovial membrane and chondrocytes damage the molecular and structural composition of the cartilage extracellular matrix. Two of these hydrolysed molecules are type-II collagen and aggrecan (Jones & Riley, 2005). Early matrix damage and acute injury have been associated with increased aggrecan proteolysis and increased levels of proteoglycan fragments in the synovial fluid, which however were seen to decline with time (Lohmander et al., 1999;Lohmander et al., 1989;Pratta et al., 2003;Dahlberg et al., 1992;Lohmander et al., 1992;Lohmander et al., 1993;O'Driscoll, 1998;Rothwell & Bentley, 1973). This finding was supported by Larsson et al. in a study at a mean 18-year follow-up post meniscectomy, measuring aggrecanase-generated ARGS neopeptide (Larsson et al., 2010;Larsson et al., 2012). When examining proteases, it seems that stromelysin (MMP-3) and collagenase (MMP-1) are two of the most extensively studied biomarkers (Poole, 2003). MMP-3 is produced by synovial membrane cells and by chondrocytes in response to mechanical stimulation and exposure to inflammatory cytokines (Fitzgerald et al., 2004). Although not all investigators agree that there is a significant increase in the levels of MMP-3 or MMP-1 in patients with OA (Garnero et al., 2005), increased levels of synovial fluid MMP-3 have been detected in patients suffering with hip and knee OA (Chen et al., 2014;Ulrich-Vinther et al., 2003;Georgiev et al., 2018) and in smaller joint degeneration (Jiang et al., 2013), exhibiting greater increases as compared with MMP-1 levels (Lohmander et al., 1993;Ishiguro et al., 1999). Recently, elevated MMP-3 levels were highlighted in post-meniscectomy subjects (Liu et al., 2017), suggesting that it would be reasonable to consider synovial fluid MMP-3 as a potential biomarker for OA and perhaps a valid biomarker for follow-up in post-meniscectomy patients.
We selected a model of knee OA on the background of total meniscectomy as there is strong evidence in the literature that total meniscectomy leads to joint degeneration resulting in increased incidence of knee osteoarthritis (Pengas et al., 2012b;Hoser et al., 2001;Englund & Lohmander, 2004;Rockborn & Gillquist, 1996;Lee et al., 2006). Hence, one of the strong suits of this study lies in the very well defined and studied cohort of patients. All patients in this cohort underwent open meniscectomy with a uniform operative technique and post-operative rehabilitation protocol and their follow-up period of 40 years is unique in the published literature.
The use of biomarkers as surrogate endpoints in the diagnosis of OA is understandably desirable to scientists and surgeons alike; however, it is acknowledged that these molecules are metabolised by other tissues and serum levels may be affected by this. Oversimplification of their metabolic pathways should be avoided and interpretation should be made with caution (Myers, 1999). Arguably, the small number of subjects in this truly long-term follow-up study limits our ability to reach concrete conclusions that can be generalised; however, we have indicated that MMP-3 has the potential to be utilised as a marker, although further research is needed.
An improved approach for assessment of disease progression in OA could be achieved via a 'multisource feedback', combining radiographic evaluation with PROMs and biomarkers such as MMP-3, as the quest for a simple and easily measurable OA biomarker continues. The fact that this has not yet fully materialised serves to underline the complexity of this 'simple' monocellular tissue.
Conclusions
Our results suggest that serum levels of MMP3 could be used as a potential biomarker for knee osteoarthritis and possible disease predictor, using a simple blood test. Larger cohorts are desirable in order to prove or disprove this finding. | 2018-06-16T18:32:43.439Z | 2018-06-15T00:00:00.000 | {
"year": 2018,
"sha1": "be362db593870efa3c580d12ff5947dd9c589e47",
"oa_license": "CCBY",
"oa_url": "https://jeo-esska.springeropen.com/track/pdf/10.1186/s40634-018-0132-x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "be362db593870efa3c580d12ff5947dd9c589e47",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233220004 | pes2o/s2orc | v3-fos-license | Investigating Opportunities to Support Kids' Agency and Well-being: A Review of Kids' Wearables
Wearable devices hold great potential for promoting children's health and well-being. However, research on kids' wearables is sparse and often focuses on their use in the context of parental surveillance. To gain insight into the current landscape of kids' wearables, we surveyed 47 wearable devices marketed for children. We collected rich data on the functionality of these devices and assessed how different features satisfy parents' information needs, and identified opportunities for wearables to support children's needs and interests. We found that many kids' wearables are technologically sophisticated devices that focus on parents' ability to communicate with their children and keep them safe, as well as encourage physical activity and nurture good habits. We discuss how our findings could inform the design of wearables that serve as more than monitoring devices, and instead support children and parents as equal stakeholders, providing implications for kids' agency, long-term development, and overall well-being. Finally, we identify future research efforts related to designing for kids' self-tracking and collaborative tracking with parents.
GPS Technology (CGT) [8], which are designed for parents who want to ensure the safety of their child. In contrast, commercial kids' wearables are widely available and have recently become a topic of interest in HCI and family informatics research [58]. Existing commercial kids' wearables may also reflect technological trends and consumers' needs, and we aim to complement the previous research on kids' wearables by looking into how these commercial devices are intended to be used in the wild. More specifically, we set out to characterize the technological landscape of kids' wearables and evaluate the extent to which existing commercial devices meet parents' information needs, with the broader goal of investigating if and how kids' wearables can be used to enhance children's agency and well-being.
To that end, we surveyed 47 wearable devices marketed for kids. In analyzing the features of these devices, we built upon Kuzminykh and Lank's work on parents' information needs [48], which was based on interviews and ecological momentary assessment (EMA) studies with parents. As a starting point, we used the four categories drawn from their work-routine, health, schooling, and social & emotional information needs-to analyze specific features embedded in existing kids' wearables. We then revised and extended these categories because our analysis revealed features that can satisfy additional needs that are not covered in Kuzminykh and Lank. We found that kids' wearables were, in fact, wearables for caregivers, designed mainly to ensure caregivers' peace of mind; the devices were equipped with sophisticated features to meet caregivers' safety and communication needs. That said, 57% of devices offered unique features to support children's health and 38% provided features to develop good habits, a promising direction that wearables for kids can offer beyond the current dominant trend toward surveillance. Such a shift from viewing kids as objects of monitoring to agents of self-tracking affords designs for helping kids learn and practice self-tracking early on and develop a sense of agency to manage their health and well-being. We highlight opportunities to support children's self-tracking by designing for children's interests and values, to facilitate parental involvement through collaborative tracking and scaffolding, and to enrich parent-child communication through cross-device collaboration.
In summary, the key contributions of our work are: (1) surveying and synthesizing the technological capabilities of existing commercial kids' wearables; (2) evaluating the extent to which existing capabilities of kids' wearables meet parents' information needs; and (3) identifying design and research opportunities for kids' wearables that strike the balance between children's self-regulation and parents' information needs.
calories burned and heart rate, which are commonly observed in adult activity trackers. In addition, children's wearables are generally more colorful and have more features to encourage usage of the device itself (e.g., games).
Research on children's use of wearables for fitness remains sparse, and existing work involves children using devices designed for adults. For example, Miller and Mynatt developed StepStream [55], a social fitness system for middle school students that employs Fitbit Zip pedometers. They found that students reported a greater sense of enjoyment around fitness and that the least-active students increased their daily activity. Similarly, ThinkActive [26] had primary school students wear a pedometer-based activity tracker to encourage reflection on personal activity. Both systems, however, provide children with limited access to their data. Our work investigates the potential and challenges of wearables designed explicitly for children.
Researchers have also investigated the potential for wearables to support children with neurodevelopmental disorders [6] such as attention-deficit/hyperactivity disorder (ADHD) and autism spectrum disorder (ASD). In their review of ASD-support technology for children, Sharmin and colleagues [69] found that wearables were used to enhance social interaction and teach important life skills. More recently, Cibrian and colleagues [16] explored the challenges in designing wearables that support children's self-regulation skills. They highlight the need to engage both children and caregivers in setting goals and rewarding behaviors as a means of balancing co-regulation and the long-term goal of self-regulation. Our work investigates the functionality of commercially available wearables for kids to shed light on how these devices can be leveraged to address this need.
Child Surveillance Technology
Other research on children's wearables concentrates on Child Surveillance Technologies (CSTs) and Child GPS Technologies (CGTs). Marx and Steeves [52] argue that the surveillance of children begins early in life with infant monitoring and nanny cams, and continues in the form of GPS tracking devices once children are old enough to leave the home. Infant monitoring technologies (e.g., [12,56]) are increasingly common, contributing to what Leaver [49] calls the normalization of "intimate surveillance," defined as the "purposeful and routinely well-intentioned surveillance of young people" who have little or no agency to resist. While infant monitors are advertised as a way to provide parents with peace of mind, researchers have found that the use of baby wearable technology in the home aggravated parental anxiety and interfered with the social act of parenting [77].
Many other researchers have examined perceptions and usage of CGTs in families. Outside the home, CGTs are the predominant method of surveillance. Bettany and Kerrane [8] examine the controversy surrounding CGTs and their usage as mediators in parent-child relationships, as well as the implications for children's welfare and agency. Vasalou and colleagues [75] found that only a minority of parents are in favor of location tracking: parents who favor it feel that tracking technologies provide security, alleviate anxiety, and reduce uncertainty, while parents who oppose it place more value on familial trust and children's self-direction. Boesen and colleagues [10] found that while location tracking can assist in digital nurturing [63], it also has the potential to undermine trust between parents and children by removing opportunities for trust-building encounters. Ferron and colleagues [22] propose proximity detection devices as a compromise between surveillance and trust, since they support parents' goals of protecting children while supporting their autonomy. Jørgensen and colleagues [42] shifted focus from location tracking to activity monitoring technologies and found that their usage reduces voluntary information disclosure from children and negatively influences trust in the parent-child relationship. While previous work addresses parental attitudes towards CSTs and CGTs and the impact of their usage on trust, it is of growing interest to investigate the potential for children's wearables beyond their capability to serve as surveillance tools. More recently, Kuzminykh and Lank situated the usage of CGTs within the context of parents' information needs, investigating the types of information that parents seek about their children, their motivations, and potential uses for this information [48]. We draw upon this work to understand the purpose and potential usage of features in children's wearables.
Family-oriented Information Systems
One promising research avenue is using technology to promote healthy behaviors in a family context. Termed "family informatics" [60], these technologies enable family members to share and monitor family-oriented activities, as well as health and parenting goals. In addition to supporting communication and connectedness (e.g., [23,35,38,73,78]), researchers have explored the potential for family-oriented information systems to promote health and wellness in a family context. Grimes and colleagues [29] found that collaboratively completing and reflecting on health data can support deeper reflection about health behaviors within the family. Our analysis of design opportunities for kids' wearables is motivated by research on the benefits of family-level tracking for improving children's health. Some trackers are used by the whole family, as health activities are more sustainable when the entire family is involved [60]. Mobile applications such as Snack Buddy [68] and TableChat [51] aim to encourage healthy food choices within families by creating transparency between family members' eating habits.
Parents can take a large part in monitoring their children's health, guiding them until they gain the ability to self-track. Tools such as Baby Steps [46] and Estrellita [33] support parents in tracking health data for their infants and young children. Devices like WAKEY [13] and MAMAS [40] center on helping parents shape their children's habits, allowing the children to take partial responsibility in tracking in order to self-regulate their own routines. For the promotion of physical activity, Saksono and colleagues [67] developed Spaceship Launch, a collaborative exergame that employs activity trackers worn by both parents and children. In later work, Saksono and colleagues [65] noted that research on tools for physical activity promotion in a family context has been limited. Recently, Oygür and colleagues [58] investigated parents' and children's collaborative use of kids' activity trackers. They found that parents used activity trackers to motivate their children to be more active, monitor their children's health and wellness, and teach their children responsibility and independence. These studies motivate our work to evaluate how existing kids' wearables meet parents' information needs for monitoring their children's health, adding to a growing body of literature in HCI that investigates the potential for technology-based, family-oriented approaches to improving children's health and well-being.
METHODS
Our goal was to synthesize the technological capabilities of commercial kids' wearables and assess the extent to which existing capabilities meet parents' information needs. To that end, we collected commercial devices found online and evaluated their functionality using a combination of affinity analysis and deductive analysis.
Data Collection
We collected a total of 47 devices found online (22 from Google Shopping, 5 from Kickstarter, 35 from parenting blog posts; some devices appeared in multiple sources) from February to March 2020 (see Figure 1). We ended our data collection once we reached data saturation and encountered mostly duplicates in our search process. We included devices that (1) were marketed as a children's device; (2) targeted children of ages 3-12; (3) were available at the time of the data collection; and (4) had a product site available, ensuring that we had access to all device details. The level of detail on product websites varied, and during our analysis, a few of the devices became unavailable or discontinued. Because we relied on product descriptions, we were unable to investigate the usability of these devices (e.g., battery life) or explore possible features that were not advertised online. However, we were able to collect rich data about each device's features. To find these devices, we used queries containing the search term "kids" and one or more of the following: "wearable," "tracker," and "smartwatch." Example queries include "kids" & "wearable" and "kids" & "tracker." We chose to use "kids" rather than "children" because the former term appears to be more common in marketing, although the two are often used interchangeably. Initially, we considered Amazon as a primary online source. However, we found that many products on Amazon did not provide sufficient detail, such as product sites and information about device functionality. The results from Google Shopping provided richer information than those from Amazon, and thus we decided to use Google Shopping to search for devices. We evaluated results up to the fifth page (each page presents 40 items). Items on Google Shopping are not necessarily unique, resulting in a substantial number of duplicate devices.
Therefore, instead of recording all items for a search term and then excluding devices, we viewed and included devices as we searched. For each device, if it had not already been recorded, we followed the link to the device. If the device was being sold by a third-party seller, we searched the name of the product in Google to find the product site. By the third or fourth page of Google Shopping, most of the devices either did not fit our inclusion criteria or were duplicates of previous devices. However, we searched up to the fifth page as a precaution. Given the frequency of duplicates and the number of listings without product sites on Google Shopping, we turned to other platforms such as Kickstarter and parenting, lifestyle, and tech blogs, which were identified through Google searches using the queries mentioned above. Through our searches on Google, we found 11 blog posts on wearable devices for children [2,5,9,11,24,27,53,54,62,70,74]. Examining parenting blogs ensured the inclusion of the most popular devices, and we found that many devices appeared repeatedly across blogs as well as in Google Shopping.
To elicit product features and technical specifications, we collected product descriptions available on Google Shopping and Kickstarter pages, stand-alone product websites, and user manuals (when available). For each device, we recorded all of the features with as much granularity as possible. For example, we recorded unidirectional calls with and without auto-pickup as separate features. Features include various methods of communication (e.g., calls, text messages), device settings (e.g., parental restriction of incoming calls), fitness tracking capabilities (e.g., step count, movement), and other dimensions, resulting in 57 unique features. In addition, we captured each device's product name, cost, form factor, target user groups, and method of data access.
Data Analysis
We first performed affinity analysis [32] on the data. After collecting all the features for each device, we grouped similar features into representative categories. All four researchers met and engaged in this process iteratively and refined initial categories (e.g., "parent-set contacts" and "call firewalls") into higher-level sub-themes (e.g., "parental restrictions on incoming and outgoing calls").
Next, we performed deductive analysis to contextualize the purpose of the functionality we recorded and further categorize the data. Since many of the devices were advertised to parents and contained features for their benefit (e.g., setting virtual GPS boundaries), we aimed to understand parents' needs and information usage. Kuzminykh and Lank present a categorization of parents' information needs for managing their children, and find that parents are primarily interested in routine information, health information, and social-emotional information [48]. Routine information includes sub-themes of sleep, food, physical activity, and location and safety, while health information encompasses general well-being and special situations. Social-emotional information pertains to mood, personal connection, and getting to know and understand a child. While these categories provide a solid foundation, they do not capture all of the features present in the devices. To that end, we adopted some of the themes and sub-themes of Kuzminykh and Lank's categorization and re-categorized them based on the data. For example, "location and safety" is a sub-theme under routine information, but emergency and location tracking features were very prominent in the data. Furthermore, location is often a component of safety measures in the wearables we analyzed (e.g., the child presses an SOS button, which calls the parent and sends the child's location). Therefore we elevated "safety" to be a higher-level theme and included "location" as a sub-theme. We also re-categorized sleep and daily activity as health information rather than routine information, given the impact of sleep and activity on children's health.
Finally, we created new categories to fully capture the data. For example, Kuzminykh and Lank focus on what information parents want to obtain from their children, but not how they want their children to act or behave. Rather than focusing solely on parents' information needs, we call attention to how wearable devices can be used in caring for children. We created a new category, "habit formation," which encompasses features that may help children develop habits like completing their chores.
FINDINGS
Our goal was to investigate the functionality of children's wearables and classify how existing features support parents' information needs. We found that wearables for children are complex devices that assist parents in communicating with their children and ensuring their safety, as well as in managing their children's health and habits. We also identified features that focus on children's interests rather than parents' needs. An overview of the devices we analyzed is shown in Table 1.
Children's Safety
Many features embedded in wearable devices aim to offer parents peace of mind about their children's safety, mainly by providing information about children's whereabouts, enabling rapid communication in case of emergency, and offering on-demand video and audio surveillance. This scheme is exemplified in Wizard Watch's [W13] advertising phrase: "Gain peace of mind by knowing exactly where your child is, how to reach them... and knowing they can alert you if they are in trouble, every moment of the day!" In this section, we detail the specific features that support children's safety (Table 2).
Whereabouts.
Our results show that a majority of devices support safety features, with an emphasis on information about children's whereabouts. This information is typically viewed on a companion mobile application belonging to the parent. Parental access to children's real-time GPS location is the most common feature overall, appearing in 68% of devices (n = 32), and nearly all devices with GPS tracking (27/32) also enable geofencing (i.e., setting virtual geographic boundaries). For some parents, continuous real-time information about children's movements can be seen as a way to reduce uncertainty, alleviate personal anxiety, and provide a sense of security [75]. In some cases, location tracking can afford more than momentary relief: 26% of devices (n = 12) provide location history, which gives parents more insight into how and where children spent their time.
Access to location information can be useful in both routine and emergency situations. With geofences, parents can verify that routine activities were performed without an issue. For example, a parent can set geofences around their home and the child's school and receive daily alerts to confirm that the child completed their journey. In case of an emergency, accessing real-time location allows parents to come to their children's aid. To continue with the previous example, if the child gets lost on their way home from school, triggering an alert, then the parent can use location information to plan their response (e.g., calling to give the child directions or going to pick them up). This functionality is in line with Kuzminykh and Lank's finding that one of the primary use-cases for parental information is performing emergency activities [48]. Although GPS is the dominant form of location tracking, six devices use proximity detection to inform parents about their children's whereabouts. For example, Buddy Tag [W20] will send an out-of-range alert to a parent's phone when their child moves farther than 80 to 120 feet away in an outdoor space. Another device, Mommy I'm Here Child Locator [W23], sounds an alarm on the child's device to help parents locate them more easily in public. Two devices, Weenect Kids [W46] and Trax [W40], take a different approach to proximity detection by incorporating augmented reality. This functionality makes it easier for parents to identify where their children are in crowds and how far away they are by overlaying children's location and distance on a real-world view using the phone's camera. A key difference between GPS tracking and proximity detection is that the former is used when children and parents are separated, while proximity detection can only be used when children and parents are together.
Alerts in Emergency Situations.
In addition to features locating kids' whereabouts, most devices have at least one feature designed to notify parents of emergency situations, or events that occur outside of children's normal routines. While Kuzminykh and Lank classified special situations as a subtheme of health information, we found that health emergencies (e.g., fever) were not a primary focus. Instead, the information collected in these situations emphasizes a child's immediate safety, which may be linked to whereabouts. One widespread feature that supports emergency activities is SOS buttons, appearing in 57% of devices (n = 27). By pressing a designated button, children can discreetly and easily call their parents in an emergency (e.g., getting lost on their way to school). In four devices, pressing the SOS button will also send the child's location to the parent, suggesting that location and communication are closely linked with ideas of safety.
Feature | Description (number of devices) | # Devices (N = 47)
Device-level controls and alerts | Parents have remote control over device settings (9) (e.g., set quiet mode) and are alerted to device states such as inactivity (2) and removal (2) | 11
Accessible by multiple caregivers | Other caregivers (e.g., grandparents) can access the device's data | 10
Audio & video surveillance | Parents can access the device's microphone for voice monitoring (10) or the camera for video monitoring (2) | 10
Proximity detection | Parents receive an alert when children move farther than a set distance away from them | 6
Send emergency info | In case of an emergency, the device automatically sends the child's GPS location (4) or voice recording of the child's surroundings (2) to emergency contacts | 5
In transit alert | Parents are alerted to transit incidents, such as high speeds (2) or collisions (1) | 3
Send directions to child | Parents send directions to let children self-navigate | 2
Augmented reality | Device annotates children's location and distance on a real-world view seen through the parent's camera | 2
Store medical info | Device stores the child's medical information | 1
Table 2. Safety features concentrate on parents' ability to monitor their children and communicate with them in times of emergency. Monitoring emphasizes the collection of information related to children's whereabouts, and a substantial portion of devices also provide on-demand video and audio surveillance.
Another dimension of SOS functionality is that some devices alert multiple emergency contacts, rather than just the parent. For example, KidsConnect GPS Tracker Phone [W12] will text the child's GPS location to three emergency contacts, and then call until one of them answers. The responsibility to ensure a child's safety and respond to emergency situations typically does not rest solely on one parent, and some devices aim to distribute this responsibility among trusted adults in a child's life. Ten devices allow multiple caregivers to access the data collected about a child, as seen in Table 2, with a focus on the sharing of location information. Jiobit [W15] refers to these other caregivers as part of the "village" it takes to raise a child, including nannies, co-parents, grandparents, or any other trusted individuals.
Other features for emergency situations include device-level alerts such as device inactivity alerts and device removal alerts, as well as in transit alerts for speed and collisions, as seen in Table 2. These features can inform parents when their children are in immediate danger or might require assistance. Device inactivity and removal alerts are somewhat distinct in that they can notify parents of potential future threats rather than immediate danger (i.e., the parent will not have the information they need to respond to an emergency if the child is not wearing the device). AngelSense [W2], a wearable designed for people with special needs, goes a step further by allowing device removal only with a special parent key.
On-Demand Video and Audio Surveillance.
Children's wearables are approaching safety in unprecedented ways beyond the collection of location information. A large share of devices allow parents to remotely monitor their children without notification as a means of ensuring safety or security. Twenty-one percent of devices (n = 10) allow parents to remotely listen to their children's surroundings using a built-in microphone, and two devices even enable remote camera monitoring to view children's surroundings. Find My Kids GPS-watch [W6] describes audio monitoring as a way to "always be aware of the sounds surrounding your child. Know immediately if they're being bullied or going through a rough time, or potentially in danger." Similar to real-time GPS location, remote monitoring is marketed as a way to check on children throughout the day and have peace of mind about their well-being. Information captured through audio can also be used in emergency situations. For example, two devices send parents an audio recording of the child's surroundings when the child presses the SOS button. These recordings can provide parents with information about the type of emergency, other involved parties, and their child's status, which kids might not be able to communicate well.
Connectedness
Another primary purpose of wearables is to facilitate communication between parents and children. Some devices also aim to connect children with their peers. In this section, we highlight the complexities of children's wearables as communication devices (see Table 3 for an overview).
Connecting with Parents.
Our data suggest that communication features in children's devices are sophisticated and varied with respect to the modality of communication, the level of control to initiate and decline communication, and the parties allowed to communicate through the device. Modalities of communication include voice, video, text, and vibrations. Voice calls are the most common communication medium (n = 21), followed by text messages (n = 14). The power to initiate and decline communication adds another layer of complexity: communication can be unidirectional (e.g., from parent to child) or bidirectional, and one device enables auto-pickup (i.e., a child cannot refuse a call from their parent). By enabling fine-grained control, children's wearables position themselves as safe alternatives to phones that can meet parents' specific needs. For example, My Gator Watch [W37] advertises itself as "For 5-12 years old kids who are too young for mobile phones." Further, Weenect GPS tracker [W46] promises parents, "they [children] can call you but not send messages to their friends. If they don't answer, no worries, you know the GPS location of the tracker. The advantages of the mobile phone without the disadvantages." Children's wearables afford parents the ability to communicate with their children when necessary, while also protecting them from the perceived dangers of mobile phones. Eleven devices enable parental restrictions on incoming and outgoing calls, which include pre-programming approved phone numbers for outgoing calls and blocking incoming calls. By providing parents with contingency plans (e.g., auto-pickup and access to location), wearables also provide parents with security in case children do not respond to communication requests.
Connecting with Friends.
While communication features center on connecting children with their parents, some devices draw attention to children's ability to connect with their peers. In total, eight devices let children add friends with the same watch, though the supported methods of communication vary. Some devices allow children to call their friends (e.g., POMO Waffle [W29] and DokiPal [W5]), while others enable voice messages and emojis between friends (e.g., V-Kids Watch by Vodafone [W43] and Ojoy A1 [W25]). Devices market this functionality not only as a way to motivate children to use the device (e.g., "more fun with friends" [W5]), but also as an opportunity to help children learn how to communicate better (e.g., "[improve] their social skills" [W29], "broaden your child's interpersonal skills" [W24]). Additionally, connecting with friends can take place in the context of activity competitions (e.g., highest step count), in which children can engage with their friends and classmates.
Feature | Description (number of devices) | # Devices (N = 47)
Calling | Parents and children can call using voice (21) and video (5) | 22
Messaging | Parents and children can message using text (14), voice (10), group messaging (2), and video (1) | 19
Parental restrictions on incoming and outgoing calls | Parents can limit communication for children (e.g., setting call firewalls) | 11
Add/find friends with the same watch | Children can add friends with the same device to a contact list and communicate with them | 8
One-way communication (parent to child) | Only parents can initiate contact with children through haptic feedback (2), calling with auto pickup (1), and texting (1) | 3
Table 3. Communication features in children's wearables.
Activity tracking | Step count (21), calories burned (5), movement (5), and distance (4) | 26
Activity-based competitions | Compete (e.g., for highest step count) with friends (6)
Table 4 (fragment; remaining rows not recoverable). Health features in children's wearables.
However, there does not appear to be a direct relationship between the ability to communicate with peers and the presence of activity competitions (i.e., most devices have either one feature or the other). Regardless, both features support children in interacting with their peers and exercising their social skills.
Children's Health
Around half of devices provide parents with health-related information about their children, primarily related to physical activity. A minority of devices provide some other type of health-related information such as sleep, temperature monitoring, and perspiration monitoring. Table 4 shows an overview of health features.
Activity tracking.
We observe that 55% of devices provide support for activity tracking, with a focus on step count (n = 21). To a lesser extent, some devices track calories burned (n = 5), movement (n = 5), and distance (n = 4). Unlike activity trackers for adults, which emphasize caloric expenditure and heart rate in pursuit of specific activity goals (e.g., losing weight), wearables for children are more focused on general physical activity promotion and habit formation.
Step count is typically displayed on the device as a standalone number, and some devices also situate children's activity in the context of their activity goals (e.g., Garmin Vivofit Jr 2 [W8] shows a radial progress bar for active minutes). For more detailed information (e.g., aggregate data over a week-long period), children need to use their parents' phones. Other devices shift the focus away from counting steps by instead converting movement into points (e.g., Octopus by Joy [W16]). Sqord [W35] argues that by converting movement into points, which can be used for in-game rewards, the device can "track play" rather than just steps. There are different methods to encourage children to be active: goals, rewards, games, and competitions. Goals and rewards can be built into the device or set by parents. As an example of a built-in goal and reward, Ojoy A1 [W25] has a target of 4000 steps per day and rewards children with an upgrade to their in-game character if they achieve that target. POMO Waffle [W29] allows parents to customize activity goals and create real-life rewards with their children. Another way to encourage activity is through games and competition. Four devices offer activity-based games that require children to move (e.g., "jump like a frog" in LeapFrog LeapBand [W21]). Access to more games, which are not necessarily activity-based, can also be used as a reward. For example, Garmin Vivofit Jr 2 [W8] makes certain games available to children in the mobile app once they complete 60 minutes of daily activity and incentivizes activity beyond 60 minutes with more opportunities to play. In addition to games and rewards, socially-based activity events encourage children to be active. Six devices let children engage in activity competitions with friends, and three devices allow for family-based activity competitions.
By framing exercise as an enjoyable social practice, children's wearables can help build social support for physical activity and motivate children to increase their activity levels while having fun. Family practices around exercise also have a significant impact on children's activity levels, and competitions can engage the entire family in building healthier habits.
Other Metrics of Health.
Although activity tracking is the main focus of children's wearables, our data show that other types of health-related information may be of interest to parents. Some devices track sleep (n = 4), temperature (n = 2), and heart rate (n = 1). Details on sleep tracking are sparse, with little additional information provided beyond time spent asleep. One device, Fitbit Ace 2 [W7], analyzes sleep quality using the number of times the child was awakened or restless. While sleep is an important topic in terms of habit formation (e.g., adhering to a bedtime schedule), it does not appear to be a major health concern. Only one device, Kiddo [W11], tracks sleep and key vital signs (temperature, heart rate, skin temperature, and perspiration) to assist parents in communicating with healthcare providers. Children without specific health conditions usually do not have pressing health concerns [60], which may explain the shortage of devices with cardiac and body measurement capabilities.
Nurturing Good Behaviors and Habits
In this section, we draw attention to features that assist parents in helping their children form desirable habits or behaviors (see Table 5 for an overview). These habits are not restricted to a particular domain; rather, they encourage children to autonomously engage in a variety of behaviors, such as completing their chores and staying focused on relevant tasks. These features support varying degrees of parental involvement.
The most common feature for habit formation is parent-set scheduled events and alerts, which are present in 26% of devices (n = 12). Parents can schedule tasks for their children, such as brushing their teeth or completing their homework, to send reminders. The objective is to train children to adhere to schedules and develop a regular routine, so that at some point they do not need to be reminded. Octopus by Joy [W16], for example, allows parents to create a visual schedule for their children through the use of icons, which it says "empowers kids by teaching good habits and the concept of time." Revibe Connect [W31], which is designed for children with ADHD, uses vibration reminders to signal the start and end of work intervals (15 minutes of work, 5 minutes of break) to teach children to work independently and "become aware of their behavior and get back on-task." Habit formation can also be supported by rewards, which are either set by parents or built into the device. With Garmin Vivofit Jr 2 [W8], parents can assign virtual coin values to chores and let children redeem them for agreed-upon rewards in real life.
Features | Description (number of devices) | # Devices (N = 47)
Parent-set scheduled events and alerts | Parents can set or send children tasks and reminders | 12
Educational games | Games that promote learning | 4
Children-specific view in mobile app | Mobile app has a view designed for children to access their own data | 3
Send directions to child | Parents can let child self-navigate by sending directions to device | 2
Focus rate tracking | Parents can track their child's focus rate, attention span, and other data points to measure their on-task behavior | 1
Table 5. Some devices promote good habits and behaviors such as adhering to a schedule and developing a regular routine, mainly by allowing parents to schedule events and reminders for their children.
By involving both parents and children in this process, children's wearables can simultaneously encourage healthy habits and facilitate connectedness between parents and children. Other devices favor a more restrictive approach through remote parental control of device settings (n = 9), which includes turning on quiet mode, deactivating the device, and disabling games. By restricting device functionality, parents can encourage children to stay focused on the task at hand and limit their screen time.
Designing for Children's Interests
Although the information collected from wearables is mainly meant for parents' consumption, the devices are worn and used in practice by children. Four features fall outside of the structure for parents' information needs and instead cater to children's interests. These features include physical customization of the tracker (e.g., change the watch face or wristband); in-game customization of the tracker (e.g., choose an avatar); a voice assistant; and recreational games (i.e., not educational or activity-based). Twenty one percent of devices (n = 10) allow for some type of tracker customization. Previous research showed that customization features in wearable health trackers may increase one's sense of identity, which in turn is associated with favorable attitude, higher exercise intention, and greater sense of attachment towards the tracker [43]. As such, providing customization features could prevent device abandonment and may incentivize children to continue using the wearable and encourage feelings of ownership, as opposed to feeling forced to use the device (e.g., with device removal alerts).
DISCUSSION AND FUTURE WORK
The technological landscape of wearable devices for children is diverse. In this section, we discuss the key themes that emerged from our data, situate our findings in the context of parents' information needs, and make design recommendations for children's wearables. We also highlight opportunities for future work, including the integration of collaborative tracking into children's devices.
Insights from the Current Landscape of Children's Wearables
Recent work by Kuzminykh & Lank [48] considers wearables as a means to satisfy parents' information needs (routine information, health information, schooling information, and social-emotional information). We found that there is a mismatch between parents' stated information needs and the information that is collected by children's wearables. Although Kuzminykh & Lank's [48] semi-structured interviews revealed that parents resist continuous monitoring and are disinterested in location information, we found that a majority of devices are built for ongoing parental surveillance as a means of ensuring children's safety. Existing devices focus on the collection of safety information (e.g., real-time GPS location) rather than routine information, and physical activity data instead of general health information. Schooling information was not a focus of the devices we surveyed, and while most devices support parent-child communication, they do not necessarily provide the social-emotional information that parents are interested in. Despite wearables' emphasis on monitoring, our findings suggest that these devices have great potential beyond their capacity for surveillance. Close to 40% of devices are equipped with features to support children in developing good habits and self-regulation skills in the context of daily routine and time-management. Just as adults practice self-tracking to achieve various goals (e.g., improving health and productivity) [15], children stand to benefit from practicing self-tracking at an early age. Self-tracking can engage children in planning and reflection, which help children practice self-regulatory mechanisms and develop higher-level thinking skills [21]. However, more research is needed to understand how devices can support children in playing the main role in tracking and consuming their own data, rather than acting as passive wearers who collect data for their caregivers. 
In the following, we discuss the benefits and drawbacks of children's wearables, and how children can play a more active role in the design and use of these devices.
Supporting Kids' Self-tracking
Unlike wearables for adults, none of the wearables for children that we surveyed are marketed as "self-tracking" devices. Instead, the primary goal of children's trackers is parental surveillance. To effectively use wearables in promoting children's self-regulation, designers and researchers must re-envision children's role in the tracking process and empower them to become active stakeholders. As a starting point, designers should consider features that support children's agency (i.e., sense of control) and motivate them to self-track. Self-determination theory has shown that a feeling of agency enhances children's intrinsic motivation to perform target activities (e.g., physical activity [14]) and encourages the development of self-regulation [30]. As such, supporting children's sense of autonomy, agency, and empowerment has become a major topic of interest in the research community [44]. Recently, Oygür and colleagues [58] highlighted the need for kids' activity trackers that support children's agency. In their analysis of online reviews for nine kids' activity trackers, they found that parents expected trackers to teach their children to become more responsible for their health, daily tasks, and schedule. However, a lack of agency on children's side increased parents' workload and diminished their goal of facilitating children's independence. Recognizing this need, we provide design recommendations for wearables that enhance children's agency, which is not well-reflected in existing commercial wearables for kids.
While a few devices we analyzed allow children to make cosmetic changes to their devices (e.g., changing the watch face), wearables could provide more meaningful customization options, such as letting children set individualized goals and track behaviors that are important to them. Ananthanarayan and colleagues [3] explored the benefits of allowing children to craft their own tangible health visualizations based on data collected by wearable trackers, and found that children interpret health and wellness as the ability to perform activities that are important to them. Instead of focusing on traditional metrics of health like adults' wearables (e.g., heart rate), devices should support children's activities of interest and aim to introduce them to the idea of managing health and well-being, as well as help them become self-aware about their own health behaviors. Another important consideration in maintaining children's interest is to make the device fun to use [4,37]. Furthermore, rather than simply adopting the feedback mechanisms present in adults' wearables (e.g., graphical representations of activity data), designers should consider feedback that is easier for children to interpret. Toward this new design direction, involving kids in the design process early on can help designers elicit kids' values, motivators, and design insights. HCI researchers have developed many design methods and techniques to involve kids during the design process, such as cooperative inquiry [20] and co-design using fictional inquiry and comicboarding [36], which can be employed to partner and collaborate with kids.
Given that children cannot be expected to independently engage in self-tracking and reflection, parental involvement is crucial. Collaborative tracking and reflection present an opportunity for parents to teach their children good habits and behaviors, while also encouraging the development of self-regulation. To encourage sustainable, long-term collaborative tracking, scaffolding may be employed, accounting for children's developmental progress and its impact on parent-child tracking dynamics. When children are young, wearables can help parents become involved in the tracking process to assist children in setting goals, interpreting feedback, and reflecting on the data. Collective goal-setting can support health reflection within families [17]. At this stage, parents might play a more active role in reminding their children to track, and they may even record some data for children who are unable to do so themselves. As children age and begin to understand the purpose of self-tracking, parents may decrease the amount of control that they have over their children's tracking behaviors and support children in completing tasks independently. For example, children might set their own goals without parental input, but then reflect on their data with parents, who can add new insights and context to children's individually-collected data [29]. Through this process of collaborative tracking, wearables can create shared experiences for parents and children and facilitate emotional bonding [67]. That said, tracking alone does not necessarily lead to deeper reflection as a family, and therefore tracking tools should facilitate discussions around health data to help parents teach their children healthy habits [66]. An exciting avenue for future research is to explore different ways to share tracking responsibilities between parents and children, as well as ways to balance data access and the amount of information shared.
While one option is to employ parents as "guides" in teaching children how to self-track, another option is to have parents track same or different data alongside their children (e.g., [59]).
Wearables as an Alternative to Smartphones
For parents who want to communicate with their children but are worried about mobile and online safety (e.g., [41,50]) and smartphone addiction [31], wearables present a compelling alternative. Because wearables can serve as children's primary "smart" devices, many wearables have communication features that are on par with smartphones (voice & video calls, and text messaging). At the same time, they provide parents with increased control over their children's communication (e.g., whitelisting contacts) and contingency plans in case of communication failures (e.g., on-demand audio monitoring and location information). However, there are limitations to replacing smartphones with wearables. The small screen and form factor of wearables can make it difficult for children to view their self-tracking data, causing them to rely on parents' devices. For example, several devices provide a "kid view" in the mobile app on parents' phones to allow children to access their data. Future work might explore how to turn the need for cross-device collaboration into a pleasurable experience for parents and children (e.g., as an opportunity for reflection and bonding).
We also highlight opportunities for wearables to serve as more than communication devices by facilitating connectedness between parents and children. While about half of the devices we surveyed allow for some type of communication between parents and children, the presence of communication features alone does not necessarily promote connectedness or address parents' desire for social-emotional information about their children. From a study on communication between working parents and their children in China, Sun and colleagues found that mobile phones, despite being the most common means of communication, were rated by children as the least desirable way for parents to show their love [71]. In response, they developed e-Seesaw, a tangible awareness system that facilitates connectedness by letting parents and children play together remotely. Given that young children may also struggle to communicate their needs and wants through speech, more research is needed to explore alternative methods of communication between parents and children. Rather than serving as simple communication devices or monitoring tools, children's wearables have the potential to enrich parent-child communications and thus enhance their relationships.
Limitations
We aimed to capture a snapshot of the current landscape of kids' wearables. Although some research prototypes may be more advanced than commercial kids' wearables, consumer wearables can still provide valuable insight into how these devices are intended to be used in a real-world context, given that research prototypes of kids' wearables are relatively sparse and not widely adopted. This work complements existing research by analyzing the functionality of existing commercial kids' wearables and identifying new directions in design.
Our findings rely on data collected from product descriptions available on Google Shopping, stand-alone product websites, and user manuals (when available). We acknowledge that these materials may not accurately represent the quality and usability of devices, as well as parents' and children's experiences with the devices. To address this limitation, we referred to Oygür and colleagues' study [58], which examines parents' perspectives of kids' activity trackers through an analysis of device reviews. Additionally, we relied on Kuzminykh and Lank's work [48] on parents' information needs to understand the potential purpose of device functionality. The use of online sources limited our ability to investigate certain features, though we note that this data collection method allowed us to provide broad coverage of a diverse array of devices, which would have been infeasible with in-depth user studies and interviews.
CONCLUSION
In this work, we surveyed the technological capabilities of commercial kids' wearables and examined how these devices address parents' information needs. We found that while many wearables emphasize children's safety and ability to communicate with their parents, a significant share are also equipped with features to support their health and the development of good habits and behaviors. Kids' wearables often have rich functionality, offering GPS tracking, activity tracking, and rich communication features. As we rethink the design of kids' wearables, we envision leveraging a combination of self-tracking and collaborative tracking to balance the needs of both parents and children. We believe that self-tracking can help children develop self-regulation skills and an awareness of their own behaviors, while collaborative tracking can facilitate parental involvement and help parents teach their kids good habits. Promising areas for future research include exploring ways to design for supporting kids' agency and interest, to leverage cross-device collaboration, and to share tracking responsibilities between parents and children. Wearable devices that empower kids and parents as equal stakeholders could promote health and wellness at a family level and help children develop the skills they need to be independent in the long-term, while supporting parents' short-term goals of ensuring their kids' safety and well-being. Our work contributes to the nascent field of family-centered informatics with a focus on the role of wearable devices.
SELECTION AND PARTICIPATION OF CHILDREN
No children participated in this work.
Multilevel Analysis on the Determinants of Overweight among Children Under Five in Kediri, East Java
Background: Child overweight and obesity are an important public health issue worldwide. Overweight and obese children are likely to stay obese into adulthood and more likely to develop non-communicable diseases like diabetes and cardiovascular diseases. This study aimed to determine factors associated with overweight among children under five in Kediri, East Java, using a multilevel analysis model. Subjects and Method: This was a case control study conducted at 25 posyandus (integrated family health posts) in Kediri, East Java, from April to May 2018. A sample of 200 children under five was selected by fixed disease sampling. Posyandu was selected by stratified random sampling. Children were located at level 1 and posyandu at level 2 in the multilevel analysis model. The dependent variable was overweight. The independent variables were maternal body mass index (BMI), exclusive breastfeeding, calorie intake, feeding pattern, and nutritional status monitoring. Overweight status was measured by weight for height z-score. The data were collected by questionnaire and analyzed by a multilevel logistic regression model run in Stata 13. Results: Maternal BMI ≥25 (b= 0.72; 95% CI= -0.98 to 1.54; p= 0.085) and calorie intake exceeding the recommended allowance (b= 1.45; 95% CI= 0.59 to 2.31; p= 0.001) increased the risk of overweight in children under five. Good feeding pattern (b= -1.11; 95% CI= -2.15 to -0.08; p= 0.034), exclusive breastfeeding (b= -0.97; 95% CI= -1.98 to 0.02; p= 0.057), and regular nutritional status monitoring (b= -4.34; 95% CI= -6.42 to -2.21; p<0.001) decreased the risk of overweight. Posyandu showed negligible contextual effect on the incidence of child overweight with ICC= 0.98%. Conclusion: Maternal BMI ≥25 and calorie intake exceeding the recommended allowance increase the risk of overweight in children under five. 
Good feeding pattern, exclusive breastfeeding, and regular nutritional status monitoring decrease the risk of overweight in children under five. Posyandu has a negligible contextual effect on child overweight.
BACKGROUND
Obesity is still a problem in the world's spotlight; WHO has declared obesity a global epidemic. An International Obesity Institute in London states that 1.7 billion people on Earth are overweight (Setiyaningsih et al., 2015). It is estimated that 3.4 million child deaths worldwide in 2010 were due to obesity, and the prevalence of obesity in children reached 47.1% in 2013 (Nugrahani et al., 2016). Data from WHO in 2010 mentioned that nearly forty million children in the world under the age of five are overweight (WHO, 2010). Toddlers in Asia, including Indonesia, have a 2.5 to 3.5% greater risk of obesity (Nugrahani et al., 2016). The results of the Basic Health Research (Riskesdas) in Indonesia in 2013 stated that the national prevalence of obesity in children fluctuated over a six-year period: 12.2% in 2007, 14% in 2010, and 11.9% in 2013 (Riskesdas, 2013). Data from the East Java Health Office stated that the prevalence of obesity among under-fives reached 17.1%, still above the national prevalence in Indonesia. In Kediri, 324 children under five have an overweight nutritional status. The absence of health service programs for handling obesity in children has led the community to assume that obesity is not a problem that must be addressed.
The causes of obesity are multifactorial, including genetic factors, socioeconomic, environmental, and behavioral conditions (Portela et al., 2015). Obesity can occur due to the interaction of genetic factors and the contribution of environmental changes (Prasetyaningrum et al., 2016). Parents of overweight children often lack awareness: several studies show that more than 60% of parents underestimate or disregard the nutritional status of their children (Lundahl and Kidwell, 2014; Reitmeijer-Mentink et al., 2013; Tompkins et al., 2015; Howe et al., 2017).
Social factors that influence the attitudes of parents toward obese children include the type of parental leadership, ethnicity/race, income, education, age, and gender of children (Black et al., 2015). A study conducted by Grazuleviciene in 2017 mentioned that the type of maternal care and smoking behavior are also associated with the incidence of obesity in children (Grazuleviciene et al., 2017). Another factor that causes obesity in children is excessive consumption of unhealthy foods, because these foods are high in salt, sugar, and fat (Nirvana, 2012).
Obesity in toddlers, if not handled properly, can continue into adulthood. Obesity has a bad impact on children as adults, disrupting not only physical health but also psychological and social well-being (Herawati et al., 2016). Cardiovascular and other diseases that can occur due to obesity include heart disease, stroke, diabetes, and cancer, which can ultimately cause death (Wiardani et al., 2016).
One of the government's efforts at early detection of overweight nutritional status is the establishment of posyandu. One of the objectives of posyandu is to maintain and improve the health of infants through the monitoring of nutritional status (Rahaju et al., 2008). Existing posyandu still focus on malnutrition cases; not many posyandu also handle obesity. Indonesian people still think that child obesity is not something that needs to be addressed.
SUBJECTS AND METHOD
1. Study Design
This was an analytic observational study with a case-control design. The study was carried out in 25 posyandus in the Balowerti community health center area in Kediri, East Java, from April to May 2018.
Population and Samples
The study population was all children under five in the Balowerti community health center area. A sample of 200 children was selected by fixed disease sampling.
Study Variables
The dependent variable was child nutritional status (overweight). The independent variables were maternal nutritional status, exclusive breastfeeding, feeding pattern, food intake, and nutritional status monitoring at posyandu.
Operational Definition of Variables
Maternal nutritional status was calculated with the Body Mass Index (BMI) formula: body weight (kg) divided by height squared (m²). Maternal weight was measured by a weight scale. Maternal height was measured by microtoise. The measurement scale was continuous.
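As a small illustration of this definition (not part of the paper), the BMI calculation and the study's maternal overweight cutoff of BMI ≥25 could be sketched in Python as follows; the example weight and height are hypothetical:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight (kg) divided by height squared (m^2)."""
    return weight_kg / height_m ** 2

def is_overweight(weight_kg: float, height_m: float, cutoff: float = 25.0) -> bool:
    """The study dichotomizes maternal BMI at 25 (BMI >= 25 = overweight)."""
    return bmi(weight_kg, height_m) >= cutoff

# hypothetical measurements, not study data
print(round(bmi(68.0, 1.60), 1))   # -> 26.6
print(is_overweight(68.0, 1.60))   # -> True
```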
Exclusive breastfeeding was defined as infants receiving only breast milk from birth to six months of age. The data were collected by questionnaire. The measurement scale was categorical.
Feeding pattern was defined as maternal behavior in giving food to toddlers, which includes preparing food, cooking it, and presenting it to the child. The data were measured by questionnaire. The measurement scale was continuous.
Food intake was defined as the amount of energy consumed from food, measured using a 24-hour food recall; subjects had to recall what their children consumed over 24 hours, from waking up to going to bed at night. The data were collected by questionnaire. The measurement scale was continuous, but for the purpose of data analysis it was transformed into dichotomous form (below vs. at or above the recommended nutrient adequacy rate, NAR).

Nutritional status monitoring at posyandu was defined as the regularity with which children came to posyandu for weighing over the last 6 months. The data were measured by questionnaire. The measurement scale was continuous, but for the purpose of data analysis it was transformed into dichotomous form, coded 0 for <4 times and 1 for ≥4 times.
Data Analysis
Univariate analysis was done to see the frequency distribution and percentage characteristics of study subjects. Bivariate analysis was conducted to study the relationship between child overweight and each independent variable using the chi-square test and odds ratio (OR) calculation with 95% confidence intervals (CI). Multivariate analysis used multilevel logistic regression.
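For illustration, an odds ratio and its 95% confidence interval for one exposure can be computed from a 2×2 table with the standard Woolf (log) method. This is a generic sketch with hypothetical counts, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
         a = exposed cases,   b = exposed controls,
         c = unexposed cases, d = unexposed controls.
       Woolf (log) method: exp(ln OR +/- z * SE),
       where SE = sqrt(1/a + 1/b + 1/c + 1/d)."""
    odds_ratio = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lower = math.exp(math.log(odds_ratio) - z * se)
    upper = math.exp(math.log(odds_ratio) + z * se)
    return odds_ratio, lower, upper

# hypothetical 2x2 counts for one exposure (not from Table 2)
or_, lo, hi = odds_ratio_ci(30, 47, 27, 96)
print(round(or_, 2))  # -> 2.27
```

If the 95% CI excludes 1, the association would be considered statistically significant at the 5% level.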
Research Ethics
The research ethics included informed consent, anonymity, confidentiality, and ethical clearance. Ethical clearance for this research was obtained from Dr. Moewardi hospital, Surakarta.

RESULTS
Table 1 shows that 143 children (71.5%) had normal nutritional status and 57 (28.5%) were overweight. There were 123 mothers (61.5%) with normal weight and 77 (38.5%) who were overweight. Toddlers who did not receive exclusive breastfeeding numbered 135 (67.5%), and those who received exclusive breastfeeding numbered 65 (32.5%). Children with calorie intake ≥NAR numbered 147 (73.5%). As many as 174 children (87%) visited posyandu to monitor their nutritional status.
The results of bivariate analysis
The bivariate analysis looked at the relationship between the independent variables (maternal nutritional status, exclusive breastfeeding, feeding pattern, food intake, and nutritional status monitoring at posyandu) and the dependent variable (overweight). The results of the bivariate analysis can be seen in Table 2.

Multilevel Analysis
Table 3 shows that maternal nutritional status was associated with overweight in children under five: having an overweight mother increased the risk of overweight among children (b= 0.72; 95% CI= -0.98 to 1.54; p= 0.085).

There was a negative effect of exclusive breastfeeding on overweight in children under five. Exclusive breastfeeding reduced the risk of overweight in children (b= -0.97; 95% CI= -1.98 to 0.02; p= 0.057).
There was a negative effect of feeding pattern on child overweight. A good feeding pattern reduced the risk of overweight among children (b= -1.11; 95% CI= -2.15 to -0.08; p= 0.034).

There was a positive effect of food intake on overweight in children. Higher food intake (≥NAR) increased the risk of overweight in children (b= 1.45; 95% CI= 0.59 to 2.31; p= 0.001).

There was a negative effect of nutritional status monitoring at posyandu on child overweight. Regular nutritional status monitoring at posyandu reduced the risk of child overweight (b= -4.34; 95% CI= -6.42 to -2.21; p<0.001).
In Table 3, ICC= 0.98%. This indicates that posyandu had a negligible contextual effect on overweight status among toddlers. The limited activeness of cadres and health personnel in handling overweight toddlers contributes to this small community-level contextual effect. Most toddler nutrition programs at posyandu still focus only on malnourished and stunted toddlers, and there is still no program from health services, community health centers, or posyandu for handling overweight toddlers.
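For a two-level logistic model, the ICC is conventionally computed on the latent scale as the level-2 (posyandu) variance divided by that variance plus the logistic residual variance π²/3. The sketch below is illustrative only; the variance value is back-calculated to reproduce the reported ICC of 0.98% and is not taken from the paper:

```python
import math

def icc_logistic(var_between: float) -> float:
    """Intraclass correlation for a two-level logistic model
    (latent-variable formulation): level-2 variance divided by
    level-2 variance plus the logistic residual variance pi^2 / 3."""
    return var_between / (var_between + math.pi ** 2 / 3)

# a posyandu-level variance of ~0.0326 reproduces the reported ICC
# (back-calculated assumption, not an estimate from the paper)
print(round(icc_logistic(0.0326) * 100, 2))  # -> 0.98 (percent)
```

An ICC this close to zero means almost all of the variation in overweight status lies between individual children rather than between posyandus.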
At the individual level, however, the multilevel analysis showed the effects of the independent variables: maternal nutritional status, exclusive breastfeeding, feeding pattern, food intake, and nutritional status monitoring at posyandu.
The effect of maternal nutritional status on child overweight
There was a significant effect of maternal nutritional status on child overweight: overweight mothers increased the risk of overweight in their children.
The result of this study is in line with studies by James et al. (2013) and Portela et al. (2015), which stated that obesity in mothers has a strong relationship with obesity in their children.
Parental nutritional status can affect obesity in toddlers because parents' eating behavior and physical activity can influence their children toward obesity, and obesity can also be inherited genetically (James et al., 2013). Obese mothers do not directly make their children fat; rather, this relationship can be caused by unhealthy parental behaviors that children imitate and that form habits leading to obesity.
The effect of exclusive breastfeeding on overweight
There was a statistically significant effect of exclusive breastfeeding on child overweight: exclusive breastfeeding reduced the risk of child overweight. This finding is consistent with Park et al. (2018), Marseglia et al. (2015), Yan et al. (2014), Nugrahani et al. (2016), and Saputri (2013). Children who did not receive exclusive breastfeeding had a 4.2 times higher risk of overweight (Saputri, 2013). Longer breastfeeding reduced the incidence of obesity in children (Yan et al., 2014). Giving formula milk and weaning food at an early age can lead to abnormal fat deposits and an increased risk of obesity in children under five (Park and Lee, 2018).
Breast milk contains nutrients that are complete and appropriate for the baby. Giving formula milk or other foods can disturb the balance of nutrient absorption, which can lead to obesity in children.
The effect of feeding pattern on child overweight
There was a negative effect of feeding pattern on overweight in children. A good feeding pattern reduced the risk of child overweight. This finding is in line with Purnama et al. (2015), Herawati et al. (2016), and Demir et al. (2017).
Parental behavior can also affect children's eating behavior, in this case through how parents feed their children (Herawati et al., 2016). If parents do not control their children's food intake, the risk of obesity in children increases (Purnama et al., 2015).
Children very easily imitate what their parents do; if parents often consume high-calorie and high-sugar foods, children will imitate this as well. The type of food consumed or available at home also depends on parental preferences. In addition, children are not yet able to independently determine what they eat, so it is the parents' duty to prepare food for them. Feeding patterns can therefore affect the incidence of obesity.

The effect of food intake on child overweight
There was a positive effect of food intake on child overweight. Higher food intake increased the risk of overweight among children under five. This finding is in line with Huang et al. (2015), Mandal et al. (2014), and Setiyaningsih et al. (2015).
Nutrient components such as calcium and fiber are very influential in suppressing obesity in children, while vitamin B, high-sugar foods, and carbohydrates can cause obesity (Huang et al., 2015). Children who consumed unhealthy foods ≥32 times/week had a 4.26 times higher risk of obesity than toddlers who consumed unhealthy foods <32 times/week (Setiyaningsih et al., 2015).
Children generally find it very difficult to eat vegetables and fruit; they prefer fried, high-sugar, and high-salt foods. This high-calorie food intake without parental control can cause obesity in children. Currently, the wide variety of snacks and fast food also leads toddlers to eat high-calorie foods.
The effect of nutritional status monitoring on overweight
There was a negative effect of nutritional status monitoring at posyandu on overweight among children under five. Regular nutritional status monitoring at posyandu reduced the risk of overweight.
Posyandu is the first place for children's nutritional status to be monitored. If posyandu is poorly utilized by parents, children with overweight nutritional status will certainly not receive any intervention.
There were still many assumptions in the community that obese toddlers were adorable and healthy, so that many parents did not consider that fat was an issue of abnormal nutritional status. Posyandu was supposed to monitor child nutritional status. | 2019-05-30T23:45:54.182Z | 2018-08-04T00:00:00.000 | {
"year": 2018,
"sha1": "5edeca79350174919b1c480399faf27c28d37aa6",
"oa_license": "CCBYNCSA",
"oa_url": "http://thejmch.com/index.php?journal=thejmch&op=download&page=article&path[]=108&path[]=111",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "def53da934685032caefdcebae1b81e49c9b7655",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Geography"
]
} |
Total arch repair for acute type A aortic dissection with open placement of a modified triple-branched stent graft and the arch open technique
Background In total arch repair with open placement of a triple-branched stent graft for acute type A aortic dissection, the diameters of the native arch vessels and the distances between 2 neighboring arch vessels did not always match the available sizes of the triple-branched stent grafts, and insertion of the triple-branched stent graft through the distal ascending aortic incision was not easy in some cases. To reduce these two problems, we modified the triple-branched stent graft and developed the arch open technique. Methods and results Total arch repair with open placement of a modified triple-branched stent graft and the arch open technique was performed in 25 consecutive patients with acute type A aortic dissection. There was 1 surgical death. Most survivors had an uneventful postoperative course. All implanted stents were in a good position and widely expanded; there was no space or blood flow surrounding the stent graft. Complete thrombus obliteration of the false lumen was found around the modified triple-branched stent graft in all survivors and at the diaphragmatic level in 20 of 24 patients. Conclusions The modified triple-branched stent graft could provide a good match with the different diameters of the native arch vessels and the various distances between 2 neighboring arch vessels, and its placement could be made much easier by the arch open technique. Consequently, placement of a modified triple-branched stent graft could be easily used in most patients with acute type A aortic dissection for effective total arch repair.
Background
Acute type A aortic dissection usually requires emergency surgical management to prevent death resulting from aortic rupture [1,2]. Although the dissection frequently involves the entire aorta, the dissected ascending aorta is the most common segment to rupture. Therefore, simple ascending aortic graft replacement is widely accepted as the conventional treatment for acute type A aortic dissection [1][2][3]. This conventional operation has improved the prognosis for the acute phase, but residual dissection in the arch and downstream aorta can still occur after conventional ascending aortic replacement, which has been widely proven to affect the long-term prognosis [4][5][6][7]. When continuous enlargement of the residual dissection occurs, survival remains in jeopardy, and a difficult reoperation is inevitable. This unsatisfactory long-term prognosis would favor simultaneous replacement of the ascending aorta and total arch in the same surgical field during the primary emergency operation [8].
Total arch replacement is very complex and highly invasive when performed with the traditional method, which makes the risk of this procedure very high in patients with acute type A aortic dissection [2]. Whether traditional total arch replacement, with its possible additional operative risk, can be justified from the viewpoint of potential long-term benefits remains controversial [9,10]. Therefore, traditional total arch replacement has not been widely accepted as the preferred surgical treatment for acute type A aortic dissection during emergency repair.
It would be desirable if one technique could effectively repair the total arch while keeping the surgical invasiveness and risk as low as possible. Recently, we developed the open triple-branched stent graft placement technique, in which total arch repair could be simply completed by inserting a triple-branched stent graft into the proximal descending aorta, arch and 3 arch vessels through the same transverse aortic incision line as the ascending aortic replacement [11]. Clinical results showed that our new technique could reduce the risk and technical difficulty of total arch repair to close to those of conventional ascending graft replacement with open distal anastomosis [11]. Therefore, this new simple technique could be an attractive alternative to traditional total arch replacement for acute type A aortic dissection. However, in our practice with this new technique, two major problems were found. First, this new technique could not be applied in most patients, because the diameters of the native arch vessels and the distances between 2 neighboring arch vessels did not always match the available sizes of the triple-branched stent grafts. Second, the arch vessel orifices and the true lumen of the descending aorta could not be clearly seen through the distal ascending aortic incision in some cases, so insertion of the triple-branched stent graft in such cases was not easy. In an effort to reduce these two problems, we modified the triple-branched stent graft into a new generation, which could provide a good match with the different diameters of the native arch vessels and the various distances between 2 neighboring arch vessels, and developed the arch open technique to make the placement much easier and safer. Here, we describe our application of this new generation of triple-branched stent graft and the arch open technique for total arch repair in patients with acute type A aortic dissection. In addition, we report our initial clinical results in 25 consecutive patients.
Patients
Between November 2012 and June 2013, 25 consecutive patients with acute Stanford type A aortic dissection underwent total arch repair with open placement of a modified triple-branched stent graft and the arch open technique. This procedure was approved by the ethics committee of Union Hospital, Fujian Medical University, and written informed consent was obtained from each patient or legal representative. There were 21 men and 4 women. The average patient age was 49.92 ± 13.04 years (range, 20 to 74 years). Preoperative diagnosis was based on electron beam computed tomography, echocardiography and magnetic resonance imaging. The primary intimal tears were located in the ascending aorta in 13 patients, in the arch in 4 patients, and in the proximal descending aorta with retrograde extension of the dissection into the arch and the ascending aorta in 8 patients.
The dissection extended to the innominate artery in 23 patients, to the left common carotid artery in 5 patients and to the left subclavian artery in 5 patients. A history of hypertension was found in 17 patients, and 11 of them did not receive effective antihypertensive treatment. Four patients had diabetes mellitus, 2 had classic Marfan syndrome and 1 had chronic renal dysfunction. There were some preoperative complications related to the aortic dissection, including moderate or severe aortic valvular regurgitation in 5 patients, cardiac tamponade in 1, transient brain ischemia in 1, and acute renal dysfunction in 3. All operations were performed within 4 hours after the diagnosis was confirmed. The average interval between the onset of pain and operation was 3.0 ± 2.3 days (range, 1 to 9 days). In this study, total arch repair for acute type A aortic dissection was performed on the basis of one of the following indications: (1) the patient was < 55 years of age; (2) the intimal tear was located in the transverse arch or proximal descending aorta and could not be resected by hemiarch replacement; (3) there was serious involvement of the arch vessels; or (4) Marfan syndrome was present.
The modified triple-branched stent graft
The modified triple-branched stent graft (conceived and designed by two of us (LWC and CL) and manufactured by Yuhengjia Sci Tech Corp Ltd, Beijing, China) consisted of a self-expandable nitinol stent and polyester vascular graft fabric. The polyester vascular graft fabric was thin and soft enough to be easily folded. Each modified triple-branched stent graft comprised a main tube graft and 3 sidearm tube grafts.
Our modified triple-branched stent graft was designed to provide a good match with the different diameters of the native arch vessels and the various distances between 2 neighboring arch vessels. For this purpose, two major modifications were developed. First, the polyester tube graft and the stent for each native arch vessel or arch were not attached together before implantation, and they were implanted separately. Second, the diameter of the sidearm polyester tube graft was designed to be bigger than that of most corresponding arch vessels, and the distance between 2 neighboring sidearm tube grafts was longer than that between 2 corresponding arch vessels of most Chinese adults. After a bigger sidearm tube graft was inserted into the corresponding smaller arch vessel, a stent with the size proportional to that of this arch vessel was selected and inserted, which resulted in longitudinal fold of the implanted bigger sidearm tube graft to match this arch vessel. When the implantation of 3 sidearm tube grafts and their stents into the corresponding arch vessels was completed, an arch stent with the size proportional to the arch size was inserted into the main tube graft. As a result, the main tube graft between two sidearm tube grafts was transversely folded to match the distance between two corresponding native arch vessels.
In this study, two types of our modified triple-branched stent graft were produced. Type 1 was designed to provide a good match only with the various distances between 2 neighboring arch vessels, and type 2 was designed to provide a good match with both the different diameters of the native arch vessels and various distances between 2 neighboring arch vessels ( Figure 1).
In both types, the tapered main tube graft was 145 mm in length, 32 mm in proximal diameter and 28 mm in distal diameter. The proximal portion of the main tube graft was unstented before implantation and designed for arch repair, while the distal portion was stented and acted as a stented elephant trunk. The distance between 2 neighboring sidearm grafts was 12 mm (longer than that between 2 corresponding arch vessels of most Chinese adults).
All three sidearm tube grafts were 3.5 mm long in both types. In type 1, all three sidearm tube grafts were stented before implantation. The first sidearm stent graft was 14 or 16 mm in diameter, and both the second and third sidearm stent grafts were 12 or 14 mm in diameter. In type 2, all three sidearm tube grafts were unstented before implantation. The first sidearm tube graft was 20 mm in diameter (bigger than most innominate arteries), and both the second and third sidearm tube grafts were 16 mm in diameter (bigger than most corresponding arch vessels).
The main tube graft and 3 sidearm grafts were individually mounted on 4 catheters and restrained by 4 silk strings.
Stents for arch or arch vessel
Stents for the arch or arch vessels were multiple rings of self-expandable nitinol wire (Yuhengjia Sci Tech Corp Ltd, Beijing, China). In each stent, the multiple rings were connected to a polyester vascular fabric felt (Figure 2). Therefore, the stent had a bare stent portion and a polyester fabric portion. The arch stent was 60 mm long and 26 to 34 mm in diameter, and the arch vessel stent was 30 mm long and 12 to 20 mm in diameter. The diameter of the stent selected was 10% to 20% bigger than the size of the corresponding landing zone [12,13].
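The sizing rule above is a simple proportional calculation. The following Python sketch (the function name and the example landing-zone value are our own illustration, not from the paper) shows how the 10%-20% oversizing range translates into a concrete stent diameter:

```python
def stent_diameter_range(landing_zone_mm, low=0.10, high=0.20):
    """Return the (min, max) stent diameter for a given landing-zone
    diameter, applying the 10%-20% oversizing rule stated in the text."""
    return landing_zone_mm * (1 + low), landing_zone_mm * (1 + high)

# A hypothetical 26 mm arch landing zone calls for a stent of roughly
# 28.6-31.2 mm, which lies within the 26-34 mm arch-stent range above.
lo, hi = stent_diameter_range(26.0)
print(round(lo, 1), round(hi, 1))
```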
Operative technique
All procedures were performed with patients under general anesthesia and cardiopulmonary bypass. The patient was placed in a supine position. The right axillary artery was exposed using subclavian incision and a median sternotomy was performed. Cardiopulmonary bypass was established by 2 venous cannulas via the right atrium and the arterial return cannula placed in the right axillary artery.
Cardiopulmonary bypass flow was maintained between 2.4 and 2.6 L · min −1 · m −2 . Myocardial protection was achieved by multiple antegrade perfusion of cold blood cardioplegic solution (4°C).
Figure 1. There were two types of our modified triple-branched stent graft. In the type 1 modified triple-branched stent graft, all 3 sidearm tube grafts and the distal portion of the main tube graft were stented, while the proximal portion of the main tube graft was unstented before implantation (A). In the type 2 modified triple-branched stent graft, only the distal portion of the main tube graft was stented, while all 3 sidearm tube grafts and the proximal portion of the main tube graft were unstented before implantation (B). The main tube graft and 3 sidearm grafts were individually mounted on 4 catheters and restrained by 4 silk strings (C).
During core cooling, the innominate and left common carotid arteries were freed from surrounding tissue and exposed as far as possible. When the patient was cooled to 32°C, the aorta was clamped just proximal to the innominate artery and transected just above the sinotubular junction. Manoeuvres such as aortic valve repair and sinus of Valsalva reconstruction were performed. The transected proximal stump of the ascending aorta was reconstructed and subsequently connected to the 1-branched Dacron tube graft (26 or 28 mm in diameter; Intergard, Intervascular, Datascope Co, Montvale, NJ).
When the rectal temperature reached 22°C, cardiopulmonary bypass was discontinued and selective antegrade cerebral perfusion via the right axillary artery was established at a rate of 10 to 15 mL · kg −1 · min −1 . After the innominate and left common carotid arteries were cross-clamped (4 cm above the arch), the distal ascending aorta was transected at the base of the innominate artery and the arch was longitudinally opened at the anterior wall (Figure 3A). Through those aortic incisions, the main tube graft was placed into the true lumen of the arch and proximal descending aorta, and then each sidearm tube graft was implanted one by one into the corresponding arch vessel (Figure 3B, C). Once the main tube graft and sidearm tube grafts were properly positioned, the restraining strings were withdrawn and the tube grafts were deployed (Figure 3D). A sidearm stent with a size proportional to that of the corresponding arch vessel was selected and anchored into each implanted sidearm tube graft (if the type 2 modified triple-branched stent graft was used), which resulted in longitudinal folding of the sidearm tube graft inside the corresponding arch vessel to match that vessel (Figure 3E, F, G). Then, a continuous 4-0 polypropylene suture was used to close the longitudinal arch incision with incorporation of the main tube graft (Figure 3H). Finally, the arch stent with a size proportional to the arch diameter was inserted and deployed into the proximal portion of the main tube graft (the bare stent portion towards the arch vessel orifices), which resulted in transverse folding of the main tube graft between two sidearm tube grafts to match the distance between the two corresponding arch vessels (Figure 3I, J). The transected distal stump incorporating the main tube graft and the polyester fabric felt of the arch stent was directly anastomosed to the distal end of the 1-branched Dacron tube graft with a continuous 4-0 polypropylene suture (Figure 3K, L).
After the air was carefully flushed out from the modified triple-branched stent graft, antegrade systemic perfusion from the branch of the 1-branched Dacron tube graft was started, and the patient was rewarmed. During rewarming, arterial banding at the base of the dissected arch vessel was applied in the patients with type 1 modified triple-branched stent graft implantation. The banding felt was a Dacron tube graft ring, 3 mm in width and 5-10% shorter in length than the size of the implanted sidearm stent graft.
Follow-up
Patients were followed up after they were discharged. They were contacted by telephone or direct interview in our department. Contrast-enhanced computed tomographic scan and echocardiographic examination were prospectively performed on the following schedule: before discharge, 3 months after the operation, and annually thereafter. The effectiveness of the open placement of a modified triple-branched stent graft was estimated by complete thrombus obliteration of the false lumen surrounding the modified triple-branched stent graft. To demonstrate the fate of the descending thoracic and abdominal aorta after surgery, the diameter of the dissected aorta at the diaphragmatic level and the diameters of both the dissected aorta and false lumen at the level of the superior mesenteric artery were collected in each computed tomographic examination, including the preoperative computed tomographic scan. Methods used to measure those diameters have been described in the literature [13].
Statistical analysis
Continuous data were expressed as mean ± SD. A repeated measures ANOVA was used to compare the diameters of the dissected aorta and false lumen before surgery, before discharge, and at 3 months after surgery. The differences over the 3 time points were compared with a 2-df test; the individual time points could then be compared with each other using a mixed-model approach if the differences were significant. All analyses were performed with SAS 9.0 software (SAS Institute Inc, Cary, NC). A value of P < 0.05 was considered significant.
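As a hedged illustration of the omnibus test described above (not the authors' SAS code), the F statistic of a one-way repeated-measures ANOVA over 3 time points can be computed by partitioning the total sum of squares into subject, time, and error components; the toy data below are invented for demonstration only.

```python
def repeated_measures_anova_f(data):
    """One-way repeated-measures ANOVA F statistic.

    data[i][j] is the measurement of subject i at time point j.
    Partitions total variation into subject, time, and error sums of
    squares; F = MS_time / MS_error with df (k-1) and (n-1)(k-1).
    """
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    subj_means = [sum(row) / k for row in data]
    time_means = [sum(row[j] for row in data) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_time = n * sum((m - grand) ** 2 for m in time_means)
    ss_error = ss_total - ss_subj - ss_time
    return (ss_time / (k - 1)) / (ss_error / ((n - 1) * (k - 1)))

# Toy data: 3 subjects, each measured at 3 time points (invented numbers).
example = [[10, 9, 8], [12, 10, 10], [11, 10, 9]]
print(repeated_measures_anova_f(example))
```

The resulting F would then be compared against the F distribution with (k-1) and (n-1)(k-1) degrees of freedom; pairwise follow-up comparisons, as in the paper, require a separate mixed-model step.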
Operative data
Placement of the modified triple-branched stent graft into the true lumen of the proximal descending aorta, arch and 3 arch vessels was technically successful in all 25 patients, and insertion of the stents for the 3 arch vessels and the arch into the corresponding tube grafts could be easily completed. The type 1 modified triple-branched stent graft was used in 23 patients and type 2 was used in 2 patients. Postoperative chest X-ray indicated that all implanted stents were well positioned and widely expanded. Complete resection or sealing of the targeted entry sites with this procedure was confirmed by intraoperative transesophageal echocardiography.
Concomitant procedures included aortic valve repair in 2 patients (not including commissural resuspension), Bentall procedure in 2 patients and sinus of Valsalva reconstruction in 21 patients.
Mortality and morbidity
In this series, there was 1 in-hospital death. This patient had preoperative cardiac tamponade and acute renal dysfunction. Although the patient had an uneventful operative course and was extubated on the 3rd postoperative day, cardiac arrest occurred on the 7th postoperative day during blood dialysis. After resuscitation, hemodynamics were stable and transesophageal echocardiographic examination showed no re-dissection or rupture of the repaired aorta. However, this patient died of multi-organ failure 10 days after the operation.
Hemostasis was not a problem in these patients. No patient required additional surgery to correct excessive postoperative bleeding. Postoperative cerebral complications included infarction in 1 patient and global temporary neurologic dysfunction in 2 patients, but they fully recovered before hospital discharge. Acute renal failure complicated postoperative care in 3 patients, with 2 requiring dialysis. No pulmonary complications occurred. The postoperative mechanical ventilation support period was 22 ± 7.9 hours (range, 15 to 48 hours) and the duration of intensive care unit stay was 2.2 ± 1.2 days (range, 1 to 6 days).
Computed tomography
Postoperative computed tomography showed that all stents were fully opened and not kinked; there was no space or blood flow surrounding the modified triple-branched stent graft. The false lumen in the arch and the descending aorta covered by the modified triple-branched stent graft closed with thrombus in all survivors (Figure 4). No significant sidearm graft stenosis or occlusion was found. Disappearance of the false lumen and recovery of the true lumen were observed in all dissected arch vessels.
At the diaphragmatic level, the false lumen of the descending aorta distal to the stent graft closed with thrombus in 20 of 24 patients on their first and second postoperative images. At this level, the aortic diameter was 29.42 ± 2.18 mm preoperatively, 27.29 ± 2.55 mm before discharge, and 26.92 ± 2.16 mm at 3 months after surgery. A significant difference in the aortic diameters at this level over the 3 time points was found (P < 0.05). Compared with the preoperative data, both aortic diameters before discharge and at 3 months after surgery were significantly reduced (P < 0.05 for each), but there was no significant difference in the aortic diameter between before discharge and at 3 months after surgery (P = 0.54). At the superior mesenteric arterial level, a patent false lumen was present in all survivors' first and second postoperative computed tomographic images. The aortic diameter at this level was 26.25 ± 2.74 mm preoperatively, 26.21 ± 2.08 mm before discharge, and 25.58 ± 2.06 mm at 3 months after surgery. There was no significant difference in the aortic diameters over the 3 time points (P = 0.63). The diameter of the false lumen was 14.71 ± 1.97 mm preoperatively, 9.0 ± 2.4 mm before discharge, and 8.5 ± 2.0 mm at 3 months after surgery. A significant difference in the diameters of the false lumen at this level over the 3 time points was found (P < 0.05). Compared with preoperative data, both diameters of the false lumen before discharge and at 3 months after surgery were significantly reduced (P < 0.05 for each), but there was no significant difference in the false lumen diameter between before discharge and at 3 months after surgery (P = 0.21).
Follow-up
All survivors were followed up to the end date of this study (September, 2013). The follow-up was 100% complete. The mean follow-up period was 5.2 ± 2.1 months (range, 3 to 10 months). During the follow-up, no severe complications related to the surgery or residual dissection were found, and there were no late deaths and no need for reoperation. All survivors resumed normal activities.
Discussion
Endovascular stent graft placement has been widely confirmed as an effective and less invasive alternative to surgical repair for acute aortic dissection [14][15][16]. In this study, we successfully applied our open modified triple-branched stent graft placement for total arch repair in 25 consecutive patients with acute type A aortic dissection. Placement of these modified triple-branched stent grafts and stents into the descending aorta, 3 arch vessels and arch could be easily completed in 2-3 minutes. Most patients had an uneventful postoperative course and were discharged from hospital without complications. Their postoperative computed tomographic scans showed that all stent grafts were fully opened and not kinked; there was no space or blood flow surrounding the modified triple-branched stent graft and no sidearm graft stenosis or occlusion. These preliminary results demonstrate that our modified triple-branched stent graft placement technique can be used in most patients with acute type A aortic dissection for effective total arch repair.
In the first generation of the triple-branched stent graft placement technique, the stent graft was inserted into the proximal descending aorta, arch and 3 arch vessels through the distal ascending transverse incision. However, insertion of the stent graft in some cases was not easy, because the arch vessel orifices and the true lumen of the descending aorta could not be clearly seen through this distal ascending incision. To reduce this problem, we introduced the arch open technique in this study. Through this arch opening, the arch vessel orifices and the true lumen of the descending aorta could be clearly seen, which resulted in easier and safer implantation of the modified triple-branched stent graft. Although the arch opening and closing process took 2-4 minutes, the stent graft implantation became easier and faster. Therefore, compared with the first generation of the technique, the time for total arch repair did not increase in our series. In type A aortic dissection, the anterior wall of the arch is usually involved by the dissection. Therefore, the arch opening and closure are frequently performed at the dissected site. After an acute dissection, the dissected arch wall is so fragile that closing the arch often results in intraoperative or postoperative hemorrhage owing to tissue tearing at the suture line [17]. In our technique, although the arch opening was directly closed with incorporation of the main tube graft without any other reinforcement, there was no problem with bleeding from the suture site either intraoperatively or postoperatively. The actions of the implanted main tube graft and arch stent may contribute to this good result. The implanted stent graft effectively approximated the dissected layers of the arch wall, securely closed the false lumen, and consequently interrupted back flow from the false lumen, which is often a source of bleeding at the suture site.
Furthermore, after the arch opening was closed with incorporation of the main tube graft, antegrade blood leakage from the suture site into the residual false lumen was completely prevented.
Theoretically, our modified triple-branched stent graft could be easily implanted through the same arch incision as in hemiarch replacement, and the Dacron prosthesis replacing the ascending aorta and hemiarch could directly connect to the modified triple-branched stent graft at this arch incision. This hemiarch replacement combined with open placement of a modified triple-branched stent graft could eliminate the arch opening and closure, obviate the need for the arch stent, and consequently appear simpler than the technique described in this study. However, we did not prefer this hemiarch replacement technique. Two major advantages of our technique over the hemiarch replacement technique contributed to our preference. First, the implanted arch stent could make the main tube graft contact the arch wall closely, shrink the false lumen and promote thrombosis of the false lumen in the dissected arch. Moreover, once bleeding occurred from the posterior suture line in hemiarch replacement, hemostasis in this deep portion would be difficult. In our technique, we performed the distal aortic anastomosis at the distal ascending aorta and the arch opening at its anterior wall, which provided a better surgical view and made hemostasis much easier.
In our practice with open placement of the first generation of triple-branched stent graft in more than 100 patients with acute type A aortic dissection, we found that most arch vessels could be easily matched by our prefabricated sidearm stent grafts; a difficult match occurred with the distances between two arch vessels and with a few dissected arch vessels with a larger false lumen and smaller true lumen. For a dissected arch vessel with a larger false lumen, it was not easy to determine the proper size of the sidearm stent graft, and an unusually large sidearm stent graft was frequently necessary. Recently, we applied banding at the bases of those dissected arch vessels. Since we routinely used the banding technique for dissected arch vessels, no sidearm stent graft endoleak or stenosis was found. This result suggested that the banding technique is an effective alternative to our modified triple-branched stent graft for achieving a good match between the sidearm stent graft and the corresponding arch vessel. Based on these findings, two types (type 1 and type 2) of our modified triple-branched stent graft were produced. In this series, type 1 modified triple-branched stent graft placement combined with the arch vessel banding technique was applied more often than type 2 placement, mainly because we believed that type 1 placement combined with arch vessel banding was simpler and had less chance of sidearm graft stenosis.
Implantation of a sidearm stent graft and folding of the sidearm polyester tube graft inside the corresponding arch vessel might produce sidearm graft stenosis or occlusion. Although no sidearm graft stenosis or occlusion was observed in our series, the long-term patency of these sidearm grafts should be carefully evaluated. Fortunately, folding of the sidearm polyester tube graft inside a smaller arch vessel could be easily avoided by using our type 1 modified triple-branched stent graft. The long-term patency of the sidearm stent grafts of the type 1 modified triple-branched stent graft is expected to be satisfactory, because simple endovascular stenting of arch vessels provides satisfactory long-term patency even in stenotic obstructive pathologies [18][19][20].
In traditional total arch replacement for acute type A aortic dissection, the elephant trunk was routinely applied to achieve a stronger distal anastomosis and to facilitate subsequent surgery on the distal aorta [21]. However, placement of the elephant trunk into the true lumen of the dissected descending aorta is difficult, and some complications, such as kinking and obstruction of the graft, embolization and paraplegia, have been found [22]. To reduce such problems, the stented elephant trunk technique was developed [14,15]. The stented elephant trunk has been proven to be an effective way of closing the residual false lumen of the descending aorta, which might contribute to better long-term outcomes for acute type A aortic dissection [23]. In our modified triple-branched stent graft, the distal part of the main graft was designed to be a stented elephant trunk. Therefore, both the scope of the repaired thoracic aorta and the outcome of the residual false lumen with our modified triple-branched stent graft placement technique should be comparable with traditional total arch replacement combined with the stented elephant trunk technique.
Recently, some other techniques have been developed to simplify total arch repair for acute type A aortic dissection. Total endovascular arch repair using a fenestrated stent graft, or using a conventional straight stent graft with arch debranching, is an effective technique for arch repair in acute aortic dissection [24]. This technique can be performed off-pump; consequently, it would be less invasive than our technique. We have also performed this technique in some patients with acute aortic dissection, and satisfactory results were obtained. However, only a normal ascending aorta can provide a proximal landing zone in this technique; therefore, it cannot be used in patients with a patent false lumen of the ascending aorta. Inspired by this total endovascular arch repair technique, some surgeons developed a new hybrid operation to achieve effective total arch repair for acute type A aortic dissection, in which the dissected ascending aorta is replaced with a Dacron tube graft under cardiopulmonary bypass with moderate systemic hypothermia, and arch vessel bypasses from the Dacron tube graft and antegrade or retrograde deployment of a conventional straight stent graft into the arch and the proximal descending aorta are performed [25]. This hybrid technique eliminates the need for deep hypothermic circulatory arrest, but arch vessel bypasses are difficult if the arch vessels are seriously involved by the dissection, and their long-term patency should be carefully evaluated. Shimamura et al. also developed an open branched endoprosthesis placement technique [26]. Although the branched endoprosthesis used in their technique seems similar to our modified triple-branched stent graft, the original idea of the design is totally different.
In Shimamura's branched endoprosthesis, the sidearm stent grafts were connected to the main graft in a side dish during the procedure, and the size of each sidearm stent graft and the distances between 2 neighboring sidearm grafts were decided by each patient's corresponding sizes, determined by measurements on preoperative computed tomography. Therefore, it cannot be commercialized. Our modified triple-branched stent graft was designed to provide a good match with the different diameters of the native arch vessels and the various distances between two neighboring arch vessels, so it can be commercialized and used in most patients.
Conclusions
The modified triple-branched stent graft could provide a good match with the different diameters of the native arch vessels and the various distances between two neighboring arch vessels, and its placement could become much easier by the arch open technique. Therefore, placement of a modified triple-branched stent graft could be easily used in most patients with acute type A aortic dissection for effective total arch repair. Rigorous long-term follow-up and further extensive clinical trials are necessary to completely evaluate the efficacy of the modified triple-branched stent graft and the arch open technique before this combined technique can become a reliable alternative to conventional total arch repair.
K-Means Genetic Algorithms with Greedy Genetic Operators
The k-means problem is one of the most popular models of cluster analysis. The problem is NP-hard, and the modern literature offers many competing heuristic approaches. Sometimes practical problems require obtaining such a result (albeit not exact), within the framework of the k-means model, which would be difficult to improve by known methods without a significant increase in the computation time or computational resources. In such cases, genetic algorithms with a greedy agglomerative heuristic crossover operator might be a good choice. However, their computational complexity makes it difficult to use them for large-scale problems. The crossover operator, which includes the k-means procedure taking the absolute majority of the computation time, is essential for such algorithms, and other genetic operators such as mutation are usually eliminated or simplified. The importance of maintaining the population diversity, in particular with the use of a mutation operator, is more significant with an increase in the data volume and available computing resources such as graphical processing units (GPUs). In this article, we propose a new greedy heuristic mutation operator for such algorithms and investigate the influence of new and well-known mutation operators on the objective function value achieved by the genetic algorithms for large-scale k-means problems. Our computational experiments demonstrate the ability of the new mutation operator, as well as the mechanism for organizing subpopulations, to improve the result of the algorithm.
Introduction
The k-means problem is a continuous unconstrained global optimization problem which has become a classic clustering model. This problem is proved to be NP-hard [1,2], so it is necessary to find a compromise between the computation time and the solution preciseness. The aim of the problem is to find a set S = {X_1, …, X_k} of k points X_1, …, X_k ∈ R^d called centroids in a d-dimensional space that minimizes the sum of squared distances from N known points (data vectors) A_1, …, A_N ∈ R^d to the nearest centroid [3]:

F(X_1, …, X_k) = Σ_{i=1}^{N} min_{j=1,…,k} L(A_i, X_j)²,  (1)

where L(·,·) is the distance between two points (usually Euclidean) and k is given.
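As a concrete reading of objective (1), the following sketch (function and variable names are ours, not the paper's) evaluates the sum of squared Euclidean distances from each data vector to its nearest centroid:

```python
def kmeans_objective(points, centroids):
    """Objective (1): sum over all data vectors of the squared Euclidean
    distance to the nearest centroid."""
    total = 0.0
    for a in points:
        # squared distance to the closest of the k centroids
        total += min(sum((ai - xi) ** 2 for ai, xi in zip(a, x)) for x in centroids)
    return total
```

For example, with points (0,0), (0,1), (10,10) and centroids (0,0.5), (10,10), the objective is 0.25 + 0.25 + 0 = 0.5.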
Data vector indexes for which the jth centroid is the nearest one form a set (cluster) C_j, j = 1,…,k. An equivalent problem setting is as follows:

minimize Σ_{j=1}^{k} Σ_{i: A_i ∈ C_j} L(A_i, X_j)²,  (2)

where X_j is the centroid of the jth cluster. The simplest and most popular local search algorithm is the k-means algorithm [4,5], also called the Alternate Location and Allocation (ALA) procedure [6,7] or Lloyd's algorithm. A similar procedure called EM (Expectation Maximization) [8,9] and its modifications [10][11][12] are the most popular algorithms for separating a mixture probability distribution. The k-means algorithm improves an intermediate solution sequentially, which enables us to find a local minimum.
Technically, this is not a true local search algorithm in terms of continuous optimization, as it searches for a new solution not necessarily in the ε-neighborhood of an existing solution. Nevertheless, it yields a solution which is locally optimal in an ε-neighborhood.
If we use distances instead of squared distances in (1), we deal with the continuous p-median problem. The similarity of these NP-hard problems [13,14] allows us to use similar approaches to solving them. However, unlike the p-median problem with Euclidean distances, finding the exact solution of a 1-means problem (the k-means problem with k = 1, or the centroid search problem) in accordance with Algorithm 1 is trivial, and finding a local minimum of the k-means problem takes fewer computational resources.
This allows the local search to be integrated into various effective global search strategies.
In the early attempts to solve the p-median problem by exact methods (its discrete modifications), the authors used a branch and bound algorithm [15][16][17] for solving very small problems. In [18][19][20], the authors reviewed various heuristic solution techniques for k-means and p-median problems. In [21][22][23], the authors presented local search approaches, including the Variable Neighborhood Search (VNS) and concentric search. In [22], Drezner et al. proposed heuristic procedures, including the genetic algorithm (GA), for rather small datasets.
Many approaches based on data reduction [24] simplify the problem by selecting some part of the initial dataset and then using these results as an initial solution for the k-means algorithm on the complete dataset [25][26][27][28]. Such aggregation, as well as reducing the number of data vectors [29], enables us to solve large-scale problems within a reasonable time. However, such approaches lead to a reduction in preciseness. In our research, aimed at obtaining the most precise solutions, we consider only the methods which estimate the objective function (1) directly, without aggregation or approximation approaches.
Modern publications offer many heuristic procedures [19,30] for setting the initial centroids for the k-means algorithm; most of them belong to various evolutionary and random search methods. Local search algorithms and their randomized versions are widely represented. For instance, Variable Neighborhood Search (VNS) algorithms [23,31,32] or agglomerative algorithms [33,34] sometimes show good results. A large number of articles are devoted to initialization procedures for local search algorithms, such as random seeding and estimating the distribution of the data vectors [30]. The challenge is that, in many cases, even multiple runs of simple local search algorithms from various randomly generated solutions do not lead to a solution that is close to the global optimum. More advanced algorithms enable us to get objective function (1) values many times better than the local search methods [32]. The use of genetic algorithms and other evolutionary approaches to improve the results of the local search is a widely used idea [35][36][37][38]. Such algorithms recombine the local minima obtained by the k-means algorithm. GAs operate with a certain set (population) of candidate solutions and include special genetic operators (algorithms) of initialization, selection, crossover, and mutation. The mutation operator randomly changes the resulting solutions and provides some diversity in the population.
However, in genetic algorithms, as the number of iterations increases, the population degenerates into a certain set of solutions close to each other. Larger populations as well as dynamically growing populations improve this situation. However, simpler algorithms based on the use of the same greedy agglomerative procedures [32,39] often show better results within the same computation time.
In this research, we do not discuss the adequacy of the k-means clustering model, which is actually questionable. We only focus on the preciseness and stability of the obtained objective function value (1) within the framework of the k-means model.
There are situations when the cost of error is high [9]. In these cases, as well as when comparing the accuracy of an algorithm with a certain standard solution (not necessarily globally optimal), we need to get a result that would be difficult to enhance by other known methods without a meaningful increase in computation time. The evolution of parallel processing systems such as graphics processing units (GPUs) makes multiple runs of local search algorithms very cheap. In this case, large-scale problems (up to several millions of data vectors) can be solved with the use of the most advanced algorithms providing the highest preciseness. As our study shows, for large-scale problems, further improvement in the results of the genetic algorithms with greedy heuristic crossover can be achieved by using a special mutation operator and partially isolated solution subpopulations. The aim of this paper is to introduce a new k-Means Genetic Algorithm with the greedy agglomerative crossover operator, a special greedy agglomerative mutation operator, and subpopulations. The rest of this article is organized as follows. In Section 2, we propose a brief overview of known approaches to the development of k-means genetic algorithms. In Section 3, we give an overview of known mutation genetic operators used in k-means genetic algorithms in accordance with various approaches to chromosome encoding, as well as other instruments for increasing the population diversity. In Section 4, we propose new modifications to the genetic algorithms with the greedy heuristic crossover operator. Such modifications include partially isolated subpopulations and the use of a new mutation operator based on the greedy heuristic procedure. In Section 5, we describe the results of our computational experiments which demonstrate the efficiency of our new modifications on large datasets.
K-Means Genetic Algorithms
The idea of various genetic algorithms is based on a recombination (interchange) of elements in a set ("population") of candidate solutions ("individuals") encoded by "chromosomes." Such elements of the chromosomes are called "genes" or "alleles." Each chromosome is a vector of genes (bits, integers, or real numbers) representing a solution. The goal of gene recombination is achieving the best value of an objective function called the "fitness function." The appearance of the first genetic algorithms for solving the discrete p-median problem [40] preceded the genetic algorithms for the k-means problem (k-Means Genetic Algorithms). Alp et al. [41] proposed a rather fast and precise algorithm with a special "greedy" (agglomerative) heuristic procedure used as the crossover genetic operator for the network p-median problem. Such algorithms solve discrete network problems and use a very simple binary chromosome encoding (1 for the network nodes selected as the centers of the clusters, and 0 for those not selected).
In the genetic algorithms for the k-means and similar problems with binary-encoded chromosomes, many mutation techniques can be used. For example, in [42], the authors represent the chromosome with binary strings composed from binary-encoded features (coordinates) of the centroids. The mutation operator arbitrarily alters one or more components (binary substrings) of a selected chromosome.
If the centers or centroids are searched for in a continuous space, some genetic algorithms still use the binary encoding [38,43,44]. In the k-means algorithm, the initial solutions are usually subsets of the dataset A_1, …, A_N. In such a chromosome code, 1 means that the corresponding data vector is selected as an initial centroid, and 0 that it is not. In this case, some local search algorithm (the k-means algorithm or similar) is used at each iteration of the GA to estimate the final value (local minimum) of the objective function (1).
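The decoding step for this binary encoding can be sketched as follows (a minimal illustration; the function name is ours): bits set to 1 pick the corresponding data vectors as the initial centroids handed to the local search.

```python
def decode_binary_chromosome(chromosome, data):
    """Decode a binary chromosome over the dataset: bit i set to 1 means
    data vector A_i is selected as an initial centroid for the local search."""
    return [data[i] for i, bit in enumerate(chromosome) if bit == 1]
```

For instance, the chromosome [1, 0, 1] over a three-vector dataset selects the first and third data vectors as initial centroids.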
In [45], the authors refer to their algorithm as "Evolutionary k-Means." However, they actually solved an alternative problem which aimed to increase the clustering stability instead of minimizing (1). Their algorithm operates with binary consensus matrices and uses two types of mutation genetic operators: cluster split (dissociative) and cluster merge (agglomerative) mutation. In [46], the chromosomes are strings of integers representing the cluster number for each of the clustered objects, and the authors solve the k-means problem with simultaneous determination of the number of clusters based on the silhouette [47] and Davies and Bouldin criteria [48], which are used as the fitness functions. Thus, in [46], the authors also solve a problem with a mathematical statement other than (1). Similar encoding is used in [37], where the authors propose a mutation operator which changes the assignment of individual data vectors to the clusters.
In [49], the authors described the mutation operator as a procedure that guarantees population diversity (variability). Usually, for the k-means and p-median problems, the mutation randomly changes one or many chromosomes, replacing some centroids [36,37]. Mutation and crossover are the most important genetic operators, playing different roles: the crossover seeks to preserve the features of parent solutions, while the mutation tries to cause small local changes in the solutions. Compared to a crossover, a mutation is usually regarded as a secondary operator with a low probability μ [50]. A high frequency of mutations makes a genetic algorithm search randomly and chaotically. Nevertheless, many studies have shown that evolutionary algorithms without a crossover can work better than a standard genetic algorithm if the mutation is combined with an effective selection operator [51][52][53][54]. Mutation is performed on a single parent solution.
In [36], the authors encode the solutions (chromosomes) in their GA as sets of centroids represented by their coordinates (real vectors or arrays). The genetic algorithms with the greedy heuristic crossover operator use the same principle [55].
Thus, various genetic algorithms for the k-means and similar problems can be classified into three categories in accordance with the chromosome encoding method: (a) Integer encoding: each gene represents a data vector A_1, …, A_N, and its value is the cluster number. Such algorithms are declared for solving the k-means or p-median problem; however, they may use an objective function other than (1). A local search for the minimum of (1) is sometimes declared to be their mutation operator.
(b) Integer or binary encoding: each gene corresponds to a centroid (cluster) and gives the index (from 1 to N) of the data vector selected as the initial centroid for the local search method. Such algorithms may use a wide variety of crossover and mutation operators. (c) Real (direct) encoding: each gene is a centroid encoded by its coordinates. Such algorithms are able to demonstrate the most precise results. However, the modern literature offers a very limited variety of mutation operators for such algorithms. Usually, they do not use any mutation [38,41,43].
The greedy heuristic crossover operator can be described as a two-step algorithm. The first step combines two known ("parent") solutions (chromosomes) into one intermediate invalid solution with an excessive number of centroids (clusters). At the second step (the greedy agglomerative procedure), the algorithm removes excessive centroids, at each iteration choosing the removal that results in the least significant growth of the objective function (1) [41,43]; see Algorithm 2.
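The two steps above can be sketched as follows. This is a simplified illustration (names are ours): the paper's Algorithm 2 also reruns the kMeans procedure during elimination, which we omit here for brevity.

```python
def greedy_agglomerative_crossover(parent1, parent2, k, data):
    """Two-step greedy heuristic crossover (a sketch):
    step 1: merge both parents' centroid sets into one oversized solution;
    step 2: repeatedly drop the centroid whose removal increases the
            objective (1) the least, until only k centroids remain."""
    def objective(centroids):
        # sum of squared distances from each data vector to its nearest centroid
        return sum(
            min(sum((a - x) ** 2 for a, x in zip(p, c)) for c in centroids)
            for p in data
        )

    solution = list(parent1) + list(parent2)   # step 1: up to 2k centroids
    while len(solution) > k:                   # step 2: greedy elimination
        best = min(range(len(solution)),
                   key=lambda i: objective(solution[:i] + solution[i + 1:]))
        del solution[best]
    return solution
```

On a toy dataset with two well-separated clusters, the procedure keeps one centroid near each cluster and discards the redundant ones.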
Algorithms 3 and 4 are known heuristic procedures [32,41,43] which implement the first step of the greedy heuristic crossover operator and then run the Greedy heuristic procedure.
(1) For each centroid X_i, i = 1,…,k, define its cluster C_i ⊂ {A_1,…,A_N} as the subset of data vectors having closest centroid X_i.
ALGORITHM 1: kMeans().
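A minimal sketch of the kMeans() (Lloyd/ALA) procedure described above may help: it alternates the assignment step of Algorithm 1 with recomputing each centroid as the mean of its cluster (names and the iteration cap are our illustrative choices).

```python
def kmeans(data, centroids, iters=100):
    """Lloyd's algorithm (a sketch of the kMeans() procedure): alternate
    assignment of data vectors to nearest centroids and centroid update."""
    centroids = [tuple(c) for c in centroids]
    for _ in range(iters):
        # assignment step: cluster C_i = data vectors whose nearest centroid is X_i
        clusters = [[] for _ in centroids]
        for p in data:
            i = min(range(len(centroids)),
                    key=lambda j: sum((a - x) ** 2 for a, x in zip(p, centroids[j])))
            clusters[i].append(p)
        # update step: move each centroid to the mean of its cluster
        new = [tuple(sum(col) / len(c) for col in zip(*c)) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:    # converged to a local minimum of (1)
            break
        centroids = new
    return centroids
```

Starting from centroids (1,1) and (9,9) on two small clusters, the procedure converges to the cluster means in one update.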
Mathematical Problems in Engineering

These algorithms can be included in various global search strategies. Combining items (centroids) of a solution S′ with the items of another solution S″ and running Algorithm 1, we get a set of "child" solutions. These solutions are used as the neighborhoods in which a better solution is sought. Thus, the second solution S″ is a parameter of the neighborhood [32]. The general framework of the GA for the k-means and similar location problems can be described as Algorithm 5. The objective function F_fitness is (1). We used the tournament selection (tournament replacement, see Algorithm 6) for Step 10 of Algorithm 5. Such algorithms usually operate with a very small population, and other selection procedures do not improve the results significantly [41,43,46].
In the GAs with greedy heuristic crossover [43,44], Algorithms 2 and 3 are used as the crossover genetic operator. These operators are computationally expensive due to multiple runs of the kMeans algorithm. In the case of large-scale problems and a very strict time limitation, GAs with the greedy heuristic crossover operator perform only a few iterations. The population size is usually small, 10-25 chromosomes. Dynamically growing populations [43,44] are able to improve the results. In this case, Step 7 of Algorithm 5 is replaced by the following procedure (see Algorithm 7). Thus, in this paper, we intend to improve the GAs with the greedy heuristic crossover operator, which can be described as follows [43,46]:
k-GA-ONE: GA framework (Algorithm 5) with Greedy ONE as the crossover operator, tournament selection (Algorithm 6), dynamic population size adjustment (Algorithm 7), and an empty mutation operator.
k-GA-FULL: the same, but with the Greedy FULL crossover operator.
k-GA-RND: the same, but the crossover operator, Greedy FULL or Greedy ONE, is selected randomly with equal probability.
The empty mutation operator can be replaced with a known or new procedure described in Section 3.
Known Methods of Increasing Population Diversity in the Genetic Algorithms
Despite the widespread use of various genetic algorithms for the k-means problems in the modern literature, there is practically no systematization of the approaches used [56][57][58][59].
For various methods of chromosome encoding, various mutation operators have been developed: bit inversion for binary encoding [50]; exchange, insert, inverse, and offset permutation [60] for variable-length chromosomes; Gaussian mutation [61]; and polynomial mutation for real coding [62,63].

[Listing residue from Algorithms 2 and 4 (Greedy ONE): F is the objective function (1); select a subset S_elim ⊂ S of n_elim centroids with the minimal values of the corresponding variables F″_{i′}; merge S′ and one item of S″: S ← S′ ∪ {X″_{i′}}, S_{i′} ← Greedy(S); return the best of solutions S_1,…,S_p. ALGORITHM 4: Greedy ONE.]

Some studies suggest a combination of mutation operators [64] or self-adaptive mutation operators [65][66][67]. The efficiency of various mutation operators depends on the GA parameters [53,68,69] and the problem type [70,71]. However, the number of mutation operators with real encoding for continuous problems is very limited. The GA for the network p-median problem described in [72] includes the hypermutation operator, which consists in an attempt to replace each gene in the chromosome with each gene from the set of genes that were not originally part of the processed chromosome. After each replacement, the algorithm checks for an improvement of the objective function value. The operator is computationally expensive due to numerous evaluations of the objective function and is actually similar to the local search principle embedded in the j-means algorithm [73]. In [74], the hypermutation algorithm was further developed as the nearest four neighbors' algorithm.
The idea is to reduce computational costs by reducing the set of genes used for the replacement to the nearest neighbors of the gene being replaced. In several works [37,75,76], the authors propose using the kMeans algorithm as a mutation operator.
Each of these algorithms declares a local search as a mutation operator. The GA framework allows us to use a wide variety of genetic operator options. However, the local search is designed to improve an arbitrary solution by transforming it into a local optimum, thereby reducing, rather than increasing, the variety of chromosomes (solutions).
In [36,42], the mutation operator is as follows (uniform random mutation). Randomly generate r ∼ U[0, 1). If r < μ (where μ is the mutation probability), then the chromosome mutates. Randomly generate b ∼ U[0, 1). If the current position of a centroid is X_j = (x_{j,1}, …, x_{j,d}), the mutation operator shifts its coordinates by a random amount determined by b. Signs "+" and "-" are used with the same probability [42]. This mutation operator shifts the centroid coordinates randomly.

[Listing residue from Algorithm 5 (GA with real chromosome encoding for the k-means and p-median problems): Require: initial population size N_POP (in our experiments, N_POP = 10). … (5) return solution S_{i*} from the population with the minimal value of f_{i*}; (6) end if; (7) Selection: randomly choose two indexes i_1, i_2 ∈ {1,…,N_POP}, i_1 ≠ i_2; (8) run the chosen crossover operator: S_C ← Crossover(S_{i_1}, S_{i_2}); (9) run the chosen mutation operator: S_C ← Mutation(S_C); (10) run the chosen procedure to replace a solution in the population; (11) end loop.]

Mathematical Problems in Engineering

A similar technique with an "amplification factor" was used in [44,77]. However, the local minima distribution among the search space is not uniform [49]: new local minima of (1) can be found with higher probability in some neighborhood of a known local minimum than in a neighborhood of a randomly chosen point (here, by a neighborhood, we do not necessarily mean an ε-neighborhood but any subset of solutions which can be obtained by applying some defined procedure to the current solution). Combining local minima (subsets of centroids from two locally minimal solutions) must usually outperform the random shift of the centroid coordinates. The idea of combining local minima is the basic idea of the greedy heuristic crossover operator in genetic algorithms [38,43] and other algorithms [21].
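The uniform random mutation described above can be sketched as follows. This is only an illustration: the shift magnitude 2·b·|v| is our assumption, since the exact formula of [36,42] is lost in the extraction; only the probability test against μ, the random factor b, and the equiprobable signs come from the source.

```python
import random

def uniform_random_mutation(centroids, mu):
    """Uniform random mutation (sketch): with probability mu, shift every
    coordinate of a centroid by a random amount with '+' and '-' equally
    likely; the magnitude 2*b*|v| is an illustrative choice only."""
    result = []
    for x in centroids:
        if random.random() < mu:                 # r ~ U[0,1); mutate if r < mu
            b = random.random()                  # b ~ U[0,1)
            x = tuple(v + random.choice((-1.0, 1.0)) * 2 * b * (abs(v) if v != 0 else 1.0)
                      for v in x)
        result.append(tuple(x))
    return result
```

With μ = 0 the chromosome is returned unchanged; with μ = 1 every centroid is shifted.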
The greedy heuristic crossover operator for the discrete p-median problem proposed in [41] and adapted for the continuous p-median and k-means problems in [38,43] was used in GAs without any mutation operator. Such algorithms demonstrate more accurate results in comparison with many other algorithms for practically important middle-size problems. The other common approach to increasing the diversity in a population is to create subpopulations that develop more or less autonomously. Algorithms that produce subpopulations containing individuals gathered around optima are a wide class of such methods. The fitness sharing method [78] allows the evolutionary algorithm to search simultaneously in different areas (niches) corresponding to different local (or global) optima, i.e., this method allows one to identify and localize multiple optima in the search space. The group of crowding methods [79][80][81] also uses a niche approach. The general concept of crowding is for individuals to fight for survival with similar offspring and to apply tournament selection to a high-likeness parent-child pair. The main idea of genetic chromodynamics [82] is to force the formation and maintenance of stable subpopulations. The proposed scheme of local interaction provides stabilization of the subpopulations in the early stages of the search. Subpopulations co-develop and converge to several optimal solutions.
In [83], the authors present the roaming optimization method. By using subpopulations developing in isolation, multiple optima are found. This method uses the tendency of evolutionary algorithms toward premature convergence, turning this disadvantage into an advantage in the process of detecting local optima.
New Modifications to the Genetic Algorithms
The essence of our new mutation operator (greedy heuristic mutation, GHM) is as follows. We apply the crossover operator to a single parent chromosome and a randomly generated chromosome improved by the kMeans algorithm. In Step 9 of Algorithm 5, the mutation operator is replaced with Algorithm 8.
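The GHM idea can be sketched as follows (a minimal illustration, with the crossover and local search passed in as parameters rather than fixed, since the paper combines GHM with its greedy agglomerative crossover and kMeans):

```python
import random

def greedy_heuristic_mutation(parent, k, data, crossover, local_search):
    """Greedy heuristic mutation (GHM, a sketch): cross the parent
    chromosome with a randomly generated solution that has first been
    improved by the local search (kMeans) procedure."""
    random_solution = local_search(random.sample(data, k))  # random start -> local minimum
    return crossover(parent, random_solution)
```

In the full algorithm, `crossover` would be the greedy agglomerative crossover (Algorithm 2) and `local_search` the kMeans procedure (Algorithm 1).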
Despite the small populations in the genetic algorithms with greedy agglomerative crossover, the application of a simple approach with two subpopulations allows us to improve the result of the algorithm. In our research, within the population, we organized two subpopulations of equal size. For the crossover and tournament, both chromosomes are mainly selected within the same subpopulation. If one of the subpopulations does not provide an improvement in solutions during a certain number of iterations and its record (the best solution) is inferior to the record of the second subpopulation, its individuals are replaced by new ones (reinitialization of the subpopulation). We assumed that chromosomes in the same subpopulation tend to develop in a similar way under the influence of the crossover. Mutation of a separate chromosome increases the population diversity; however, under the influence of the crossover, the differences are gradually levelled. Reinitialization of a subpopulation is a substitute for a complete restart of the algorithm while maintaining the record. Thus, Step 7 of Algorithm 5 (selection) is transformed into Algorithm 9.
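The reinitialization rule described above can be sketched as follows (names and the stagnation bookkeeping are our illustrative choices; the source only specifies the condition: no improvement for a number of iterations and a record inferior to the other subpopulation's):

```python
def maybe_reinitialize(subpops, fitness, stagnation, patience, new_individual):
    """Subpopulation maintenance (a sketch): if a subpopulation has not
    improved for `patience` iterations and its record (best, i.e., minimal
    objective) is worse than the other subpopulation's record, replace
    its individuals with freshly generated ones."""
    records = [min(fitness(s) for s in pop) for pop in subpops]
    for i, pop in enumerate(subpops):
        other_record = records[1 - i]
        if stagnation[i] >= patience and records[i] > other_record:
            subpops[i] = [new_individual() for _ in pop]  # reinitialize this subpopulation
            stagnation[i] = 0
    return subpops
```

Here, `fitness` plays the role of objective (1), and the globally best solution (the record) would be kept separately, as in a restart that preserves the record.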
An additional step is added to Algorithm 5 (see Algorithm 11). The idea of the Variable Neighborhood Search with randomized neighborhoods (see [32]) is also based on applying the greedy heuristic procedures (Algorithms 2 and 3) to a current solution and a randomly generated one transformed into a local minimum by Algorithm 1. Our computational experiments (see Section 5) show that the new genetic algorithms with GHM as the mutation operator outperform both the original genetic algorithms with the greedy agglomerative crossover operator (Algorithm 4 with empty mutation) and the Variable Neighborhood Search with randomized neighborhoods.
As mentioned before, the greedy agglomerative crossover operator is computationally expensive. In Algorithm 2, the objective function calculation F″_{i′} ← F(S′) is performed more than (K − k) · k times. Therefore, such algorithms are traditionally considered as methods for solving comparatively small problems (hundreds of thousands of data points and hundreds of centers). However, the rapid development of massive parallel processing systems (GPUs) allows us to solve large-scale problems with reasonable time expenses (minutes).
One of the most important issues of the GAs is the convergence of the entire population into some narrow area (population degeneration) around some local minimum. On the first crossover iterations, the "child" solutions usually have significant advantages in the objective function value in comparison with their "parents" due to the ability of the greedy agglomerative crossover operator to choose much better solutions in comparison with the k-means procedure. On a single central processor unit, such GAs manage to perform only a few crossover operations due to the computationally expensive Greedy(), and the population diversity problem is not important. Our computational experiments show that, with an increase in the computational capacities and an increase of the population size (which grows dynamically with the iteration number), the mutation operator plays a more important role.
Computational Experiments
Parallel (CUDA) implementations of the kMeans() algorithm are known [84,85], and we used this approach in our experiments. All other algorithms were implemented on the central processor unit. For our experiments, we used the classic datasets from the UCI and Clustering basic benchmark repositories: (a) Individual Household Electric Power Consumption (IHEPC): energy consumption data of households during several years (more than 2 million data vectors, 7 dimensions), 0-1 normalized data, with the "date" and "time" columns removed.
(b) SUSY (5 · 10^6 data vectors, 18 dimensions), 0-1 normalized data. Here, we do not take into account the true labelling provided by the database and use this dataset to search for internal structure in the data. (d) BIRCH3 [10]: groups of points of random size on a plane (100,000 data vectors, 2 dimensions). (Tables 1-6).
For comparison, we used the genetic algorithms with greedy heuristic crossover (k-GA-FULL, k-GA-ONE, and k-GA-RND described in Section 2) as well as the kMeans procedure in the multistart mode and the j-Means algorithm (centers are replaced with data vectors) [73]. In addition, we ran various Variable Neighborhood Search (VNS) algorithms with randomized neighborhoods formed by the greedy heuristic procedure [32]; see algorithms k-GH-VNS1 and k-GH-VNS2. For algorithms launched in the multistart mode (j-Means and kMeans), only the best results achieved in each attempt were recorded. The minimum, maximum, average, and median objective function values and the standard deviation are summarized over 30 runs. For all algorithms, we used the same realization of the kMeans procedure, which consumes the absolute majority of the computation time. The initial population size for all genetic algorithms consisted of N_POP = 10 chromosomes.
All algorithms were classified into three groups. The first group consists of known algorithms, including the genetic algorithms with greedy heuristic crossover. Algorithms of the second group are the genetic algorithms with greedy heuristic crossover and known mutation operators (k-GA-xxx-m1 for uniform random mutation and k-GA-xxx-m2 for scramble mutation [86], where a gene (centroid) is replaced with a randomly chosen data point). We performed our experiments with various values of the mutation probability μ. Algorithms of the third group are genetic algorithms with greedy agglomerative crossover and new instruments for maintaining the population diversity: k-GA-xxx-GHM are algorithms with the new GHM mutation operator, and k-GA-xxx-SUBPOP are algorithms with the new GHM mutation operator and two subpopulations.

Note (for all tables): "↑⇑" denotes that the advantage of the best algorithm in this group over the known algorithms (group A) is statistically significant ("↑" for the t-test and "⇑" for the Mann-Whitney U test); "↓⇓" denotes that the disadvantage of the best algorithm in this group over the known algorithms is statistically significant; "↕⇕" denotes that the advantage or disadvantage is statistically insignificant. The significance level is 0.99.
In each group of algorithms, the best average and median values of the objective function (1) are underlined. We compared the best algorithms in the second and third groups with the best algorithm in the first group (the best of the known algorithms) using the t-test and the Mann-Whitney U test.
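The kind of comparison involved can be sketched with Welch's two-sample t statistic over the per-run objective values (a minimal stdlib illustration; in practice one would use library routines such as SciPy's `ttest_ind` and `mannwhitneyu` to obtain p-values):

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic: compares the mean objective values
    of two algorithms over repeated runs without assuming equal variances."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    return (mean(sample_a) - mean(sample_b)) / (va / na + vb / nb) ** 0.5
```

A positive statistic indicates a larger mean in the first sample (i.e., a worse average objective value when minimizing), and its magnitude is compared against the chosen significance level.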
In the comparative analysis of algorithm efficiency, the choice of the unit of time plays an important role. The astronomical time spent by an algorithm strongly depends on the peculiarities of its implementation, the ability of the compiler to optimize the program code, and the fitness of the hardware for executing the code of a specific algorithm. Algorithms are often estimated by comparing the number of iterations performed (for example, the number of population generations for a GA) or the number of evaluations of the objective function. In our case, some of the algorithms are not evolutionary, and in genetic algorithms, the execution time of the crossover operator with the embedded kMeans algorithm can differ by hundreds of times. Therefore, comparing the number of generations is unacceptable. Comparison of the number of objective function calculations is also not quite correct. Firstly, the kMeans algorithm, which consumes almost all of the processor time, does not calculate (1) directly. Secondly, during the operation of the greedy agglomerative crossover operator, the number of centroids changes (decreases from 2k down to k or from k + 1 down to k), and the time spent on computing the objective function also varies. Therefore, we nevertheless chose astronomical time as the scale for comparing algorithms. Moreover, all the algorithms use the same implementation of the kMeans algorithm launched under the same conditions.
In our computational experiments, a time limit was used as the stop condition for all algorithms. As can be seen from Figures 1 and 2, the result of each algorithm depends on the elapsed time. Nevertheless, the advantage of the new algorithms persists regardless of the chosen time limit. The range of values in all tables is small; nevertheless, the differences are statistically significant in several cases. In all cases, the new algorithms with the greedy heuristic mutation outperform the known ones or demonstrate approximately the same efficiency (the difference in the results is statistically insignificant). Moreover, the new algorithms demonstrate stability of results (a narrow range of objective function values). In most cases, the best results were achieved by the genetic algorithms with nonempty mutation operators.
Conclusions
When solving some large-scale clustering problems, traditional local search algorithms often give a result very far from the optimal solution. In this research, we aimed to develop not only a fast but also a highly accurate algorithm, based on genetic algorithms with a greedy heuristic crossover operator, for solving related optimization problems. Methods for obtaining solutions in a fixed time that would be difficult to improve by known methods without a significant increase in computational costs include genetic algorithms with a greedy agglomerative crossover operator. As the computational results presented in this article show, further improvement in the achieved results of such algorithms is possible by increasing the diversity in their populations.
Computational experiments show that population diversity maintaining mechanisms such as a mutation genetic operator and subpopulations improve the performance of genetic algorithms with greedy heuristic crossover for the large-scale k-means problem. Moreover, the best results can be achieved by algorithms with a mutation operator based on the greedy heuristic crossover operator applied with a randomly generated chromosome (the new greedy heuristic mutation). The similarity in the mathematical formulations of the k-means, k-medoids, and p-median problems, as well as the problem of separating a mixture of probability distributions, gives us reasonable hope for the applicability of similar approaches to improving the results of solving those problems, which determines possible directions for further research.
Data Availability
In our work, we used only data from the UCI Machine Learning and Clustering Basic Benchmark repositories, which are available at https://archive.ics.uci.edu/ml/index.php and http://cs.joensuu.fi/sipu/datasets.
"year": 2020,
"sha1": "22c7ebbbe7df163884cf97d3a2385d3b2b9616e5",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/mpe/2020/8839763.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "8977e5445cefc816bad2f2644ef07bbed0b50eea",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
The Effect of Green Coffee Supplementation on Lipid Profile, Glycemic Indices, Inflammatory Biomarkers and Anthropometric Indices in Iranian Women With Polycystic Ovary Syndrome: A Randomized Clinical Trial
Polycystic ovary syndrome (PCOS) is a heterogeneous clinical syndrome. Recent studies examine different strategies to modulate its related complications. Chlorogenic acid, a bioactive component of green coffee (GC), is known to have great health benefits. The present study aimed to determine the effect of GC on lipid profile, glycemic indices, and inflammatory biomarkers. Forty-four PCOS patients were enrolled in this randomized clinical trial, of whom 34 completed the study protocol. The intervention group (n = 17) received 400 mg of GC supplements, while the placebo group (n = 17) received the same amount of starch for six weeks. Then, glycemic indices, lipid profiles, and inflammatory parameters were measured. After the intervention period, no significant difference was observed in fasting blood sugar, insulin level, homeostasis model assessment of insulin resistance index, low-density lipoprotein, high-density lipoprotein, or interleukin 6 or 10 between the supplementation and placebo groups. However, cholesterol and triglyceride serum levels decreased significantly in the intervention group (p < 0.05). This research confirmed that GC supplements might improve some lipid profiles in women with PCOS. However, more detailed studies with larger sample sizes are required to prove the effectiveness of this supplement.
INTRODUCTION
Polycystic ovary syndrome (PCOS) is a complicated endocrine disorder without a specific etiology, affecting 4-8 percent of women of reproductive age, and results in menstrual dysfunction, hirsutism, ovulatory infertility, and some metabolic morbidity [1][2][3]. Most studies have focused on the development of abdominal obesity, insulin resistance and hyperinsulinemia, glucose intolerance, increased risk of type 2 diabetes mellitus (DM2), dyslipidemia (hypertriglyceridemia and low high-density lipoprotein (HDL) cholesterol), and hypertension, which can elevate the risk of atherosclerosis in women with PCOS [4][5][6].
Inflammation, as another characteristic of PCOS, has been gaining importance recently. Low-grade chronic inflammation is strongly associated with hyperandrogenism and can result in ovarian dysfunction and metabolic aberration [7]. Some evidence suggests a direct stimulatory role of inflammation in excess ovarian androgen production [8,9]. Since PCOS is affected by obesity, insulin resistance, and inflammation, dietary interventions and behavioral changes can be proper strategies. Exercise, high-protein moderate-carbohydrate diets, weight loss, anti-obesity pharmacologic agents, and bariatric surgery are different approaches to managing PCOS [10]. Moreover, antioxidants and anti-inflammatory agents have been shown to improve metabolic conditions related to PCOS, especially insulin resistance [11,12]. Medical management, such as contraceptive pills, metformin, and hormone therapy, has been offered to PCOS patients; however, lack of benefit in some circumstances and side effects have led researchers to pursue other effective therapeutic strategies [13]. Recently, complementary medicine and nutritional supplements have gained increasing attention as therapeutic strategies for chronic disease with high efficacy and fewer side effects [14][15][16][17][18][19][20][21][22][23][24]. Green coffee (GC), which belongs to the coffee genus (family Rubiaceae), is a significant source of chlorogenic acids (CGA) and has different biological effects. Recent evidence in humans and animals has demonstrated vasoreactivity improvement, an antihypertensive effect, body weight loss, and modulation of glycemic indices from GC bean extract [25][26][27]. For instance, in a clinical trial, Roshan et al. examined the effect of GC (800 mg/day) in patients with metabolic syndrome for eight weeks [28]. They observed a positive impact of GC on some metabolic syndrome components such as high fasting blood glucose, insulin resistance, and abdominal obesity [28].
Another clinical trial compared the administration of 40 g/day of green or black coffee in healthy subjects [29]. A significant reduction was observed in body weight and body mass index (BMI) with GC compared with black coffee. Also, waist circumference and abdominal fat were reduced after both interventions [29]. Considering the role of GC in pathways involved in the pathogenesis of PCOS, including improving insulin resistance [30] and blood sugar [30], reducing weight [31,32], and exerting anti-inflammatory [33] and antioxidant [34] effects, and given the lack of a study examining these effects in patients with PCOS, this study aimed to evaluate the effect of GC supplementation on lipid profile, glycemic indices, and inflammatory biomarkers.
Study design
The study was a double-blind, randomized clinical trial (RCT), which was approved by the Ethics Committee of Biomedical Research, Islamic Azad University, Science and Technology Branch. The study was registered in the Iranian Registry of Clinical Trials (IRCT), available at http://www.irct.ir (ID: IRCT20180808040745N1). Among the patients whose gynecologist diagnosed and confirmed their disease according to established standards, forty-four women with PCOS who met the inclusion criteria were enrolled in the study. Participants were divided into intervention and placebo groups by a randomization block design. The allocation sequence was generated by random allocation software (RAS) (Microsoft Visual Basic 6, http://www.msaghaei.com/Softwares/dnld/RA.zip, latest version). To ensure blinding in the evaluation process, the patients were allocated to the intervention groups by a person who was not involved in the current study. The researchers and patients were unaware of each group's intervention type.
Study sample
Inclusion criteria were women with PCOS aged 20-40 years who were willing to participate in the study. Exclusion criteria were: allergy or intolerance to GC supplements; use of steroidal or non-steroidal anti-inflammatory drugs; thyroid and kidney disorders; use of nutritional supplements other than calcium, iron, and folic acid; unwillingness to continue participation during the study; acute disorders during the study period; exposure to acute and severe stress during the study; or pregnancy.
Study intervention
After a thorough explanation of the purpose and methods of the study, informed consent was obtained from all participants. Patients were randomly divided into two groups of intervention and control based on a randomized block design. Each patient in the experimental group received one tablet of 400 mg GC supplement and those in the control group received the same amount of placebo (starch) daily for 6 weeks.
Sample size
The sample size in this study, based on the changes in interleukin (IL)-6 levels [35], was calculated as 16 subjects in each group. Considering 30 percent dropouts, a total of about 44 participants were included in this study.
Anthropometric indices, physical activity, and dietary intake
At the beginning and at the end of the study, the International Physical Activity Questionnaire was employed to evaluate the physical activity of patients. Dietary intake was assessed using a food record questionnaire (covering two weekdays and one weekend day) at baseline and at the end of week 6. Height and weight were recorded with an accuracy of 0.1 cm and 0.1 kg, respectively. BMI was calculated for each patient at the beginning and the end of the study as weight in kilograms divided by the square of height in meters (kg/m²). After measuring waist and hip circumference with a tape measure with 0.1 cm accuracy, the waist-to-hip ratio (WHR) was calculated by dividing these two values.
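The two anthropometric indices above are simple ratios; a minimal sketch of the calculations (helper names are ours):

```python
def bmi(weight_kg, height_m):
    # BMI = weight (kg) divided by the square of height (m), in kg/m^2.
    return weight_kg / height_m ** 2

def waist_to_hip_ratio(waist_cm, hip_cm):
    # WHR = waist circumference divided by hip circumference
    # (both measured in the same unit, here centimeters).
    return waist_cm / hip_cm
```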
Assessment of appetite
The short form of the Council of Nutrition Appetite Questionnaire (CNAQ) is called the Simplified Nutritional Appetite Questionnaire (SNAQ). Validation analysis of the questionnaires revealed that the SNAQ is preferable for clinical application due to its brevity and reliability. The SNAQ comprises four items arranged in a single domain. https://doi.org/10.7762/cnr.2022.11.4.241
Statistical analysis
In this study, data were analyzed using SPSS software (version 21; SPSS Inc., Chicago, IL, USA). Quantitative data are reported as mean ± standard deviation (SD), and qualitative data are presented as frequency and percentage. The Kolmogorov-Smirnov test was applied to assess the normality of the data. Analysis of covariance (ANCOVA) was used to identify differences between the two groups after adjusting for confounding variables (age and BMI). To examine differences within each group between baseline and after six weeks, paired t-tests were used. An independent t-test was used to compare the two groups at the beginning of the study. A p-value less than 0.05 was considered significant.
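The analysis was run in SPSS; purely as a hedged illustration, the core of the between-group ANCOVA (a group effect adjusted for age and BMI via a linear model) and the within-group paired t-test could be sketched with NumPy/SciPy as follows (function names are ours, not from the study):

```python
import numpy as np
from scipy import stats

def ancova_group_effect(post, group, covariates):
    # Group coefficient of the linear model post ~ 1 + group + covariates:
    # the treatment effect adjusted for confounders (age, BMI), which is
    # the quantity ANCOVA tests between the two groups.
    X = np.column_stack([np.ones(len(post)), group, covariates])
    beta, *_ = np.linalg.lstsq(X, post, rcond=None)
    return beta[1]

def within_group_change_pvalue(baseline, week6):
    # Paired t-test comparing baseline vs. week-6 values within one group.
    return stats.ttest_rel(baseline, week6).pvalue
```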
RESULTS
After six weeks of GC/placebo supplementation, 34 patients out of 44 completed the study, and the following results were obtained (Figure 1). Table 1 shows the distribution of qualitative and quantitative variables, including education, occupational status, age, and duration of disease. Seventy percent of those in the intervention group and 64 percent in the control group had a university degree. Moreover, 29 percent of subjects in the GC group and 35 percent in the control group were housewives. The mean age of the women was 27 (± 5.22) years. The duration of illness was six years in the GC group and eight years in the control group. There was no significant difference in any of these factors between the two study groups.
Effect of GC on glycemic indices
An independent t-test was used to compare the mean fasting glucose level, blood insulin level, and insulin resistance in the two groups. In the GC and control groups, the mean at baseline was compared to the mean at week 6, which showed a significant increase in FBS (p = 0.01 and p < 0.001, respectively). According to the ANCOVA tests, fasting glucose, insulin levels, and insulin resistance were not significantly different between the placebo and treatment groups (Table 2).
Effect of GC on lipid profile
The levels of lipid biomarkers are shown in Table 3 for both groups before and after the intervention. There was no significant difference between the two groups, or within either group before and after the supplement therapy, in terms of low-density lipoprotein (LDL) and HDL. However, the serum concentrations of triglyceride (TG) and cholesterol decreased significantly in the GC intervention group compared with the placebo group (p < 0.05).
Effect of GC on inflammatory biomarkers
At week 6, the ANCOVA test was used to compare the mean levels of IL-6 and IL-10, which did not show a significant difference between the two groups. In the GC and placebo groups, the mean level at baseline was compared to the mean at week 6, which showed no significant difference (Table 4).
Effect of GC on anthropometric indices
The mean and standard deviation of BMI, waist circumference, hip circumference, and WHR, and their changes in the two study groups, are reported in Table 5. BMI and waist circumference were slightly reduced in both the placebo and treatment groups, which was not significant. Hip circumference at the beginning and end of the study was almost identical. The results of covariance analysis also showed no significant difference between the two groups before and after the study. Also, the WHR increased slightly at the end of the study in the control group, whereas in the treatment group it decreased minimally (p > 0.05).
Effect of GC on appetite
There was no significant effect of GC on appetite between the supplement and placebo groups.
DISCUSSION
The present study was a double-blind, randomized clinical trial with 34 participants, in which the intervention group received GC for 6 weeks. The main finding of this study was the hypotriglyceridemic and hypocholesterolemic effect of GC due to its bioactive components.
This study showed no significant change in glycemic indices between the groups after the treatment. In line with our findings, Kondo et al. found that drinking caffeinated or decaffeinated coffee had no influence on FBS or insulin levels [38]. In contrast with our findings, Morvaridi et al. [39] reported that drinking GC has a positive impact on adults' glycemic indices and cardio-metabolic risk factors. In addition, findings from several studies have identified the anti-diabetic effect of CGA. For example, consuming 3-4 cups of decaffeinated coffee with a high CGA content daily can significantly decrease the risk of DM2.
Moreover, CGA has a therapeutic effect similar to metformin and insulin-sensitizing properties [40]. Several publications in recent years have documented the alpha-glucosidase-inhibiting activity of CGA in the pancreas in vitro [41,42]. Similarly, other studies in rats and humans report a blood glucose-lowering effect of CGA or coffee [43]. Another potential biological action of CGA relies on an antagonistic role in the transportation of glucose in the intestine [44]. Based on the study by Welsch et al. [45], 1 mM CGA is able to reduce the Na+ gradient, attenuating glucose uptake in an in vitro brush border membrane preparation by approximately 80 percent. In a clinical study by Iwai et al. [46], it was shown that consumption of 100 and 300 mg GC rich in CGA might restrict the activity of an amylolytic enzyme, thereby decreasing intestinal glucose absorption; however, there was no significant difference in insulin levels. Another study, by Van Dijk et al., provided some evidence for a putative effect of CGA on reducing glucose and insulin responses, supporting the association between coffee and a lower risk of DM2 development [43]. Tunnicliffe et al. [47] studied CGA treatment in rats, which resulted in a lower blood glucose response and some alterations in GIP hormone concentrations. A study by Ong et al. [48] illustrated for the first time that CGA can stimulate glucose transport into skeletal muscle through activation of the AMPK pathway. These results provide some evidence on CGA that may explain the probable protective effect of coffee against DM2 [48]. The observed discrepancy in this regard could be attributed to the use of various types of coffee and differences in study duration and supplement dose in previous research.
PCOS, as a heterogeneous clinical syndrome, can affect the lipid profile. Dyslipidemia is one of its clinical features, with 70 percent prevalence, and can increase the risk of DM2 and metabolic syndrome [49,50]. As the current study shows, GC could decrease triglyceride and cholesterol levels without changes in HDL and LDL. In line with our findings, Zuñiga et al. [51] showed that GC and its CGA have hypolipidemic effects on serum levels of TG and TC in patients with impaired glucose tolerance. Further, Shahmohammadi et al. [52] indicated that GC bean extract supplementation (1 g/day) significantly improved TG and TC serum levels after eight weeks of intervention. Since CGA is a phenolic acid present in high concentration in GC beans, it modulates liver metabolic functions such as TG metabolism [53]. One suggested mechanism is through the regulation of peroxisome proliferator-activated receptors (PPARs). These nuclear receptors have the potential to regulate the synthesis, transport, and oxidation of fatty acids. PPAR-α is one of these receptors and has insulin-sensitizing and lipid-lowering effects [54]. In an animal study by Wan et al. [55], rats received a high-cholesterol diet and 1 or 10 mg/kg/day CGA for 28 days. The results of the study showed a hypocholesterolemic effect of CGA due to the upregulation of PPAR-α and elevation of fatty acid utilization. Another study, by de Sotillo and Hadley [56], showed a significant reduction in fasting plasma concentrations of cholesterol and TG in rats at a dose of 5 mg/kg body weight/day CGA for three weeks.
In this study, we also evaluated the effect of GC on IL-6 and IL-10; there was no significant change between the two groups. In contrast with our results, in the study by Song et al. [57], a significant reduction in the plasma level of IL-6 was observed in mice fed 0.3% GC (300 mg GC extract/kg diet) compared to the high-fat diet (HFD)-fed group after 11 weeks. This substantial effect on IL-6 may be explained by the fact that the GC extract dose utilized in that investigation for mice was higher than the dose used in our study (1,460 mg/60 kg). Furthermore, in a study conducted by Hwang et al. [58], mice were injected three times with lipopolysaccharide with or without 0.1 mg CGA (5 mg/kg) for three days, and CGA was found to reduce IL-6 mRNA levels dose-dependently by downregulating nuclear factor κB (NF-κB). According to the research of Wu et al. [59], giving ApoE-/- mice 400 mg/kg CGA for 12 hours reduced the serum levels of IL-6 compared to the control group. However, in one investigation, a high-dose CGA infusion (7 mg/kg) for seven days led to an increase in IL-6 and tumor necrosis factor (TNF)-α levels in rats compared to the control group, while a low dose (0.3 mg/kg) had no remarkable impact on these biomarkers [60].
Another study, by Shin et al. [61], examined the anti-inflammatory role of CGA in the production of IL-8 in a colitis model in C57BL/6 mice, and the results showed suppression of IL-1β and macrophage inflammatory protein mRNA expression. These findings suggested that dietary CGA supplementation may relieve intestinal inflammatory conditions. Moreover, Shi et al. [62] showed that CGA supplementation in a rat model can modulate liver fibrosis and inflammation through inhibition of pathways such as toll-like receptor 4 signaling and NF-κB activation, serum levels of TNF-α, and mRNA expression of IL-1β and IL-6.
Although there is some evidence for a weight management role of GC, the results of our study did not show any significant improvement in BMI and WHR. In line with our findings, Li et al. [63], in an animal study, showed that 0.5% (w/w) GC plus an HFD could not reduce weight after 12 weeks. In contrast with our result, a clinical trial conducted by Thom [64] on overweight subjects showed a significant reduction in weight after 12 weeks of supplementation with GC (11 g/day). Further, in an animal study on HFD-induced obese mice, 100 or 200 mg/kg GC for four weeks plus an HFD significantly decreased weight and body fat [65]. This has been attributed to caffeine, a major bioactive component of GC that is linked with weight reduction. Moreover, CGA may have the ability to reduce calorie intake by inhibiting amylase activity and glucose absorption [66]. Roshan et al. [28] showed that GC consumption has the ability to control appetite. Furthermore, Bobillo et al. [67] indicated that combining GC with Garcinia cambogia and L-carnitine can reduce hunger sensations. The discrepancy between the current study and earlier research could be attributed to a variety of factors, including differences in study designs, methodologies, GC extract dosage, and populations.
The results of the current study showed that GC supplementation improves the lipid profile in women with PCOS, even though there was no improvement in anthropometric measurements, glycemic indices, or inflammatory biomarkers. However, long-term studies with different doses are needed to evaluate the anti-inflammatory and hypoglycemic effects of CGA in women with PCOS.
"year": 2022,
"sha1": "39ff79aaeef1d31d5ac8453abea7a1658a7b5b6a",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.7762/cnr.2022.11.4.241",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a85e4f5ded2d8397d2642fae85f59cfd6c6b8995",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Performance comparison of Agilent new SureSelect All Exon v8 probes with v7 probes for exome sequencing
Exome sequencing is becoming routine in health care because it increases the chance of pinpointing the genetic cause of an individual patient's condition and thus making an accurate diagnosis. It is important for facilities providing genetic services to keep track of changes in exome capture technology in order to maximize throughput while reducing cost per sample. In this study, we focused on comparing the newly released exome probe set Agilent SureSelect Human All Exon v8 with the previous probe set v7. In preparation for higher-throughput exome sequencing using the DNBSEQ-G400, we evaluated target design, coverage statistics, and variants across these two exome capture products. Although the target size of the v8 design has not changed much compared to the v7 design (35.24 Mb vs 35.8 Mb), the v8 probe design allows calling more SNVs (+3.06%) and indels (+8.49%) with the same number of raw reads per sample on the common target regions (34.84 Mb). Our results suggest that the new Agilent v8 probe set for exome sequencing yields better data quality than the current Agilent v7 set. Supplementary Information: The online version contains supplementary material available at 10.1186/s12864-022-08825-w.
Introduction
Whole exome sequencing (WES) is widely used in genomic studies as well as genetic tests. Exons (protein-coding regions) represent 1-2% of the human genome but comprise up to 85% of the known variants significant for diagnostics [1]. At the same time, WES is 3-5 times cheaper than whole genome sequencing [2]. Currently, exome analysis has proven to be an efficient diagnostic tool, being especially effective in the area of human clinical genetics [3].
There are several commercial kits for whole exome enrichment. The best-known kits are SureSelect (Agilent), TruSeq Capture (Illumina), xGen (IDT), Human Comprehensive Exome (Twist Bioscience), and SeqCap EZ (Roche NimbleGen) [4][5][6][7][8][9]. Enrichment protocols are similar and are based on hybridization of exon sequences with biotinylated DNA or RNA probes followed by capture with streptavidin-coated magnetic beads. Most kits are designed to enrich libraries for sequencing on the Illumina platform. However, we previously managed to adapt the enrichment protocol for sequencing on the MGI platform with the Agilent SureSelect Human All Exon V6 probes, which had shown slightly better performance in exome sequencing in several studies [4,[10][11][12][13].
In 2021, Agilent launched the updated enrichment probe set v8 and compared its performance with another manufacturer's product [14], but not with the previous version v7. In this study, we focused on comparing the Agilent SureSelect Human All Exon v7 and v8 probe designs. We studied the changes introduced in the panel design and the enrichment quality metrics, and statistically assessed the efficiency and quality of variant detection [15].
We prepared 20 libraries, divided them into 2 pools of 10 libraries each, and performed 2 rounds of enrichment of the pools using the v7 or v8 probes following the RSMU_exome protocol [16]. The sequenced pools were compared using a bioinformatics pipeline based on the following characteristics: target regions, the percentages of on-target reads, off-target reads, and duplicates, as well as depth of coverage of regions with various GC content.
Sample Preparation and Sequencing
The libraries were prepared from 20 samples containing 300-600 ng of human genomic DNA taken from 20 patients using the MGIEasy Universal DNA Library Prep Set (MGI Tech) following the manufacturer's instructions. DNA fragmentation was performed by sonication, with an average fragment length of 250 bp, using a Covaris S-220. Quality control of the obtained DNA libraries was performed using the High Sensitivity DNA assay with the 2100 Bioanalyzer System (Agilent Technologies).
The pooled DNA libraries were enriched following the RSMU_exome protocol [16]. The 20 DNA libraries were divided into 2 pools, each containing 10 libraries. Each pool was enriched twice: once with the SureSelect Human All Exon v7 probes and once with the latest version of the probes, SureSelect Human All Exon v8 (Agilent Technologies). Finally, we obtained 4 enriched DNA library pools. The concentrations of the prepared libraries were measured using Qubit Flex (Life Technologies) with the dsDNA HS Assay Kit. The quality of the prepared libraries was assessed using the Bioanalyzer 2100 with the High Sensitivity DNA kit (Agilent Technologies).
The enriched library pools were then circularised and sequenced by paired-end sequencing on the DNBSEQ-G400 with the High-throughput Sequencing Set PE100 following the manufacturer's instructions (MGI Tech), with an average coverage of 100x. We loaded one pool per lane into the patterned flow cells in two different runs. FastQ files were generated using the manufacturer's zebracallV2 software (MGI Tech).
Bioinformatics pipeline
The quality of the obtained 40 paired fastq files was analysed using FastQC v0.11.9 [17]. Based on the quality metrics, the fastq files were trimmed using Trimmomatic v0.39 [18]. To correctly estimate the enrichment and sequencing quality, all 20 exomes were downsampled to 50 million reads using Picard DownsampleSam v2.22.4 [19]. Reads were aligned to the indexed reference genome GRCh37 using bwa-mem [20]. SAM files were converted into BAM files and sorted using SAMtools v1.9 to check the percentage of aligned reads [21]. Based on the obtained BAM files, the quality metrics of exome enrichment and sequencing were calculated using Picard v2.22.4, and the number of duplicates was calculated using Picard MarkDuplicates v2.22.4. We performed the quality control analysis with the following bed files: Agilent v7_regions and Agilent v8_regions. Bed files for the GENCODE and RefSeq databases were downloaded from the UCSC Table Browser (https://genome.ucsc.edu/cgi-bin/hgTables?hgsid=1309831311_Di0qVAk2HAMSBFgug0SoMWuDiYQT). Genomic coordinates of unique v7 and v8 regions in the bed files were annotated using the Panther database [22]. Variant calling was performed using bcftools mpileup v1.9.
Statistical analysis
Statistical tests were performed in R (version 4.2) in RStudio (ver 2022.02.3 Build 492). To assess the distribution of variables, the Shapiro-Wilk test was used. If the null hypothesis (H0) was not rejected, the t-test was performed; otherwise, the Wilcoxon rank-sum test was used. A p-value < 0.05 was considered the level of statistical significance.
Comparison of probe designs
We detected several changes in the target design of v8 compared to v7 introduced by the manufacturer. The probe structure was not altered, preserving 120 bp biotinylated cRNA probes. The manufacturer states that the coding content was updated according to database releases (CCDS release 22, GENCODE V31, RefSeq release 95), that the TERT promoter region was added, and that non-coding ClinVar Pathogenic variants were removed. The target size of the v8 kit is 35.24 Mb, whereas the target size of the v7 kit is 35.8 Mb; the intersection of the bed files from both kits is 98.42% (34.84 Mb). The percentage of unique target regions is 2.69% (0.96 Mb) and 1.14% (0.4 Mb) for the Agilent v7 and v8 exome, respectively. We compared the v7 and v8 bed files with the bed file containing the coding exons of the GENCODE Genes track (basic subtrack, release V39lift37, Oct 2021) (34.93 Mb). The intersection between v8 and GENCODE v39 was 98.9% (34.07 Mb), the intersection between v7 and GENCODE v39 was 98.2% (34.3 Mb), and 0.29 Mb of the GENCODE v39 regions were absent from both kits. We visualised the overlapping target regions for the Agilent v7 exome, the Agilent v8 exome, and GENCODE v39 as a Venn diagram (Fig. 1) with indicated target sizes using the matplotlib-venn library (https://github.com/konstantint/matplotlib-venn).
We collected precise information (chromosomal coordinates, Gene ID, and an annotation) on the target regions unique to the v7 (0.96 Mb) and v8 kits (0.4 Mb), which is provided in Supplementary Table 1. Figure S1, analysing the distribution of lengths of the changed fragments (Supplementary Table 1), demonstrates that most altered positions are short (less than several dozen base pairs), which means that the manufacturer adjusted the design of certain probes using the previous version of the targets. We analysed the fragments longer than 30 bp, as we were interested in detecting fragments unique to the v7 and v8 kits in the current version of the bed file of the GENCODE v39 database. Our analysis also included the bed file containing the coordinates of all exons plus 20 bases at each end from the RefSeq ALL database (source data version: NCBI Homo sapiens 109.20211119 (2021-11-23)). The unique sequences of the v8 target fit the current GENCODE v39 database better than those of the v7 target: the intersection of the unique regions with GENCODE v39 was 0.33 Mb for v8 and 0.09 Mb for v7. The intersection with the larger RefSeq bed file (95.47 Mb), which includes exons ±20 bp from all curated and predicted genes, was 0.23 Mb for v8 and 0.26 Mb for v7. We visualised the overlapping unique target regions for the Agilent v7 exome, the Agilent v8 exome, and GENCODE v39 (Fig. 2A), and NCBI RefSeq exons (Fig. 2B) as Venn diagrams. Together, these results suggest that the v8 target was updated with current information on exonic variants.
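The overlap figures above (e.g. the 34.84 Mb shared between v7 and v8) reduce to computing the total intersection size of two BED-like interval sets. A minimal sketch, assuming half-open [start, end) intervals grouped by chromosome (helper names are ours, not from the pipeline, which used standard bed tooling):

```python
def merge_intervals(intervals):
    # Sort and merge overlapping intervals so each set is disjoint.
    merged = []
    for s, e in sorted(intervals):
        if merged and s <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], e)
        else:
            merged.append([s, e])
    return merged

def intersection_size(bed_a, bed_b):
    # Total number of bases shared between two interval sets, where each
    # set is a dict mapping chromosome -> list of (start, end) intervals.
    total = 0
    for chrom in set(bed_a) & set(bed_b):
        a, b = merge_intervals(bed_a[chrom]), merge_intervals(bed_b[chrom])
        i = j = 0
        while i < len(a) and j < len(b):
            s, e = max(a[i][0], b[j][0]), min(a[i][1], b[j][1])
            if s < e:
                total += e - s
            # Advance whichever interval ends first.
            if a[i][1] < b[j][1]:
                i += 1
            else:
                j += 1
    return total
```

For example, intersecting {"chr1": [(0, 100), (200, 300)]} with {"chr1": [(50, 250)]} yields 100 overlapping bases.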
Enrichment quality
To assess the enrichment quality, the raw reads obtained for 40 exomes (20 samples each enriched with the v7 or v8 probes) were downsampled to 50 M reads. The coverage statistics were calculated using Picard, and the metrics were averaged over the samples of the v7 and v8 pools. The results for each downsampled sample in the pools are shown in Supplementary Table 2. We detected no significant differences in the numbers of on-target (W = 217, p-value = 0.66) or aligned reads (T = 1.23, p-value = 0.26), but did detect differences in off-target reads (W = 279, p-value = 0.03) and in the percentage of duplicates (T = -3.76, p-value = 0.00066) (Fig. 3). The mean ± SD target coverage was similar for both kits: 56.38× ± 1.18 for v7 and 56.88× ± 1.32 for v8. However, the median target coverage differed between the kits: 53.4× for v8 and 48.6× for v7. We therefore suggest a higher coverage uniformity for the v8 target.
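The W and T values quoted throughout are Wilcoxon and t-test statistics computed with standard statistical software. As an illustration of the latter, here is a minimal pure-Python sketch of Welch's two-sample t statistic (unequal variances); this is not the authors' code, and the toy samples are illustrative only:

```python
import math

def welch_t(x, y):
    """Welch's two-sample t statistic and Welch-Satterthwaite degrees of freedom."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)  # sample variance of x
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)  # sample variance of y
    se2 = vx / nx + vy / ny
    t = (mx - my) / math.sqrt(se2)
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

# Identical samples give t = 0; a shifted second sample gives a negative t
t0, _ = welch_t([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
t1, df1 = welch_t([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])
```

A p-value would then be obtained from the t distribution with `df` degrees of freedom, which is what an off-the-shelf package reports directly.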
The average values of the metrics reflecting target coverage quality are higher in the v8 pool than in the v7 pool. The percentage of target regions with ≥ 10× coverage is 96.28 ± 0.0024% for v8 and 95.08 ± 0.0036% for v7 (T = -12.31, df = 38, p-value = 7.88e-14). The percentage of target regions with ≥ 20× coverage is about 5% lower in v7 than in v8 (88.07 ± 0.013% for v7 vs. 92.96 ± 0.01% for v8; T = -13.198, df = 38, p-value = 1.97e-15), indicating a higher enrichment quality for v8. The ≥ 40× on-target coverage is in the range of 57-66% (mean ± SD = 62 ± 0.02%) for the v7 kit, about 9% lower than for the v8 kit (range 65-77%, mean ± SD = 71 ± 0.03%; T = -11.76, df = 38, p-value = 1.14e-13). At the same time, the coverage distribution of v8 is closer to a normal distribution (Fig. 4B): there are fewer overcovered (≥ 80× coverage) or undercovered positions (the inflection point is shown in Fig. 4A), which allows sufficient coverage to be obtained for more positions using less data.
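Metrics such as "percentage of target regions with ≥ 10× coverage" reduce to simple threshold counts over the per-base depth vector that Picard computes. A minimal sketch, using toy depths rather than real data:

```python
def pct_at_least(depths, threshold):
    """Percentage of target positions with coverage >= threshold."""
    return 100.0 * sum(d >= threshold for d in depths) / len(depths)

depths = [5, 15, 25, 45]          # toy per-base depths, not real data
cov10 = pct_at_least(depths, 10)  # 75.0
cov20 = pct_at_least(depths, 20)  # 50.0
cov40 = pct_at_least(depths, 40)  # 25.0
```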
The FOLD_80 parameter, which reflects coverage uniformity, is better for the v8 pool samples (mean ± SD = 1.72 ± 0.076) than for the v7 pool samples (mean ± SD = 2.13 ± 0.094) (Fig. 4C). The closer this value is to 1, the fewer rounds of sequencing a sample requires to bring 80% of the targeted bases to the original mean coverage.
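To our understanding, Picard's FOLD_80_BASE_PENALTY is, roughly, the mean target coverage divided by the coverage at the 20th percentile of the depth distribution. A hedged sketch using a simple nearest-rank percentile (Picard's exact percentile convention may differ):

```python
import math

def fold_80(depths):
    """Approximate FOLD_80 penalty: mean coverage divided by the coverage
    at the 20th percentile (nearest-rank); Picard's exact convention may differ."""
    s = sorted(depths)
    mean = sum(s) / len(s)
    rank = max(1, math.ceil(0.2 * len(s)))  # nearest-rank 20th percentile
    return mean / s[rank - 1]

uniform = fold_80([30] * 10)             # perfectly even coverage -> penalty 1.0
skewed = fold_80([10, 10] + [30] * 8)    # a poorly covered tail inflates the penalty
```

This makes the interpretation in the text concrete: a uniform depth profile gives a penalty of exactly 1, while undercovered tails push the value upward.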
GC content
The AT_DROPOUT metric is two times lower for the exomes enriched with the v8 kit (v7 mean = 29.23%, v8 mean = 15.92%; W = 360, p-value = 2.88e-06), whereas GC_DROPOUT does not differ between the kits (v7 mean = 12.9%, v8 mean = 13.09%; T = -0.352, df = 38, p-value = 0.72). The AT_DROPOUT and GC_DROPOUT metrics indicate the percentage of misaligned reads associated with low (GC content < 50%) or high (GC content > 50%) GC content, respectively. Figure 5 demonstrates that the v8 probes (Fig. 5B) provide slightly more uniform coverage of regions with GC content in the range of 40-60% (the red zone in Fig. 5 shows a high density of regions with similar mean coverage and GC content), although this uniformity is high for the v7 probes as well (Fig. 5A). We visualised the distribution of GC content of exonic regions with different coverage in Figure S2 (Supplementary Table 2). The density curves of GC content were identical for the v7 and v8 samples and agreed with the previous results for Agilent kits reported by Wang et al. [23]. For low-covered exonic regions (< 10× or < 20×), we observed no drastic shift of the curves towards high or low GC content for either the v7 or the v8 samples.
SNV and INDEL calling comparison
Furthermore, we estimated the calling quality by comparing the numbers of single nucleotide variants (SNVs) and small insertions and deletions (indels) detected with the two kits given an equal number of raw reads per sample. Table 1 shows the average quality-filtered calling results for the v7 and v8 pools over their entire bed files (results for each sample are provided in Supplementary Table 3). The following filters were used for calling: a coverage depth exceeding 13 reads (DP > 13) and QUAL > 30. The mean ± SD numbers of SNVs and indels were 25,736 ± 380 and 743 ± 22 for the v7 exomes and 25,558 ± 362 and 699 ± 18 for v8. The higher number of called variants for the v7 kit can be accounted for by its larger target size. As the two kits provide bed files of different sizes, we also compared variant calling in the overlapping target regions of the v7 and v8 kits; this approach enables a correct comparison of the two probe designs. Using this common target (bed v7 cross v8), we calculated the average (mean ± SD) variant numbers. The numbers of SNVs and indels for the samples from the v8 pool were 3.06% (T = -9.3, df = 38, p-value = 2.61e-11) and 8.49% (T = -6.71, df = 38, p-value = 6.03e-08) higher than those of the v7 pool, respectively (Table 1). We then performed an intersection-over-union analysis of the variant calling results on the "v7 cross v8" bed file and evaluated the quality of the variants unique to the v7 and v8 samples. The mean ± SD coverage of unique SNVs and indels was 74.5 ± 6.7 and 53.7 ± 3.3 for the v7 exomes and 60.5 ± 3.8 and 55.5 ± 3.4 for v8; the mean ± SD QUAL of unique SNVs and indels was 84.9 ± 7.8 and 128.2 ± 8.7 for the v7 exomes and 86.9 ± 6.1 and 132.3 ± 5.3 for v8. Together, these results show that the v8 probes yielded more unique variants without loss of quality.
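The DP > 13 and QUAL > 30 filter can be illustrated on minimal VCF-like records. The sketch below (a hypothetical helper, not the authors' pipeline) also shows the SNV/indel split by REF/ALT length:

```python
def filter_variants(vcf_lines, min_dp=13, min_qual=30):
    """Keep variants with DP > min_dp and QUAL > min_qual; classify SNV vs indel."""
    snvs, indels = [], []
    for line in vcf_lines:
        if line.startswith("#"):
            continue
        chrom, pos, _id, ref, alt, qual, _filt, info = line.split("\t")[:8]
        dp = 0
        for field in info.split(";"):
            if field.startswith("DP="):
                dp = int(field[3:])
        if dp > min_dp and float(qual) > min_qual:
            # single-base REF and ALT -> SNV; length change -> indel
            (snvs if len(ref) == 1 and len(alt) == 1 else indels).append((chrom, pos))
    return snvs, indels

vcf = [
    "#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO",
    "1\t100\t.\tA\tG\t50\tPASS\tDP=20",   # SNV, passes both filters
    "1\t200\t.\tAT\tA\t40\tPASS\tDP=30",  # deletion, passes both filters
    "1\t300\t.\tC\tT\t10\tPASS\tDP=40",   # fails the QUAL filter
]
snvs, indels = filter_variants(vcf)
```

In the actual analysis the same thresholds would be applied with a caller/filtering tool over the full VCFs restricted to the relevant bed file.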
Discussion
Overall, 2.76% of the target was excluded from v7, while 1.15% of the target was included into v8. Based on our data, we believe that no changes dramatically affecting the clinical potential were introduced. Most modifications probably stem from a change in the machine-learning-powered approach to probe design. Most changed fragments are only several base pairs long, implying that the manufacturer merely adjusted certain target regions. However, some changed targets were quite long (dozens to thousands of base pairs), which indicates functional changes as well; changes affecting longer fragments arise from updated information in the current versions of the databases. For instance, the transcript of the largest fragment in the NACA gene (3330 bp) that was excluded from the v8 target undergoes splicing and is characterised as tsl5, i.e., no single transcript supports the model structure (ENST00000454682.6 NACA-203). The manufacturer excluded from the target all regions now considered not to be protein coding and included certain regions that were unknown when the v7 probes were designed. This can be verified by analysing the intersection between the unique regions of both kits and the latest database releases, in the same way we analysed the GENCODE V39 release (Oct 2021).

[Fig. 5 caption: The data in this plot were collected by merging all samples from the v7 and v8 pools. Density estimation was performed on 2D plots: data points in a fixed rectangle (GC content ∈ [0, 1], mean depth ∈ [0, 1000]) were binned into an evenly spaced 200 × 100 grid, the counts in each cell were normalised to the range [0, 1], and the grid was plotted using the "jet" colormap from the matplotlib library.]

[Table 1 caption: Average (mean ± SD) results of variant calling of SNVs and indels for the samples from the v7 and v8 pools using their own targets (bed v7, bed v8) and the target intersection (bed v7 vs. v8), filtered by DP > 13 and QUAL > 30.]
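The grid-based density estimation described for the GC/coverage plot (binning points into a 200 × 100 grid and normalising the counts to [0, 1]) can be sketched without matplotlib; the three sample points below are illustrative only:

```python
def grid_density(points, x_range, y_range, nx, ny):
    """Count points in an evenly spaced nx-by-ny grid, then normalise
    the cell counts to the range [0, 1] (as described for the plot)."""
    (x0, x1), (y0, y1) = x_range, y_range
    grid = [[0] * nx for _ in range(ny)]
    for x, y in points:
        if x0 <= x < x1 and y0 <= y < y1:
            i = int((x - x0) / (x1 - x0) * nx)  # column index
            j = int((y - y0) / (y1 - y0) * ny)  # row index
            grid[j][i] += 1
    peak = max(max(row) for row in grid) or 1
    return [[c / peak for c in row] for row in grid]

# GC content on x in [0, 1], mean depth on y in [0, 1000), 200 x 100 grid
pts = [(0.452, 57), (0.467, 55), (0.903, 905)]  # illustrative points only
g = grid_density(pts, (0.0, 1.0), (0, 1000), 200, 100)
```

The normalised grid is then rendered with a colormap; dense cells (the "red zone" in the figure) mark regions with similar mean coverage and GC content.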
The major problem of WES is the non-uniform coverage of target regions resulting from the sensitivity of the hybridization reaction between the probes and the target fragments of the DNA libraries. The changes introduced in the v8 probe design markedly improved the enrichment quality: with the same amount of raw data per sample, the v8 probes provided higher coverage of a larger percentage of target regions. We also noted that the degree of inadequate coverage of AT-rich regions was reduced with the v8 version. Finally, we asked whether variant calling detected the same SNVs and indels in the samples obtained with the v7 and v8 kits. Indeed, owing to the new probe design and the higher enrichment quality (more uniform coverage of target fragments), the samples enriched with the v8 probes yielded more useful data than those enriched with the v7 probes.
Of note, the presented calculations were performed on our in-lab gDNA standards. We aimed at estimating relative rather than absolute metrics, as absolute metrics are more correctly assessed on reference materials such as GIAB or Platinum Genomes. Our intention was to reveal the advantages that an NGS facility performing exome sequencing could gain by switching to the new version of the enrichment kit.
Therefore, the novel Agilent all-exon v8 probe design provides clear advantages over the previous version of the kit and can be recommended as an advanced, more efficient generation of enrichment kits.
Science and the New Italian Nation
It is in Risorgimento Italy that there is an incessant quest for a definition of what it means to be Italian amongst a reality of economic paucity and clear social divisiveness. During this tenuous yet crucial epoch, there is a cohesive attempt to define Italian taste with an ideological terminology previously absent from sensorial and aesthetic discourse. A fundamental purveyor of this novel approach is the self-defined "poligamo delle scienze," Paolo Mantegazza. To the plurality of roles attributed to the medic (anthropologist, pathologist, senator, writer, etc.), there is one yet to be explored: Mantegazza as didactic gastronome. In the attempt to combat what he considers the anti-hygienic conditions plaguing the nation, the medic inaugurates a pedagogic process that would ideally lead to the formation of the Italian citizen. With the goal of creating a stronger and more capable Italian populace, the author goes to great lengths to provide guidelines for maximizing nourishment through the humblest of foods. Ultimately, Mantegazza's pedagogic gourmandism is integral to the propagation of a social model of comportment that defines the Positivist framework of biological and nationalistic renewal and to a new vision of taste.

Nell'amore il prima è spesso un prurito che fa male o un uragano che schianta gli alberi e rovina le messi. Il mentre è dolcissimo, ma ahimé, dura troppo poco. Non dirò coll'epicurea francese, che cela ne dure que le temps d'avaler un oeuf, ma dobbiamo pur confessare, che il mentre si misura non a giorni, né a ore; ma coll'orologio a minuti secondi.1

In recent years, particularly after the 150th anniversary of the Italian unification (2011), there has been a new focus on the role of cucina (cuisine) in the creation of italianità (Italianness), with keen attention given to the figure of Pellegrino Artusi (1820–1911).2 Aside from Piero Camporesi's declarations in the famed 1970 edition of La scienza in cucina e l'arte di mangiar bene (Science in the Kitchen and the Art of Eating Well),3 which first generated intellectual discourse around the text, the 100th anniversary of Artusi's death4 coincides with, and is therefore juxtaposed with, the sesquicentennial national anniversary, thereby further solidifying the correlation between the cookbook author and a sense of national unity. The book that Alberto Capatti deems "un'opera di impegno civile" [4]5 has at this point taken on iconic status, cementing Artusi's place in a gastro-nationalistic discourse. It is Artusi who becomes the Italian cookbook author par excellence, credited with unifying a nation that was struggling to come together because of centuries of political, linguistic and cultural fragmentation. However, it is important to note that Artusi is not alone in his venture to use food as a nationalizing stimulus. A figure who I contend has been overlooked is a collaborator of Artusi: Paolo Mantegazza. The two are intertwined in the Italian 19th-century gastronomical landscape, and this article will ascribe a new role to Mantegazza that has yet to be explored: to the many labels attributed to this man of science (medic, anthropologist, ethnographer, pathologist, neurologist, physiologist, senator, novelist, etc.), it is fundamental to add one more, that is gastronome or, rather, didactic gastronome.

1 In love the before is often a yearning that does harm or a hurricane that tears up trees and ruins crops. The meanwhile is so sweet, but, alas, it is so fleeting. I will not say as the French epicure that it lasts as long as it takes to swallow an egg, but we must also confess that the meanwhile is measured not by days, nor by hours, but with a clock by minutes and seconds.
Mantegazza is at the avant-garde of the nascent self-helpism6 of the 19th century and attempts to resolve the prominent uncertainties of the post-unification period, demonstrating that food consciousness equals identity on various levels, from the domestic to the societal. Hygiene and hygienism, of which Mantegazza is the major 19th-century Italian proponent, become significant inasmuch as they serve as a basis for good propriety and good citizenship, and, along with galatei (etiquette manuals) that tackle hygiene of various kinds, manuals and periodicals are published in the Italian 19th century with the goal of establishing criteria for good health and beauty. This is particularly the case in a post-unificatory Italy, where questions of what it means to be Italian among a reality of economic paucity and clear social divisiveness arise. During this tenuous, yet crucial, period, there is also a cohesive attempt to define Italian taste with an ideological terminology previously absent from sensorial and aesthetic discourse. A fundamental purveyor of this novel approach is precisely the self-defined polygamist of the sciences,7 Paolo Mantegazza. In the attempt to combat what he considers the anti-hygienic conditions plaguing the nation, the author inaugurates a pedagogic process that would ideally lead to the formation of the Italian citizen. Through his numerous manuals on hygiene and physiological studies, Mantegazza the Positivist is determined to actively participate in the edification of his nation.8
This entails the regeneration of its citizens from the bottom up, denoting the educative intent of imparting a gastronomic, as well as gastrophonic (i.e., a language of food), lesson to those who may

3 "L'importanza dell'Artusi è notevolissima e bisogna riconoscere che La Scienza in cucina ha fatto per l'unificazione nazionale più di quanto non siano riusciti a fare i Promessi Sposi. I gustemi artusiani, infatti, sono riusciti a creare un codice di identificazione nazionale là dove fallirono gli stilemi e i fonemi manzoniani." ([3], p. xvi) (Artusi's importance is remarkable and it is necessary to recognize that Science in the Kitchen did more for national unification than The Betrothed. Artusi's gustatory features, in fact, were able to create a codified national identity where Manzoni's stylistic features and phonemes failed.)

4 To commemorate the hundredth anniversary of Artusi's death in 2011 there was a pilgrimage, which coincided with the annual Festa Artusiana, from Forlimpopoli (the author's birthplace and residence of youth) to Florence (where Artusi spends his adult years). There were also simultaneous celebrations held in conjunction with the Casa Artusi in the United States to honor Artusi, such as the conference held at The New School in New York on March 31st, 2011: Culinary Luminaries: Italian Food Historian Pellegrino Artusi.
5 A work of civil obligation.
6 Self-helpism is a trend in 19th-century writings that begins with the Briton Samuel Smiles, who wrote 1859's Self-Help, spurring a trend in Victorian England of manuals whose goal was the self-education of the working classes. This trend enters Italy with Positivist culture, and can be traced in a large part of Mantegazza's production. For a study on the development of the Italian brand of self-helpismo see Di Bello, Guetta Sadun and Mannucci's Modelli e progetti educativi nell'Italia liberale [5].

7 Mantegazza often refers to himself in this fashion. An example is in his final work La bibbia della speranza ([6], p. 1).
8 With the bourgeois expansion in the second half of the century, due in part to the liberal revolutions of 1848, but also to industrialization and capitalization, Positivism becomes the hegemonic culture of Western Europe. With its strong emphasis on scientific progress and technological advancement and its tendencies towards realism, its influence is seen throughout the arts and sciences. With the creation of new pedagogical standards, such as the Coppino Law of 1877 which renders elementary education obligatory, and its general emphasis on the dissemination of knowledge, Positivism disseminates in Italy during a period in which the questions and consequences of nationhood and identity are predominant. For the Positivist discourse on education see Ascenzi's Tra educazione etico-civile e costruzione dell'identità nazionale [7] and Marciano's Alfabeto ed educazione [8].
seem incapable of partaking in such endeavors due to economic constraints. With the goal of creating a stronger and more capable Italian populace, the author goes to great lengths to provide guidelines for maximizing nourishment through the humblest of foods, in addition to ennobling cuisine as fine art. Ultimately, I contend that Mantegazza's analysis of food as subsistence, as well as an aesthetic subject, can be defined as a unique brand of pedagogic gourmandism,9 integral to the propagation of a social model of comportment that defines the Positivist framework of biological and nationalistic renewal.
In order to demonstrate the important role that Mantegazza occupies within 19th-century Italian taste theory, I will draw from volumes of his immensely popular Almanacco igienico popolare:10 Igiene della cucina (1866), Igiene di epicuro (1872), Igiene dei sensi (1874), Piccolo dizionario della cucina (1882), L'arte di conservare gli alimenti e le bevande (1887); as well as the manual Elementi di igiene (1871),11 and texts of a more physiological and philosophical nature: La fisiologia del piacere (1880), L'arte di essere felici (1886), and Epicuro: un saggio di una fisiologia del bello (1891). With these works I will be able to trace a taste narrative that oscillates with great command from gastronomy (or, rather, the art of cuisine) to gastrology (the 19th-century term for "the science of eating"),12 forming an amalgam that defines Mantegazza's scientific production. I will analyze his food theory from the following interwoven perspectives: (a) the anthropology of cuisine; (b) food as health and hygiene; (c) gastronomy as an art form for all; I will then focus on (d) the artful nature of these eclectic scientific publications. I contend that these themes work in unison within the author's production, with the ultimate goal of creating a stronger and more economically viable Italian nation, while promulgating a rationalized gluttony for a new, stronger and more capable Italian citizen. I will lastly examine (e) the social context in which the author writes to better understand how far-reaching his message is.
The Anthropology of Cuisine
To commence the Mantegazzian narrative of taste, it is important to consider cuisine as a subject of anthropology; after all, it is Mantegazza who is famed for founding the first cattedra di Antropologia (i.e., the first professorship of Anthropology in Italy), in addition to the Museo Nazionale di Antropologia ed Etnologia (the National Museum of Anthropology and Ethnography) and its subsequent periodical and society. These are accomplishments that are possible precisely because of his voyages and subsequent studies that take him through Latin America in 1854 (Paraguay, Chile, Bolivia, and Brazil), Lapland in

9 The term gourmand (i.e., the practitioner of gourmandism) has a somewhat negative connotation as it is generally linked to a person who "overeats." It was precisely this implication that the original gourmands attempted to avoid, and in fact they prided themselves on the delicacy of their art: If one were to believe the Dictionary of the Academy, Gourmand is a synonym of Glutton and Gobbler, and Gourmandise of Gluttony . . . The term Gourmand has in recent years, in polite society, gained a far less unfavorable, and dare we say noble, meaning. The Gourmand is more than just a creature whom Nature has graced with an excellent stomach and vast appetite... he also possesses an enlightened sense of taste... an exceptionally delicate palate, developed through extensive experience. All his senses must work in concert with that of taste, for he must contemplate food before it nears his lips. Suffice it to say that his gaze must be penetrating, his ear alert, his sense of touch keen, and his tongue able. Thus the Gourmand, whom the Academy depicts as a coarse creature, is characterized instead by extreme delicacy; only his health need be robust. ([9], p. 12). In 19th-century Europe (particularly in France and England) the practice of gourmandism is linked to intelligent knowledge and, as Anthelme Brillat-Savarin had hoped, gastronomy began to have its "own academicians, its professors, its yearly courses and its contests for scholarship," taking its rightful place among the premier arts and sciences ([10], p. 64). To better understand the development of gourmandism and food philosophy in France and England see, for example, Gigante's Gusto [11] and Mennell's All Manners of Food [12].

10 The Almanacco was a series of manuals on hygiene that Mantegazza published annually from 1866 to 1905. Additionally, Mantegazza founded L'igea. Giornale di igiene e medicina preventiva in 1862 in Milan.

11 The edition that this study will be drawing from is the fifth edition, published in 1871. The first edition of the work was published in 1864. The text is not exclusively gastronomic; it deals with various forms of hygiene, from physical (such as skin and muscular hygiene) to mental care (such as hygiene of the intellect and sentiment). However, it is important to note that nearly half of the work (the first 230 pages) explicitly deals with gastronomy.

12
13Within these works there is, of course, keen attention paid to the gastronomy of the peoples the author encounters.Whether focusing on the use of stimulants such as the coca leaf and liqueurs for manual workers in Peru, the excellent coffees or the manners of cookery and consumption of meats such as reindeer in Lapland, or the preparation of millet, fish and the best mangos of the world in India, it is evident that the products to which Mantegazza is exposed go far in shaping the way he envisions his nutritional ideals.It is through this optic that the author establishes a very modern premise, one that is the subject of such recent texts as Massimo Montanari's Food is Culture [20] and Richard Wrangham's Catching Fire: How Cooking Made Us Human [21]: that is, cuisine as civilizer.Referencing his La fisiologia del piacere (The Physiology of Pleasure), it becomes clear that for Mantegazza food preparation and consumption serve as an anthropological marker for the distinction between man and brute.The key differentiator lies in the refinement of the pleasures of taste and the development of gastronomy, which defers to reason for regulation and distribution ( [22], p. 
73).In the early centuries of human evolution, hunger rules consumption (l'appetito supplì all'arte); however, with rational development, comes intelligence and art, as Mantegazza indicates, which drives man to search to multiply flavors, and to refine his gustatory capabilities.The brute, conversely, consumes with irregularity and with consideration for neither time nor measure, allowing his primordial impulse to govern, with no propensity for rationing or conservation.Mantegazza, in fact, dedicates an entire volume of his popular Almanacco igienico popolare in 1887 to L'arte di conservare gli alimenti e le bevande ("The Art of Preserving Foods and Beverages"), accentuating evolved man's conscious effort to preserve food, as well as divulging all of the techniques that the modern sciences have afforded him.
The Medic in the Kitchen
It is within L'arte di conservare gli alimenti e le bevande that Mantegazza (a medic himself) proclaims: "Se i medici conoscessero un po' più gli alimenti, le loro diverse virtù e i diversi vizii . . . potrebbe far guarire chi è malato e . . . impedire che i sani si ammalino!" ([23], p. 62).14 This theme of cuisine as panacea is prevalent throughout the author's food writings, and, in Igiene della cucina (Hygiene in the Kitchen), the author insists that a medic is more effective in the kitchen than he could ever be in the pharmacy. Cuisine can "prevenire molte malattie e curarne molte altre . . . [può] trasformare uno scrofoloso in un uomo robusto . . . la cucina può guarire un'indigestione, una febbre, una tisi" ([24], p. 63).15 This ideal of the "medic in the kitchen" serves as an archetype to which anyone can aspire, for the manuals are not destined for colleagues, but for the mater familias, who, in adopting the author's advice, becomes, in essence, medicus familias.16 Mantegazza's work represents the first conscious endeavor in Italy to promulgate the means by which this ideal can be realized.17 Domestic health and hygiene are, for the author, issues that transcend the family unit. As Comoy Fusaro demonstrates, Mantegazza is determined to partake in the construction of Italy ([27], p. 194);

15 . . . prevent many illnesses and cure many others . . . [it can] transform a scrofulous man into a robust one . . . cooking can cure an indigestion, a fever, a phthisis. Both scrofula (also known as king's evil) and phthisis were forms of tuberculosis common throughout the 19th century. The former is a tuberculosis of the lymph glands of the neck, the latter is a pulmonary tuberculosis. See Firth's "History of Tuberculosis" [25].
16 Of interest is the economic connotation of the "medic in the kitchen." For example, Mantegazza asserts that those who are able to take away from his texts a profound understanding of the science of nutrition and apply it with consistency to their daily lives will notice "alla fin d'anno troverebbe di aver dato ben pochi quattrini al medico e allo speziale" [at the end of the year he/she would find that they have given very few pennies to the doctor and the pharmacist] ([24], p. 63).

17 It is interesting to note that others who attempted such endeavors, such as the author of La cucina degli stomachi deboli, ossia piatti non comuni, semplici, economici e di facile digestione. Con alcune norme relative al buon governo delle vie digerenti [26], did so anonymously because of the perception that such a topic might be an undignified undertaking for a man of medicine.
hence, the scope of his alimentary message can also be interpreted as chiefly nationalistic. The construction of an Italian nation leads Mantegazza to disseminate the message of cuisine as nutrition for all sectors of Italian society, from the proletarian to the aristocrat to the peasant, and, for the lower classes, he sees in the humblest of foods the possibility for regeneration. Throughout his manuals of nutritional hygiene, as well as his texts of a more gastronomic inclination, the author goes to great lengths to provide guidelines for maximizing nourishment to create a stronger Italian citizen. For the lower classes,18 there is an insistence on techniques that may ensure suitable nutrition. Such is the case when he writes of the benefits of salt; the medic states, "Un pizzico di sale di più nella pentola del povero, vuol dire tanti globuli rossi di più nel suo sangue, e quindi tanto di forza nelle vene di tutto il popolo italiano" ([28], p. 100).19 In the instance of polenta, Mantegazza identifies a foodstuff that transcends class; however, he is quick to note that the pale, poorly cooked, and salted polenta consumed by rural citizens, which had led to pellagra, is a far cry from the one enjoyed by the elite. With this realization, the medic calls for action: "Tocca a noi, tocca all'economia sociale, all'igiene fare che la polenta sia per tutti una benedizione e non un veleno" ([28], p. 89).20 It is this social conscience that is exemplified in Mantegazza's gastronomic writings, prompting him to divulge methods through which the nutritional gap between the social classes can be bridged.21 As stated by Gabriella Armenise, Mantegazza's concept of hygiene is identified with the art of physiology ([31], p. 90) and, therefore, the majority of Mantegazza's food studies are physiological in structure and content. Food types, appetites, aromas, ingredients, scales of digestibility, and nutritive capability, as well as a superfluity of other alimentary subject matter, are categorized in an attempt to diffuse knowledge that would allow the reader to become a more informed consumer.22 Within his physiological tendency, Mantegazza utilizes a largely plurilateral and international model to convey his pedagogic gourmandism. The medic becomes an advocate for a rationally governed gluttony that includes the enjoyment of a myriad of ingredients from all the corners of the globe. Science has a fundamental role in this mode of consumption, as it is the ancient art of alimentary preservation that permits foods to travel, and the advancements of Mantegazza's epoch lead him to claim its perfected status ([23], p. 39).23 The author is a proud advocate of the splendor that modern progress has afforded gastronomy: "È in questa maniera, che seduti in una comoda poltrona e circondati da tutte le leccornie del lusso europeo, possiamo in un solo pranzo mangiare del bove ucciso nei matadores di Buenos Ayres o in Australia, del salmone pescato in Lapponia e delle aragoste cresciute nei mari dell'America del nord" ([23], p. 39).24 This transnational, physiological modus operandi is indicative of Mantegazza's pertinence to the greater Positivist culture that comes into fashion in the second half of the century. His framework has precedents in Italy in other authors, many doctors and pathologists in their own right, who attempt to convey a similar ideal of food nutrition. Salvatore Tommasi, for example, dedicates numerous pages to the topic of consumption, because "[si] frutterebbe senza fine alla pubblica ed alla privata igiene" ([32], p. 105).25
Angelo Camillo De Meis goes as far as envisioning a world in which Positivist ideals come fully to fruition and chemistry is capable of fabricating materiale alimentare, i.e., food matter, that would satisfy all the nourishment requirements of the masses, specifically speaking of a "cibo saporoso, odoroso, squisitissimo" and a "vino chimico eccellente . . . da digradarne il Chianti" ([33], p. 43).26 In short, Mantegazza represents the trend of food science that is mirrored in Italian contemporaries. They show how the progressive ideals of disseminating education and a heightened faith in the possibilities of science and technology can be espoused in matters of food and taste. However, it is Mantegazza who, more than any other, prolifically reshapes this discourse for the masses, while ennobling the art of cookery to the sphere of aesthetics.27
Gastronomy as an Art form for All
In addition to a nutritional gap between the classes, the artistic qualities of gastronomy seem to be exclusively for the upper strata, while the lower classes are relegated to subsist on inadequate provisions. Yet Mantegazza seeks, in many instances, to create fare out of the most meager of ingredients. A germane example is the medic's ennobling of the humble egg, which he claims to be the most democratic and aristocratic of foodstuffs, offering "i suoi tesori di forza e di salubrità al povero proletario, come al più Creso dei Re . . . e rimane sempre al disopra di ogni più complicato intingolo gastronomico" ([28], pp. 113-15).28 This attempt to bridge the gap of artistic intent between aristocratic fare and the scant rations of the majority of the peninsula is significant, because it denotes the

23 Mantegazza individuates some of these progresses. For example, Nicolas Appert's food preservation in sealed bottles, which Mantegazza deems a true triumph of science: "io ho mangiato al mezzo del oceano lepri e tordi . . . come se fossero venuti allora dal mercato" ([29], p. 103). (I ate in the middle of the ocean hares and thrushes . . . as if they had just come from the market.)
He also lists the advancements of countless others, such as Gamgee, Boillot, Voigt, Schub, Castelhag, Laignel, Malyepyre, etc. The medic's ultimate goal is to provide a documentative discourse that allows the reader to fully understand what methods are available and how to benefit from them.
24 It is in this manner that, seated in a comfortable armchair and surrounded by all the delicacies of European luxury, we can in only one dinner eat steer from the slaughterhouses of Buenos Aires or Australia, salmon fished in Lapland, and lobsters grown in the North American seas.
25 It would be endlessly fruitful for public and private hygiene.
26 [A] tasty, scented, extremely exquisite food . . . excellent chemical wine . . . that would downgrade Chianti. This is an ideal that anticipates the Italian Futurists, who envisioned a world where government-subsidized pills are distributed for nourishment, therefore stripping gastronomy of any bodily necessity, allowing it to become a form of art (see Marinetti's The Futurist Cookbook [34]). Mario Morasso is another author who continues this ideal of a gastropia at the turn of the century. He envisions a Metropolis that offers the bounty of the banquets of imperial tables, and of Lucullo's and Trimalcione's famous dinners, daily in its streets [35]. Describing his ideal he states: "Non si ha un'idea delle frutta perfette, quasi che non la natura ma un artista amoroso le abbia modellate con un soffio, della selvaggina rara, del polame stupendo, dei pesci, dei dolci, dei pasticci, dei vini, delle carni di ogni specie in quantità stragrande, che sempre si possono trovare in qualsiasi di questi ricchi depositi di cibi" ([35], p. 363). (One cannot have an idea of the perfect fruit, almost as if it was not nature but a loving artist that created it with a breath, of the rare game, of the stupendous poultry, of the fish, of the sweets, of the pasticcios, of the wines, of the extravagant quantities of every type of meat, that can always be found in any of these rich food deposits.)
27 Roberto Ardigò, the prominent Italian Positivist, also makes reference to a refined food aesthetic. Although he does not enter into the lofty discourses in which Mantegazza partakes, he does speak of the cook as emblem for the aesthetic fantasy needed for an artist: see ([36], pp. 162-63).
28 . . . its treasures of strength and of healthiness to the poor proletarian, as well as to the most Croesus [wealthy] of kings . . . and it remains above the most complex of sauces.
pedagogic intent of imparting a gastronomic lesson to those who may seem incapable of participating in such endeavors because of social status. It is an effort, as demonstrated by the gourmands and gastronomes who preceded Mantegazza (such as Anthelme Brillat-Savarin), to introduce the middle classes to a world that previously excluded them: that of food as art. Mantegazza, however, goes even farther; it is evident that part of his audience is incapable of relishing such undertakings, since their condition is far from opulent. Yet, the medic is keen on conveying the strategies necessary to optimize available foodstuffs and render them arte culinaria. In addition, he also provides a foodway towards intellectual/artistic production. Mantegazza sustains the importance of gastronomic literacy for mental stimulation; he refers to a category of foodstuffs which are of particular benefit as alimenti nervosi. The idea that consumption directly affects our thought is not new; Ludwig Feuerbach and Jacob Moleschott, for example, advanced similar theories. 29 However, Mantegazza's application is wholly different. For the author, nervine foods "hanno una storia molto simile a quella delle nuove scuole di pittura, della nuova musica, dei nuovi stili architettonici!" ([40], p. 162). 30 They are among the most noble of foodstuffs, allowing man to have a heightened control over his intellect and sensibility. If foods are the steam that moves the locomotive, as Mantegazza indicates, then the nervine foodstuff is the vehicle through which we can govern its movement ([41], pp. 13-14). The author categorizes more than a hundred of these stimulants throughout his works; when balanced and used in moderation, "L'uomo incivilito . . . nel brillante sviluppo della sua intelligenza [consume] in un sol giorno i succhi fermentati delle vigne del Vesuvio, la birra nebbiosa dell'Inghilterra, il cacao dell'America, e il té dell'estrema China" ([29], p. 60).
31 The introduction of such substances into the body and, as a result, into the bloodstream, awakens the intellect, causing new sensorial and cerebral activity that is most advantageous for the mind. 32 The aesthetic nature of gastronomy becomes paramount to Mantegazza's theory of taste ennoblement. Once again, in the Fisiologia del piacere, the author delves into a theoretical discourse that analyzes how the pleasures of taste are brought about, elevating the traditionally mundane act of consumption while rendering food an analog of music. The pleasures of taste are divided into two key elements: harmony and melody, which together forge a paragon that allows the medic to assert the sublimity of gastronomy. It is a sublimity that finds expression in the concatenation of dishes and pairings that encompass a pranzo, which the author describes as "un concerto d'armonia e di melodia del gusto . . . che viene poi portato alla massima perfezione dal genio dell'artista" ([3], p. 65). 33 A meal, for Mantegazza, is to become a plurisensorial event: "Una festa ai piaceri del gusto, ai quali si associano quelli dell'odorato, dell'udito, della vista . . . elevati a un certo grado dalla perfezione dell'arte e dal sentimento del bello" ([22], p. 76).
34 A pranzo, therefore, is not just the mere satisfaction of hunger, but, rather, as indicated in Elementi di Igiene, a feast in which the superior joys of sentiment
29 Jacob Moleschott's Lehre der Nahrungsmittel [37] (The Chemistry of Food) represents the first foray into naturalist dietetics, providing the people with a text that is an amalgam of scientific interests, physiological study, as well as a foundation for a materialist-humanist discourse. As such, the work's intent is to diffuse appropriate modes of consumption while demonstrating the effects of different foods on the body. Lehre is a self-help book in every sense, with the goal to aid in the development of a stronger, more educated populace. For more on the political nature of the text see Gregory's Scientific Materialism in 19th-Century Germany ([38], pp. 35-39). These ideals are then supported and expanded by Ludwig Feuerbach to include food as a means towards political revolution [39].
30 . . . have a very similar history to those of the new schools of painting, of new music, of the new architectural styles!
31 Civilized man, in the brilliant development of his intelligence, [consumes] in only one single day the fermented juices from the vines of the Vesuvius, hazy beer from England, cocoa from America, and tea from far China.
32 Despite the fact that he can live without them, he makes a conscious choice to stimulate his mind through their use instead of living "without enthusiasm," as Mantegazza puts it. Within various pages ([24], pp. 29-48; [29], pp. 59-63; [40], pp. 11-19) Mantegazza delineates the various benefits of moderate use of these foodstuffs: "Le fatiche dell'intelletto sono più presto ristorate da una tazza del caffé, mentre gli alcolici dispongono meglio il lavoro dei muscoli, etc . . ." ([29], p. 62). (The exertions of the intellect are quickly restored by a cup of coffee, while alcoholic beverages put the workings of the muscles in better order . . . )
33 . . . a concert of the harmony and the melody of taste . . . that is brought to maximum perfection by the genius of the artist.
34 A celebration of the pleasures of taste, to which are associated those of smell, hearing, vision . . . elevated to a certain level by the perfection of art and by the sentiment of beauty.
and intellect participate, transforming a simple primordial urge into one of the most fruitful, sociable, and educational sources of merriment ([29], p. 214). The highly intellectualized, sensorial communion proposed corroborates Mantegazza's modernity and anticipates the Futurist paradigm by more than fifty years. 35 Furthermore, the physio-aesthetic narrative constructed by the medic is wholly unique and certainly worthy of praise. It is an ideal that is continued in the Piccolo dizionario della cucina, a work based on Alexandre Dumas's Grand Dictionnaire de cuisine. 36 Here, Mantegazza invokes conviviality as a prerequisite for the realization of this gustatory symphony. Therefore, the communal setting, with its psychological value, becomes the premise of a meal that can be classified as fine art, distinguishing it from an act of mere satiation. The author indicates that the perfect man makes it his goal to attain perfect joy in the convivial setting, by invoking the angel of poetry and the archangel of affection ([28], p. 9). Using a language tinted with clear spiritual connotations, Mantegazza makes of the dinner table a sort of laic altar that can be created from the "desco dell'operario e del contadino, come alla mensa dorata del milionario e del re" ([28], p. 8). 37
The Art of Mantegazza's Science
Mantegazza's goal is to ameliorate the stagnant Italian dietetic circumstances. This leads the author to conceive of texts that have success within the Italian context, not only because they fill a void and help answer questions both of a pragmatic 38 and sumptuous nature, but also because they exude an innate literariness. Due to the various digressions into which the author delves, it is this literary quality that Nicoletta Pireddu points to as protagonist of Fisiologia del piacere ([43], p. 140). 39 This notion can easily be extended to a vast majority of Mantegazza's texts, particularly those utilized within our narrative of taste. In Mantegazza, even the most scientific of material is presented with anecdotes and literary references that make his works palatable for a large portion of his audience: the numerous anecdotes, such as the account of the comical events of a masquerade ball where a host concocts a boisterous ruse ([28], p. 107); or the innumerable aphorisms that lace the pages of his texts, such as "conviene che ogni effetto abbia il suo pane, e che ogni ambizione leggitima abbia il suo vino" ([44], p. 36); 40 or the multitude of literary and philosophical references, such as those to Dante and Parini ([30], p. 33; [24], p. 31).
It is important to frame Mantegazza's success within his contemporary society. According to the author, man "abbraccia quanto può e quanto sa dell'universo che lo circonda, e dice: tutto questo è mio" ([16], p. 13). 41 It is this moral that the author intends to diffuse; advocate of Darwin in
35 Futurism is an artistic and literary avant-garde movement that develops in early 20th-century Italy. It is centered on the violent abolishment of the Italian tradition in favor of new artistic forms that would modernize the new Italian nation. Among these touted art forms is la cucina (cuisine). For a collection of its founder's manifestos and critical writings see Berghaus's F.T. Marinetti: Critical Writings [42].
36 Mantegazza's work is replete with references and citations to various gastronomes and thinkers who have in some form written about food and taste throughout the centuries. It has a heavy reliance on classical and French cultures, aside from his contemporary Italy. Of all the sources indicated, none are cited to the extent that Dumas is, explicitly named on 12 occasions. Throughout the work it is apparent that the Italian relies heavily upon the Grand Dictionnaire as inspiration for his text in structure and in content, seeing in the work a fruitful and supple tool to mimic for the Italian masses. However, it is of interest to note that Dumas publishes in 1882, the same year as Mantegazza's Piccolo dizionario, a condensed version of his work, the Petit dictionnaire de cuisine.
37 . . . the table of the worker and peasant, as well as the gilded dining hall of the millionaire and of the king.
38 The practical aspect of Mantegazza's gastronomic writings is found also in the many domestic topics that he covers. Whether it is the proper mode of maintaining cookware ([24], chapter 6), or how to factor in water when choosing a home ([24], p. 58), or the references to a cuisine for weak stomachs ([29], p. 229), it becomes rather evident that Mantegazza considers even the mundane aspects of domesticity.
39 "L'appassionato uso della letterarietà che permea il testo mantegazziano diviene protagonista incontrastato nelle frequenti digressioni in cui l'autore concede libera espressione alla sue meditazioni estatiche" ([43], p. 140). (The passionate use of literariness that permeates Mantegazza's texts becomes an uncontrasted protagonist in the frequent digressions in which the author concedes free expression to his ecstatic meditations.)
40 It is suitable that every effect has its bread, and that every legitimate ambition has its wine.
Italy, 42 Mantegazza promotes and practices the nascent sciences that biological Darwinism generates, particularly those of anthropology and sociology, in addition to being a medic and a literary writer. This philosophy of embracing all as man's capital is what defines Mantegazza's interests and production; from literary novel to scientific treatise, from manual of hygiene to philosophical text, from almanac to dictionary, the medic's works are as varied as can be found from a single author. And it is because of this eclecticism that the author's production has been defined as schizophrenic, marked by a continuous flux between artistic and scientific production. This eclecticism, which justly denotes the medic's work, is, I believe, the reason for his success. He becomes one of the few figures of the period to engage his audience in an open dialog, making this priest of science an apostle through the streets and the piazzas. 43 With his ability to shift from societal to domestic issues, and from first person to dialogic narratives with ease, the author creates accessible pedagogical texts that are meant for immediate application. However Mantegazza's work may be defined, it is evident that its hybrid artistic/scientific nature, and its intent to educate, articulates a trend that is already present in the century, a trend that will lead to the development of the most widely famed Italian gastronomic text of the 1800s: Pellegrino Artusi's La scienza in cucina e l'arte di mangiar bene.
Notwithstanding the variety of approaches from which Mantegazza postulates his theories, food and taste are analyzed by him from a Positivist standpoint: they are seen as a fundamental instrument of man's progress and advancement, because ultimately "si deve mangiar bene per viver bene" ([24], p. 8). 44 This idea of eating well is particularly salient in a Darwinist epoch given that "La metà dei viventi vive, divorando l'altra metà. I grandi mangiano i mezzani e i mezzani mangiano i piccoli; e i piccolissimi poi, più forti di tutti, mangiano grandi, mezzani, e piccoli . . . A noi non resta che a mangiar bene, con scienza e conscienza, tutto il mangiabile" ([23], p. 37). 45 Therefore, the author preaches throughout much of his hygienic production that the manner and mode of alimentation directly affects mental processes, or rather our way of thinking, acting, and being, appropriating cuisine as an intellectual affair despite the biases confronted, because "L'arte di preparare i cibi non solo li rende più saporiti, ma anche più digeribili e più nutrivi, e la cucina in tutta la perfezione della civilità moderna è altamente igienica" ([29], p. 211). 46 From the poetry of cuisine that can be achieved ([24], p. 29), to the pleasures of sentiment and the intellect satisfied ([22], p. 66), Mantegazza propagates an ideal of culinary art that needs to attain three essential goals: first, it needs to supply a maximum variety of foods and flavors; second, it must facilitate the digestibility of foods without diminishing their nutritive value; finally, it must educate both the sense of taste as well as the sentiment of beauty ([29], p. 211). With these prerequisites, it is clear that the medic's conceptualization of taste consists of both lofty aesthetic ideals and a more pragmatic, Positivist framework of biological necessity and nationalistic renewal. As Armenise indicates, the author partakes in a process of cultural and hygienic literacy ([47], p. 99). If this is the case, then it is evident that the divulgation of a gastronomic ideal of regeneration, moderation, and beauty is fundamental. It is with this in mind that Mantegazza reminds his reader, "Studiate a fondo la vostra cucina, occupandovi assai di ciò che mangiate e del come mangiate," adding, with the zeal that characterizes much of his prose, "Non vergognatevi mai di essere saviamente golosi!" ([29], p. 214). 47 Encapsulating the author's role as didactic gastronome is a quote that helps define Mantegazza's witty and direct approach; as he has done on various occasions, he ennobles simple foods, in this case
42 See Landucci's Darwinismo a Firenze [45].
43 From Igiene del sangue (1868) in Marciano's Alfabeto ed Educazione ([46], p. 84).
44 One needs to eat well to live well.
45 Half of the living live devouring the other half. The big eat the medium sized and the medium sized eat the small; the very small then, stronger than them all, eat the big, the medium sized, and the small . . . The only thing left for us is to eat well, with science and awareness, everything that is edible.
46 The art of preparing foods not only renders them tastier, but also more digestible and more nutritious, and cuisine in all the perfection of modern civilization is highly hygienic.
47 Profoundly study your cuisine, be very involved in what you eat and how you eat it . . . Do not ever be ashamed to be wisely gluttonous.
squash, by playing with the bisemic meaning of zucca: 48 "Anche le zucche più vuote di questo mondo poi possono elevarsi a pretesa di aristocrazia culinaria, quando si faccia loro un ripieno." 49 Leaving the author to question, "Non è forse tutta quanta la pedagogia un'arte di mettere un buon ripieno nelle zucche umane?" ([28], p. 119). 50
Truly an Art for All?
It is evident that there was an attempt to construct an Italian cultural unity by preaching an ennobled national taste, but to what extent was this endeavor successful? We must first define the parameters within which we will be judging success. If we look at book sales, then we can almost certainly say that the message is well received, at least among the Italian middle classes of the late 19th century. Mantegazza was immensely popular, and, therefore, his writings certainly did reach a literate audience. However, it remains to be said that his success can be considered relative.
In Igiene dei sensi (Hygiene of the Senses), Mantegazza lauds a philosophy immersed in a sensorial acquisition of knowledge: "Tutto quello che sappiamo, tutto quel che possiamo, tutto quel che facciamo ci viene dall'umile scaturigine dei sensi" ([30], p. 15). 51 Furthermore, the author encourages the banishment of the traditional sensory hierarchy, suggesting: "Tutti i sensi riuniti si aiutano, si correggono l'un l'altro e a guisa dei molti tentacoli d'un polipo, ci permettono di mettere in intimo contatto della natura delle cose i nostri organi nervosi centrali, così avidi di sentire e di imparare" ([30], p. 15). 52 It is precisely in this work that Mantegazza promotes a sensory communion no longer defined exclusively by intellectual faculties. Knowledge is acquired through all the senses and therefore the Platonic/Kantian aesthetic tradition is eradicated. No longer are the intelligent senses (i.e., vision and hearing) held in a position of prominence, since they function in harmony with the lower senses (i.e., smell, touch, and taste). However, it is noteworthy that it is within this very text, where the author continues to promulgate these lofty aesthetic ideals, that he is forced to defend himself: I miei almanacchi, perché popolari, son creduti da alcuni critici destinati soltanto al contadino o all'operario e son quindi accusato di occuparmi dei ricchi e son quindi maltrattato, perché insegno al popolo precetti igienici, che possono sembrare una crudele ironia per chi non abbia molti quattrini in tasca. Ma c'è dunque bisogno ancora di ripetere per la millesima volta a questi aristarchi, che il popolo non è fatto di soli operai e di soli contadini, ma è composto di tutti noi; e che quando si scrive un libro popolare, convien farsi un'idea empirica e media di un popolo medio, a cui non appartengono né i dottissimi, né gli analfabeti? Ma vi è dunque bisogno di ripetere fino alla noia, che un libro popolare ha per natura propria il difetto congenito di riuscire troppo elevato per gli uni, troppo volgare per gli altri? Perché sia utile, basta che si attagli alla statura media dei cervelli umani e che tutti, l'altissimo come il piccolissimo, vi possan beccare qualche granello di cibo . . . ([30], pp. 26-27). 53
48 Zucca (squash), colloquially, is used as a metaphor for the human head. The Dizionario generale de' sinonimi italiani [48] indicates that "secondo la Crusca, nel proprio, è una pianta erbacea, che produce pampani e frutti maggiori di qualsivoglia altra pianta, presentando sovente una forma simile alla testa degli animali; e fu talvolta impiegata, per similtudine, per Testa" ([48], p. 372). (According to la Crusca, it is an herbaceous plant that produces vine leaves and more fruits than any other plant, often similar to the shape of animals' heads; and was sometimes employed, as a simile, for a head.)
49 Even the emptiest squash of this world can then be elevated to the pretense of aristocratic cuisine, when a stuffing is made for them.
50 Is not all pedagogy an art of putting a good stuffing in the human squash (head)?
51 All that we know, all that we can, all that we do, comes from the humble wellspring of the senses.
52 All the senses united help each other, they correct one another, and in the manner of the many tentacles of an octopus, they allow the nature of things to be put in intimate contact with our central nervous organs, so avid to feel and learn.
53 My almanacs, because popular, are believed by some critics to be intended for only the peasant or the worker, and I am then accused of taking care of the rich. I am therefore mistreated, because I teach the populace hygienic precepts, which can seem a cruel irony for those who do not have many pennies in their pocket. But is it therefore necessary to repeat for the thousandth time to these Aristarchs that the people are not made up of only workers and peasants, but we are all the people? When one writes a popular book, it is appropriate to have a practical understanding of what is the middle-class, to which neither the scholarly nor the illiterate belong. It is then necessary to
This juxtaposition between the extremely modern premise of an aesthetic reordering of the senses and Mantegazza's defense sheds light on the author's difficulties in an epoch where hunger is much more prevalent than opulence. Paolo Sorcinelli, the Italian social historian, refers to a malessere alimentare (nutritional malaise) that becomes a defining factor of the post-Risorgimento period ([49], p. 52). Statistics support his argument. A summary of historical data compiled in 1968 by ISTAT (The National Institute for Statistics) [50] reveals details of a diet in which the art of food is seemingly all but unattainable. In the first years of unification (1861-1870), the Italian diet is made predominantly of grains (nearly 55%) and produce (42%). Meat consumption constitutes a small fraction of the diet, comparatively (a little over 3%).
54 Government-sponsored surveys also paint a clear picture. Perhaps the most famed example was the large-scale inquest initiated by Senator Stefano Jacini, which produces 15 volumes of studies and recommendations (1878-1883). From Jacini's inquiry (which is considered the most complete analysis of agricultural Italy) we learn much about the rural population's diet and hygiene, 55 as he concluded: "Lo stato generale non è soddisfacente; l'aria è cattiva, cattiva l'alimentazione e il vestito, le abitazioni poco salubri" ([52], p. 56). 56 It is evident that with large portions of the nation poorly nourished, parts of Mantegazza's message of a pedagogic gourmandism are unattainable for many new Italians. The author's ideal of regenerating the nation by educating its citizens about food and cuisine is problematic, and the medic himself addresses this notion in Elementi di igiene (Elements of Hygiene) by stating: So pur troppo che per molti e molti il pranzo si reduce a polenta, a sola minestra condita col lardo o a patate; ma che potrebbe contro queste miserie un libro d'igiene? Tutt'al più consigliare che nella minestra si mettano più fagiuoli, più ceci, più piselli che riso; che si preferisca il pane di segale a quello di frumentone. L'igiene del povero è questione di economia politica. ([29], p. 228). 57 Mantegazza insists that his texts are to be of aid even to the poor, despite the reality in which they find themselves. However, they cannot solve the issues of poverty that plague the new nation. His goal is simply to diffuse the knowledge that would guide even the poorest towards better nourishment, but he realizes that his message is limited. It cannot put more pennies in the pockets or food in the mouths of its citizens.
The intended audience of his gastro-hygienic ideals, however, remains largely the middle classes (as stated by the author), which also can be considered problematic. There is no question that Mantegazza's works sell well; nevertheless, we must say that even this is relative. The reality is that
repeat until tedium, that a popular book has by its very nature the congenital defect of being too elevated for some and too vulgar for others? For it to be useful, it suffices that it suits the average human intellect and that all, from the highest to the smallest, can take away some crumbs of food.
54 Cereali (grains), divided into wheat, maize, rice, rye, and barley, equaled 205.5 kilograms per half year. Prodotti ortofrutticoli (produce), divided into potatoes, dried legumes, fresh legumes, vegetables, fresh fruit, citrus, and dried fruit, were consumed at a rate of 156.6 kilograms. Carni (meats), divided into bovine, pork and goat, and other, were consumed at a rate of 14.8 kilograms ([50], p. 136).
55 For example: "il cibo del contadino è, se non scarso affatto, per lo meno, poco sostanzioso . . . La carne di bue o di vacca raramente si mangia, I polli non servono che le grandi occasioni; . . . le uova; il latte; il formaggio scadente. Ma fondamento dell'alimentazione sono i vegetali: patate, cavoli, fagiuoli, fave, olio delle qualità più inferiori per condire, . . . prevalente a tutte le altre prese insieme, è la farina di granturco mangiata sotto forma di pane o di polenta" ([51], pp. 204-5). (The food of the peasants is, if not absolutely inadequate, at the very least insubstantial . . . Rarely is beef (ox and cow) eaten. Chickens are only served on special occasions . . . eggs, milk, and cheese are of poor quality. The foundation of their diet is vegetables: potatoes, cabbages, beans, favas, oil of the worst quality to dress . . . more prevalent than all these put together is cornmeal eaten as bread or polenta.)
56 The general state is not satisfactory; the air is bad, poor is their nutrition and their clothing, their homes unhealthy.
57 I unfortunately know that for many a meal is reduced to polenta, a lone soup dressed with lard, or potatoes; but what could a book of hygiene do to combat this misery? At most it can advise that more beans, chickpeas, or peas than rice are put in a soup; that one should prefer rye bread to that of corn. The hygiene of the poor is a problem of the political economy.
the bourgeoisie only encompasses 6.7% of the population during this period. 58 Additionally, 78% of the nation is illiterate at the time of unification (with certain regions seeing rates as high as 91%). 59
Conclusions
With a blend of science and art, Mantegazza promotes an Italianness that is ahead of its time. However, Italy in many ways is not ready for his modernity. While his rational gluttony and his guidelines for better nourishment are novel and sound, the message could not feasibly reach enough of the population to have an immediate widespread effect. The task of nourishing the malnourished is far too great to be solved by food literature. Moreover, I believe that Mantegazza's ideal of an aestheticized cuisine, of a dinner that reaches perfection when it demonstrates a symphonic harmony, is beyond the reach of too many during his period. Yet, the formation of an Italian taste, for the author, is also the formation of the sense of what it means to eat in conviviality, and to eat as well as possible. Whether the gastrosophic theories forwarded directly enter into the vocabulary of the masses is almost insignificant, because these theories draw attention to the art of food in a wholly unique way. As a result, we see the trickle-down of certain paragons from the middle classes: as poor as a dish may be, even from the table of the peasant or the worker, it is shared with the utmost dignity and respect for what it represents and from where it comes. What the medic ultimately transmits is an art that everyone can afford in any epoch: the sentimento del bello (sentiment of beauty), as Mantegazza reminds us, is what makes a meal truly artful ([22], p. 76).
I contend that taste does ultimately find prominence in Italian culture, and although it cannot be said that intellectuals in the 20th century espouse an eradicated sensorial hierarchy (with exceptions being the Futurists and authors such as Mario Soldati), it can however be said that laymen do. If we were to construct a philosophy of our everyday experiences, as Nicola Perullo suggests [55], I believe we can trace taste as a driving factor of the Italian quotidian. This is precisely what Mantegazza preaches: an everyday world where taste reigns, and where a trip to the market "[far] crescere la salute e il buon umore" ([29], p. 229). 60 The attention and care put into daily food rituals has been a defining characteristic of so many generations of Italians, who see in their foods a means of sustenance, yet so much more. Here gastrosophers such as Mantegazza are fundamental. He is at least in part reflective of this mentality that makes food central to daily life, transforming the quotidian into a philosophy through his Positivist ideals, while solidifying the dinner table as a laic altar for all sects of society ([28], p. 8). His initial limited influence notwithstanding, Mantegazza as philosopher of food exemplifies what Carlo Petrini says: he who sows utopia ultimately may reap reality [56].
Identification of a 7-microRNA signature in plasma as promising biomarker for nasopharyngeal carcinoma detection
Abstract Background Circulating microRNAs (miRNAs) have become reliable sources of non-invasive biomarkers for cancer diagnosis. Identification of promising miRNA biomarkers in plasma might greatly benefit the detection of nasopharyngeal carcinoma (NPC). Methods The Exiqon miRNA qPCR panel was used in the screening stage to identify candidate miRNAs, which were further verified by quantitative reverse transcription polymerase chain reaction (qRT-PCR) in the following three stages among plasma samples from 200 NPC patients and 189 healthy donors (as normal controls [NCs]). The identified miRNAs were further explored in tissue specimens (48 NPC vs 32 NCs) and plasma exosomes (32 NPC vs 32 NCs). Survival analyses were ultimately conducted by Cox regression models and Kaplan-Meier curves using log-rank tests. Results We identified a 7-miRNA signature including let-7b-5p, miR-140-3p, miR-144-3p, miR-17-5p, miR-20a-5p, miR-20b-5p, and miR-205-5p in plasma for NPC diagnosis after four-stage validation. The areas under the receiver operating characteristic curve (AUCs) for the signature were 0.879, 0.884, 0.921, and 0.807 for the training, testing, external validation stage, and the combined three stages, respectively. In NPC tissues, miR-144-3p, miR-17-5p, miR-20a-5p, and miR-205-5p were consistently up-regulated while let-7b-5p and miR-140-3p were significantly down-regulated compared to NCs. However, none of the seven identified miRNAs were dysregulated in plasma-derived exosomes in NPC patients. As to survival analysis, none of the seven miRNAs appeared to be associated with NPC prognosis. Conclusion We identified a 7-miRNA signature in plasma as a promising set of non-invasive biomarkers for NPC detection.
| INTRODUCTION
Nasopharyngeal carcinoma (NPC) is an uncommon cancer derived from the epithelium of the nasopharynx. 1 Although NPC is rare, accounting for only about 0.6% of all diagnosed cancers worldwide, it has a relatively high incidence rate in some specific ethnic populations and regions, such as the east and southeast parts of Asia and other less developed areas. 2 During the past decades, the incidence and mortality rates of NPC have decreased gradually due to effective screening and treatment strategies including radiotherapy, combined chemotherapy, and the emerging immunotherapy (such as adoptive T-cell transfer and immune checkpoint inhibitors). [3][4][5] However, it still poses a heavy health burden in China that calls for enhanced control and prevention. 6 NPC, especially the undifferentiated form (WHO type III) which represents almost 80% of all cases, has shown a consistent association with Epstein-Barr virus (EBV) infection. [7][8][9] Various studies have confirmed the diagnostic value of circulating cancer-derived EBV DNA as a biomarker for NPC early detection. 10,11 Antibodies in response to different EBV antigenic elements can be released into body fluids (serum and saliva) and also form potential screening biomarkers for NPC diagnosis. 12,13 Although these markers may help confirm highly suspected NPC patients, their ability to screen asymptomatic patients is limited by unstable sensitivity and specificity. 14 Therefore, more research is still needed to discover novel non-invasive biomarkers to identify NPC patients.
MicroRNAs (miRNAs) are families of small non-coding RNAs of about 19-25 nucleotides in length which function in post-transcriptional regulation by targeting mRNA and mediating mRNA degradation. 15 The importance of miRNAs in cancer biology has been underlined by many studies with accumulating evidence in recent years. 16 Dysregulated miRNAs that exist stably in tumor tissues and the peripheral blood circulation are reliable sources of diagnostic or prognostic markers for various cancer types. 17 For NPC, miRNA expression profiling in tumor tissues has been assessed by a number of studies, but systematic analyses of plasma miRNAs are still inadequate and inconsistent. [18][19][20] In this study, to identify potential biomarkers for NPC diagnosis, we conducted miRNA profiling in plasma with four-stage validation by quantitative reverse transcription polymerase chain reaction (qRT-PCR). MiRNA expression patterns in tissue specimens and plasma-derived exosome samples were also analyzed for further exploration.

| Sample collection
A whole venous blood sample was drawn from each participant before any clinical intervention or treatment such as radiotherapy or surgery. Samples were initially collected in ethylenediaminetetraacetic acid (EDTA)-containing tubes (Becton, Dickinson and Company) followed by a two-step centrifugation (350 RCF [relative centrifugal force] for 10 minutes and 20 000 RCF for 10 minutes [Beckman Coulter]) to isolate cell-free plasma within 12 hours. The obtained plasma samples were then stored in RNase-free tubes at −80°C for future analysis. In all, we collected 200 plasma samples from NPC patients and 189 plasma samples from healthy donors that served as normal controls (NCs). An additional 48 frozen tumor tissue specimens from NPC patients undergoing surgery and 32 paraffin-embedded nasal mucosa tissue specimens from healthy donors were also collected and kept in liquid nitrogen for further exploration.
| Exosomes isolation
We used ExoQuick Exosome Precipitation Solution (System Biosciences, Mountain View, Calif) to isolate exosomes from plasma samples. Following manufacturer's protocols, exosomes pellets were precipitated from the mixture of 400 μL plasma and 100 μL ExoQuick exosome precipitation solution and lysed into 200 μL of RNase-free water for future RNA extraction. A total of 64 plasma-derived exosomes samples (32 NPC vs 32 NCs) were collected.
| Study design
The study was designed in four stages, the screening, training, testing, and external validation stages, to identify potential miRNA biomarkers for NPC diagnosis. The flow chart of the experiment design is given in Figure 1. Several factors were taken into account in sample selection and distribution across the four stages: (a) the purpose of each separate stage; (b) the sequence of sample collection in practical operation; (c) the balance of non-experimental factors among the four separate sets, such as gender, age, TNM stage, and pathological type; and (d) the basic principles of experimental design: control, randomization, replication, and balance. In the initial screening stage, the miRNA profiling platform Exiqon miRCURY-Ready-to-Use PCR-Human-panel-I+II-V1.M (Exiqon miRNA qPCR panel, Vedbaek, Denmark; 168 miRNAs) was applied for the selection of candidate miRNAs. We constructed 2 NPC pools and 1 NC pool, each pooled from 10 plasma samples, and tried to identify differentially expressed miRNAs between the NPC and NC pools using the Exiqon miRNA qPCR panel on the 7900HT real-time PCR system (Applied Biosystems). The process was in accordance with the previous study. 21 Candidate miRNAs were then subjected to the following multi-stage validation process and analyzed by qRT-PCR among 200 NPC and 189 NC plasma samples (30 NPC vs 30 NCs for the training stage, 140 NPC vs 130 NCs for the testing stage, and 30 NPC vs 29 NCs for the external validation stage). Analysis in the training, testing, and external validation stages was conducted in an orderly and independent manner. The three independent stages were designed for preliminary analysis of candidate miRNAs, accurate verification among larger cohorts, and further verification of the established model, respectively. A large sample size in the testing phase was of the greatest importance, since it was the most critical step in determining the final models.
At each stage, the number of control samples was matched to that of tumor samples as closely as possible. In addition, factors such as gender, age, TNM stage, and pathological type were distributed as evenly as possible across the four sets to avoid selection bias.
In addition, expression levels of the identified miRNAs in tissue specimens (48 NPC vs 32 NCs) and plasma-derived exosomes samples (32 NPC vs 32 NCs) were also analyzed by qRT-PCR for further exploration.
| RNA extraction
We extracted total RNA using the mirVana PARIS Kit (Ambion) for 200 μL plasma and exosome samples and Trizol (Invitrogen, Carlsbad, CA, USA) for tissue specimens, following the manufacturers' instructions. The acquired total RNA was eluted in 100 μL of RNase-free water and kept at −80°C until analysis. An ultraviolet spectrophotometer was applied to evaluate the concentration and purity of the total RNA samples. Samples with a total RNA concentration below 10 ng/μL were excluded from data analysis. During the process, an additional 5 μL of synthetic Caenorhabditis elegans miR-39 (5 nM; RiboBio, Guangzhou, China) was added to each sample after the denaturing solution (Ambion) for sample-to-sample normalization.
| Quantitative reverse transcription polymerase chain reaction (qRT-PCR)
MiRNAs were amplified using the Bulge-Loop™ miRNA qRT-PCR Primer Set (RiboBio) with specific primers for reverse transcription (RT) and polymerase chain reaction (PCR). According to the previous study, RT and PCR were performed on the 7900HT real-time PCR system (Applied Biosystems) under the following conditions: 42°C for 60 minutes followed by 70°C for 10 minutes (RT), and 95°C for 20 seconds followed by 40 cycles of 95°C for 10 seconds, 60°C for 20 seconds, and 70°C for 10 seconds (PCR). 22 SYBR Green (SYBR Premix Ex Taq II, TaKaRa) was used to quantify PCR products by fluorescence level, and melting analysis was introduced to evaluate the specificity of the PCR products. As described previously, miRNA expression levels were determined using the 2^(−ΔΔCt) method with cel-miR-39 and RNU6B (U6, for tissue samples) as references.
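The relative quantification above can be illustrated with a small arithmetic sketch. This is not the authors' analysis code; the Ct values below are hypothetical, and the reference stands in for the spike-in cel-miR-39 (or U6 in tissue).

```python
def relative_expression(ct_target_case, ct_ref_case, ct_target_control, ct_ref_control):
    """Fold change by the 2^(-ddCt) method.

    Each sample's target Ct is first normalized to its reference
    (delta-Ct), then the case is compared to the control (delta-delta-Ct).
    Lower Ct means more abundant miRNA, hence the negative exponent.
    """
    d_ct_case = ct_target_case - ct_ref_case
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_case - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical values: relative to the reference, the miRNA reaches
# threshold 2 cycles earlier in the NPC sample than in the control,
# i.e. a 4-fold up-regulation.
fold = relative_expression(26.0, 20.0, 28.0, 20.0)  # -> 4.0
```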
| Statistical analysis
The Mann-Whitney U test was used to assess the difference in miRNA expression in plasma, exosomes, and tissue specimens between the NPC and NC groups. One-way ANOVA or the χ2 test was applied to analyze the demographic and clinical characteristics of participants along with their association with miRNA expression patterns. Binary logistic regression analysis was conducted to combine the identified miRNAs into a comprehensive panel. A logistic model was built based on the relative expression data generated from all 200 NPC patients and 189 NCs: Logit(P) = ln(P/(1 − P)), equivalently P = 1/(1 + e^(−Logit(P))), where P is the probability of correctly identifying a disease case. The predicted probability of being diagnosed as NPC was used to fit receiver operating characteristic (ROC) curves. The area under the ROC curve (AUC) was calculated to estimate the diagnostic performance of individual miRNAs and the constructed panel. The corresponding prognostic value was evaluated by the overall survival (OS) rate. Cox regression models were applied to assess factors related to the OS, and Kaplan-Meier curves with log-rank tests were used to estimate the association between the identified miRNAs and NPC prognosis. SPSS 22.0 software (SPSS Inc) and GraphPad Prism 7 (GraphPad Software) were applied for statistical analysis and graph construction. A two-sided P-value <.05 was considered to be of statistical significance.
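The AUC used throughout the validation stages has a direct interpretation: it is the probability that a randomly chosen case receives a higher predicted score than a randomly chosen control, i.e. the Mann-Whitney U statistic rescaled. The sketch below computes it that way from hypothetical scores; it is an illustration, not the authors' SPSS/GraphPad workflow.

```python
def roc_auc(case_scores, control_scores):
    """AUC as the rescaled Mann-Whitney U statistic: the fraction of
    (case, control) pairs in which the case scores higher (ties count 1/2)."""
    wins = 0.0
    for c in case_scores:
        for n in control_scores:
            if c > n:
                wins += 1.0
            elif c == n:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Hypothetical predicted probabilities for 3 cases and 3 controls:
# 8 of the 9 case-control pairs are ordered correctly.
auc = roc_auc([0.9, 0.8, 0.4], [0.3, 0.5, 0.2])
```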
| Description of study subjects
A total of 200 NPC patients and 189 NCs, divided into three independent parts (the training, testing, and external validation stages), were enrolled in this study for the comparison of miRNA expression levels in plasma. Their characteristics are presented in Table 1 and the flow chart of the experiment design is shown in Figure 1. No significant difference in gender or age distribution was observed between the case and control groups (P > .05).
| Discovery of candidate miRNAs in the screening stage
In the initial screening stage, the Exiqon miRCURY-Ready-to-Use PCR-Human-panel-I+II-V1.M, a miRNA profiling platform, was employed for the selection of candidate miRNAs in plasma among 2 NPC pools and 1 NC pool. A total of 168 miRNAs with relatively high expression abundance in plasma/serum were analyzed twice on 384-well plates by qRT-PCR. MiRNAs satisfying all three of the following criteria were determined to be candidate miRNAs: (a) cycle threshold (Ct) value <37; (b) Ct value 5 lower than that of the negative control (No Template Control, NTC); (c) expression level altered >1.5-fold or <0.67-fold in any NPC pool compared to the NC pool. 23 As a result, 31 plasma miRNAs (25 up-regulated and 6 down-regulated) were found to be differentially expressed between the NPC pools and the NC pool and were submitted to further validation in the following three stages (Table S1).
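The three screening criteria can be expressed as a simple filter. A sketch under one assumption: "Ct value 5 lower than the NTC" is read as at least 5 cycles below the no-template control; the numeric thresholds follow the text, the example readings are hypothetical.

```python
def is_candidate(ct, ct_ntc, fold_change):
    """Candidate-miRNA filter from the screening stage:
    (a) Ct < 37;
    (b) Ct at least 5 cycles below the no-template control (assumed reading);
    (c) fold change > 1.5 or < 0.67 in an NPC pool versus the NC pool.
    """
    detectable = ct < 37
    above_background = (ct_ntc - ct) >= 5
    dysregulated = fold_change > 1.5 or fold_change < 0.67
    return detectable and above_background and dysregulated

# Hypothetical panel readings:
assert is_candidate(30.0, 40.0, 2.0)      # detectable, above background, up-regulated
assert not is_candidate(38.0, 45.0, 2.0)  # Ct too high
assert not is_candidate(30.0, 33.0, 2.0)  # too close to the NTC background
assert not is_candidate(30.0, 40.0, 1.2)  # fold change within [0.67, 1.5]
```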
| Correlation between miRNA expression levels and clinicopathologic features
Since NPC is highly correlated with EBV infection status, we further explored miRNA expression differences in plasma among 189 NCs and EBV-positive or EBV-negative (confirmed by EBV-DNA test) NPC patients from the 200 cases. As shown in Figure 3, all seven miRNAs but miR-144-3p were significantly up-regulated in EBV-positive NPC patients compared to NCs; miR-144-3p was conversely down-regulated (P < .05). In parallel, significant up-regulation of the seven plasma miRNAs was also observed in EBV-negative NPC patients compared to NCs (P < .05). In addition, when we compared miRNA expression in plasma between EBV-positive and EBV-negative NPC patients, we found that the expression level of let-7b-5p was significantly higher in the former group, while miR-140-3p and miR-20a-5p were much lower than in the latter group (P < .05).
Besides EBV infection status, TNM stage and lymph node metastasis status of NPC were also taken into consideration. No significant difference was observed in miRNA expression levels for any of the seven miRNAs in plasma between early-stage (Stage I or II) NPC patients and advanced (Stage III or IV) NPC patients (P > .05, Figure S1). The same held for the comparison between NPC patients with or without (N0/N1 vs N2/N3) distant lymph node metastasis (P > .05, Figure S2; Figure S3).

Table 2. Expression levels of the identified 7 miRNAs in the three independent stages (presented as mean ± SD; ΔCt, relative to cel-miR-39).
|
In an attempt to enhance the diagnostic efficacy for NPC patients, we combined the identified miRNAs into a comprehensive panel using binary logistic regression analysis. ROC curve analyses of the resulting logistic regression model yielded AUCs of 0.879, 0.884, and 0.921 (Figure 4) for the training, testing, and external validation phases, respectively. When the data from the three phases were combined, the AUC was 0.807, with sensitivity and specificity of 0.735 and 0.757 when 0.42 was set as the cutoff value (Table S2; Figure S4). Similarly, the diagnostic performance in identifying EBV-negative and EBV-positive NPC patients from NCs remained good. The corresponding AUCs were 0.827 (95% CI: 0.778-0.876) for EBV-negative patients and 0.823 (95% CI: 0.768-0.877) for EBV-positive patients (Figure S5).
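The panel's operating point can be reproduced mechanically: the model's Logit(P) is converted to a probability and thresholded at the chosen cutoff (0.42 for the combined stages). A sketch with hypothetical predicted probabilities, not the study data.

```python
import math

def predicted_probability(logit):
    # P = 1 / (1 + e^(-Logit(P))): the model's predicted probability of NPC.
    return 1.0 / (1.0 + math.exp(-logit))

def sensitivity_specificity(case_probs, control_probs, cutoff=0.42):
    """Sensitivity = fraction of cases called positive at the cutoff;
    specificity = fraction of controls called negative."""
    tp = sum(1 for p in case_probs if p >= cutoff)
    tn = sum(1 for p in control_probs if p < cutoff)
    return tp / len(case_probs), tn / len(control_probs)

# A logit of 0 maps to probability 0.5, just above the 0.42 cutoff.
sens, spec = sensitivity_specificity([0.9, 0.5, 0.3], [0.1, 0.5, 0.2])
```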
| Prognostic value of the miRNA signature for NPC
Cox regression and Kaplan-Meier curve analyses were conducted to estimate the association between several clinical factors and the overall survival (OS) rate. As shown in Table S3, in univariate Cox regression analysis, distant lymph node metastasis had a significant association with worse OS for NPC patients (P < .05). However, none of the seven identified miRNAs showed a close correlation with NPC prognosis (P > .05; Figure S6).
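For readers unfamiliar with the Kaplan-Meier estimator used here: at each observed death time it multiplies the running survival probability by the fraction of at-risk patients surviving that time, while censored patients leave the risk set without contributing an event. A minimal sketch with made-up follow-up data, not the study cohort.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of S(t).

    times: follow-up times; events: 1 = death observed, 0 = censored.
    Returns (event_time, survival_probability) pairs at each death time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        leaving = sum(1 for tt, _ in data if tt == t)  # deaths + censored at t
        if deaths:
            survival *= 1 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= leaving
        i += leaving
    return curve

# Made-up data: deaths at t = 1, 2, 3; one patient censored at t = 2.
km = kaplan_meier([1, 2, 2, 3], [1, 1, 0, 1])
```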
| miRNA expression in tissues
Expression levels of the identified seven miRNAs were also analyzed among 48 NPC tissue specimens and 32 nasal mucosa tissue specimens from healthy donors on the basis of qRT-PCR. As shown in Figure 5, let-7b-5p and miR-140-3p were significantly down-regulated in NPC tumor tissues, while miR-144-3p, miR-17-5p, miR-20a-5p, and miR-205-5p were significantly up-regulated in NPC tissues compared to normal tissues (P < .05). No significant difference was observed for miR-20b-5p expression.
| miRNA expression in plasma exosomes
For better understanding of the potential existing form of the identified plasma miRNAs, we further explored miRNA expression patterns in 32 NPC vs 32 NCs plasma-derived exosomes samples by qRT-PCR. None of the seven miRNAs showed expression difference with statistical significance in plasma exosomes (P > .05, Figure S7).
| Bioinformatics analysis of identified miRNAs
DIANA-miRPath v3.0 analysis of miRNA target genes and correlated-pathways based on DIANA-TarBase v7.0 database was conducted to decipher the potential function of each identified miRNA. According to Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis, these miRNAs were involved in several tumor-related pathways such as p53 signaling pathway, viral carcinogenesis, and FoxO signaling pathway. Gene Ontology (GO) category analysis identified several biological processes associated with these miRNAs (such as ion binding, cell death, cell cycle, and immune system process). The heatmaps and tables of miRNA target analysis are presented in Figure 6 and Table S4.
| DISCUSSION
NPC, although its mortality rate has gradually declined due to tremendous advances in screening methods and treatment strategies, still poses a great threat to residents of some specific regions or ethnic groups. 24 NPC patients in their early stage with non-metastatic disease usually have a good response to local radiotherapy (especially intensity-modulated radiotherapy [IMRT]). 25 However, a number of NPC cases are asymptomatic until the disease develops to an advanced stage, which contributes to their poor prognosis. 26 Since EBV infection is consistently correlated with NPC, quantification of cell-free EBV DNA in plasma as well as detection of EBV-based antibodies can serve to monitor the occurrence, development, prognostication, or treatment outcomes of the disease. [27][28][29] Nevertheless, further research is needed to confirm their clinical use and to discover novel non-invasive biomarkers for disease surveillance. MiRNAs are closely implicated in tumor-promoting or -suppressing activities and have been proved to be potential biomarkers for various cancers. 16 For NPC, miRNA expression has been explored at different scopes by numerous studies, which often concentrated on intracellular miRNAs in tumor tissues or EBV-originated viral miRNAs. 30,31 However, few efforts have been made to decipher the miRNA expression traits in the plasma of NPC patients or to identify novel circulating miRNAs capable of NPC screening.
In this study, we conducted a comprehensive four-stage investigation among 389 plasma samples from 200 NPC patients and 189 NCs on the basis of qRT-PCR. In the initial screening stage, 31 differentially expressed miRNAs (25 up-regulated and 6 down-regulated) were screened out by the Exiqon miRNA qPCR panel and transferred to further validation in the following three stages (training, testing, and external validation stages) by qRT-PCR. Ultimately, seven miRNAs (let-7b-5p, miR-140-3p, miR-144-3p, miR-17-5p, miR-20a-5p, miR-20b-5p, and miR-205-5p) in plasma showed a consistent trend of up-regulation in NPC patients compared to NCs. We combined the seven miRNAs together and constructed a 7-miRNA panel to strengthen the diagnostic capability of the identified miRNA signature. ROC curve analyses were conducted, and the corresponding AUCs for the panel in the three independent stages were as high as 0.879, 0.884, and 0.921, respectively. When the three stages were combined, the AUC was 0.807, which showed credible diagnostic value for the panel to discriminate NPC patients from NCs.
Up to now, several studies have also focused on the discovery of circulating miRNAs as biomarkers for NPC diagnosis. For example, Xiong Liu et al observed increased levels of plasma miR-16, 21, 24, and 155 in NPC patients within candidate miRNA lists from other literature, 32 while Xiao-Hui Zheng et al reported significant up-regulation of plasma miR-548q and miR-483-5p based on array analysis. 33 These results lacked consistency with each other and also had limited overlap with our findings, probably because of different initial screening methods, varied subject sizes, or sample handling means. In this study, we employed the Exiqon miRNA qPCR panel to perform miRNA profiling in 2 NPC plasma pools and 1 NC pool for the preliminary selection of candidate miRNAs. Compared to other array-based platforms such as TaqMan assays, this qRT-PCR-based platform could have better sensitivity and linearity given the relatively low miRNA abundance in plasma samples, which to some extent ensured the reliability and comprehensiveness of our candidate miRNAs. 34 For further validation, we enrolled a considerable number of study subjects, with all the NPC patients being untreated when blood samples were taken, which could minimize the influence of any treatment factors and thus reveal the true expression patterns of plasma miRNAs in NPC patients.
For better understanding of the underlying roles of these identified miRNAs in tumor activities, we further explored miRNA expression levels in 48 NPC tissue samples vs 32 normal tissues. As a result, miR-144-3p, miR-17-5p, miR-20a-5p, and miR-205-5p were up-regulated in NPC tumor tissues, as in plasma, compared to normal tissues; the up-regulated miRNAs in plasma might therefore originate from tumor cells. 35 let-7b-5p and miR-140-3p showed the opposite tendency of down-regulation. Such discrepancies in miRNA expression patterns between blood samples and tissue specimens have been similarly observed in a number of previous studies. 36,37 Circulating miRNAs can have totally different expression traits from their intracellular counterparts. Active or passive transport of miRNAs between tumor cells, tumor-adjacent normal cells, and the tumor micro-environment may be one possible explanation. 38,39 Moreover, we suspect that miRNA expression changes in tissue reflect only local changes, whereas miRNA expression in the blood circulation might be the epitome of systemic disease status. However, the exact mechanism remains unclear and still requires further investigation.
Although potential biomarkers of circulating miRNAs have been identified for NPC diagnosis, exploration of their function in NPC carcinogenesis and progression is still in its infancy. miR-17-5p and miR-20a-5p are members of the miR-17-92 cluster, and their oncogenic functions, such as cell cycle regulation, have been confirmed with numerous evidence in various cancers. 40,41 In NPC, overexpressed miR-17-5p might promote tumor occurrence and proliferation via down-regulating the expression of p21 protein (a cell cycle inhibitor), 42 and according to Zhao et al, the significant up-regulation of miR-20a-5p could promote resistance to radiotherapy for NPC patients via targeting the gene of neuronal PAS domain protein 2 (NPAS2) and regulating the Notch signaling pathways involved in cell proliferation, differentiation, and apoptosis. 43 For miR-144-3p and miR-205-5p, their tumor-promoting roles in NPC have also been revealed by several studies. miR-144-3p is located in 17q11.2, a region often amplified in NPC patients. 44 In NPC, miR-144-3p was discovered to promote tumor migration and invasion by down-regulating the tumor suppressor gene phosphatase and tensin homolog (PTEN) and activating the PI3K/Akt pathway. 45 miR-205-5p has complex roles of oncogenicity and anti-oncogenicity in different cancers, but in NPC, according to Nie G et al, it functioned as a tumor-promoter via targeting tumor protein p53-inducible nuclear protein 1. 46 miR-17-5p could also modulate radio-resistance of NPC through targeting PTEN. 47 According to a previous study, the let-7 family could prevent tumor cell proliferation by suppressing c-Myc expression in NPC, and dysregulation of let-7 might be associated with the early formation of NPC. 48 Evidence is still limited, but our findings might provide some hints for future investigation.
To supplement the diagnostic application of the identified signature, miRNA expression levels were further evaluated among NPC patients with different clinical characteristics. When EBV-positive and EBV-negative patients were analyzed separately, we observed the opposite trait of down-regulation for plasma miR-144-3p in EBV-positive patients in comparison with NCs. Interestingly, EBV-positive and EBV-negative NPC patients could exhibit different miRNA expression patterns in plasma. Among the seven miRNAs, expression levels of miR-140-3p and miR-20a-5p were lowered while let-7b-5p was increased with the presence of EBV infection in NPC cases. The results of this primary exploration suggested that active EBV infection might alter miRNA expression patterns in plasma for NPC patients. Several previous studies revealed that EBV infection could influence the expression levels of certain miRNAs (eg, miR-146a and miR-155) to promote NPC development. 49 However, none of these studies had focused on the crosstalk between circulating miRNA expression and EBV infection status in NPC. In this study, such a phenomenon observed in different subgroups was still preliminary but might give a hint to future investigation. Besides EBV infection status, miRNA expression levels in plasma were also evaluated between patients in advanced stage (Stage III or IV) and early stage (Stage I or II), as well as patients with and without distant lymph node metastasis. Although no significant difference was observed, the 7-miRNA panel could still discriminate each group of fine-classified patients from NCs. In addition, Cox regression analysis and Kaplan-Meier curves were performed for survival analysis. It seemed that none of the seven identified miRNAs could actually predict the clinical prognosis of NPC patients. The results indirectly indicated that most NPC patients respond well to timely treatment, which apparently prolongs patients' OS in clinical practice.
Besides plasma and tissue samples, miRNA expression levels were further analyzed in plasma-derived exosomes from 32 NPC patients and 32 NCs. Exosomes, among the smallest extracellular vesicles secreted by many cell types, have been shown to carry a variety of molecules including miRNAs. 50 Exosomal miRNAs can form potential biomarkers for various cancers and may help improve comprehension of some unexplained cancer behaviors. However, in our study, no significant difference was observed for any of the seven up-regulated plasma miRNAs in plasma-derived exosomes between NPC patients and NCs. It is notable that, besides exosomes, the majority of circulating miRNAs are loaded onto the fundamental extracellular carrier, the Argonaute2 (Ago2) protein. According to Arroyo JD et al, five miRNAs in our study (let-7b-5p, miR-140-3p, miR-144-3p, miR-20a-5p, and miR-20b-5p; the other two not well detected) were not encapsulated in exosomes but independently co-purified with the Ago2 ribonucleoprotein complex in plasma. 51 This could be one credible explanation of the discrepant miRNA expression traits between plasma and exosomes.
Taken together, we identified a 7-miRNA signature in plasma for NPC detection. Although there is still a long way to go before actual clinical use, considering its convenience and low health impact, the miRNA panel could be combined with some traditional strategies to assist disease screening and benefit clinical outcomes of NPC patients in the near future.
"year": 2019,
"sha1": "9a4de085fb4f817053a8be01a737232a59f69339",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/cam4.2676",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "67c88257ad7ee403419d1cf751a481d7444f9b93",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
8946509 | pes2o/s2orc | v3-fos-license | The ubiquitin-proteasome pathway in cancer.
Degradation by the 26S proteasome of specific proteins that have been targeted by the ubiquitin pathway is the major intracellular non-lysosomal proteolytic mechanism and is involved in a broad range of processes, such as cell cycle progression, antigen presentation and control of gene expression. Recent work, reviewed here, has shown that this pathway is often the target of cancer-related deregulation and can underlie processes, such as oncogenic transformation, tumour progression, escape from immune surveillance and drug resistance.
bound to a ubiquitin-protein ligase (E3). The first ubiquitin molecule is usually bound to the substrate by an isopeptide bond between the C-terminal glycine of ubiquitin and an ε-NH2 group of a lysine residue of the substrate. The polyubiquitin chain is formed in multiple cycles of this reaction by addition of another ubiquitin molecule to the lysine at position 48 of the previously conjugated ubiquitin. Release of ubiquitin from the isopeptide linkage with the lysine residue is performed by isopeptidases called ubiquitin C-terminal hydrolases (UCH). Their function is probably important not only in recycling ubiquitin monomers after substrate degradation but also in the recovery of poorly or incorrectly ubiquitinated proteins (Shaeffer and Cohen, 1996).
Polyubiquitinated proteins are substrates for the 26S proteasome. This consists of three large multi-subunit complexes, namely a 700-kDa 20S proteasome core particle and two 19S cap structures, also called PA700 (for proteasome activator of 700 kDa) (reviewed in Peters, 1994). The 20S particle has the structure of a hollow cylinder composed of four rings of seven related subunits and containing a central channel with three cavities (Lowe et al, 1995; Groll et al, 1997). The inner rings are formed of β-subunits, which carry the proteolytically active sites on the inner surface. The outer rings contain α-subunits, which lack proteolytic activity and are thought to control the access to the central cavity. The isolated 20S particle has very limited activity in vitro compared with the 26S proteasome, which is formed by the 20S proteasome with the addition of two 19S/PA700 substructures in opposite orientations, one at each end (Peters et al, 1993), as revealed by electron microscopy (Figure 1). The 19S regulatory complex consists of at least 15 subunits, which can be classified into ATPases and non-ATPases (Dubiel et al, 1995a), and is thought to act in recognition, unfolding and translocation of the substrates into the 20S proteasome for proteolysis (Rubin and Finley, 1995). The composition and the function of the regulatory complex are not yet fully characterized, and recent data have shown, for example, that the regulatory complex also contains an isopeptidase capable of deubiquitinating substrates (Lam et al, 1997).
Because of the broad involvement of ubiquitin-proteasome proteolysis in fundamental biochemical processes, this pathway is a potential target for cancer-related deregulation, and alterations of proteasome function have indeed been described in events such as cellular transformation by oncogenic viruses (Scheffner et al, 1990; Ciechanover et al, 1994) and immune escape (Restifo et al, 1993; Sibille et al, 1995; Rotem-Yehudar et al, 1996; Seliger et al, 1996). Furthermore, alterations of proteasome activity in tumour samples have been reported recently to confer, in colon and possibly breast cancer, a phenotype of clinical aggressiveness associated with poor prognosis (Catzavelos et al, 1997; Loda et al, 1997; Porter et al, 1997). Finally, mutations of proteasome subunits have been found to result in a multidrug resistance phenotype in fission yeast (Gordon et al, 1993, 1996), and we have recently shown that this pathway of multidrug resistance is conserved in mammalian cells (Spataro et al, 1997). Here, we therefore review the rapidly increasing body of information on the role of proteolysis by the ubiquitin/proteasome pathway in various fields of cancer biology.

Figure 1 The ubiquitin-proteasome pathway (substrates labelled in the graphic include p53, p27 and class I antigens). The 26S proteasome is a multiprotein complex that acts as a multicatalytic protease degrading proteins that have been targeted by the ubiquitin pathway. Proteins are ubiquitinated in a cascade reaction involving three classes of ubiquitinating enzymes called E1, E2 and E3 and can be deubiquitinated by isopeptidases. The 20S proteasome consists of a stack of four rings of seven subunits. The inner rings made of β-subunits display the catalytic sites on the inner surface. At each end, the 20S proteasome can be capped by a regulatory complex called 19S or PA700, which contains ATPases and is probably involved in recognition, unfolding and translocation of the substrate into the 20S proteasome (Rubin and Finley, 1995).
p53 AND HPV-RELATED MALIGNANCIES
The product of the tumour-suppressor gene p53 is an unstable nuclear protein with a half-life of 20-35 min in normal cells. After cellular stress or DNA damage, p53 is stabilized, leading to growth arrest or apoptosis. The rise in p53 protein level is detectable almost immediately after DNA damage, and the absolute level of p53 protein and the duration of the response depend on the nature of the damage (reviewed in Cox and Lane, 1995). This accumulation of p53 is thought to occur mainly via the down-regulation of its degradation by the ubiquitin-proteasome pathway (Harris, 1996; Maki et al, 1996). Although, at present, this supposition has not been experimentally confirmed, it is supported by experimental data in a cell line containing a thermolabile E1 ubiquitin-activating enzyme, in which p53 accumulates at the non-permissive temperature; this accumulation is prevented by introduction of the wild-type E1 gene (Chowdary et al, 1994). Interestingly, recent evidence suggests that p53 degradation is stimulated by the product of the p53-activated MDM2 gene (Haupt et al, 1997), providing the basis for a mechanism by which the activation of p53 could be self-limiting. Further research on regulation of p53 by proteolysis is clearly warranted because alterations in this pathway can be functionally equivalent to p53 inactivation. This is well exemplified in the case of human papilloma virus (HPV)-related cancers. The oncogenicity of the human papilloma virus, which is involved in the aetiology of the majority of human anogenital carcinomas, is mediated by up-regulation of p53 degradation by the ubiquitin-proteasome pathway. The E6 oncoprotein encoded by high-risk HPV (e.g. HPV-16, -18, -5 and -8) binds to p53 and promotes its degradation by the proteasome (Scheffner et al, 1990), a property that is critical for immortalization of human cells by HPV. In contrast, low-risk HPV (e.g. 
HPV-6 and -11) encode an E6 protein that does not bind to p53 and does not promote its degradation. The formation of the E6-p53 complex requires a cellular E6-binding protein called E6-AP (E6-associated protein) (Scheffner et al, 1993), which forms thiol ester complexes with ubiquitin in the presence of enzymes of the E2 category, such as UBC4 or E2-F1 (Ciechanover et al, 1994). E6-AP acts as an E3 enzyme, which ubiquitinates p53, leading to its rapid degradation by the 26S proteasome.
p27 AS A PROGNOSTIC FACTOR
Progression through the cell cycle is promoted by oscillation in the activity of cyclin-dependent kinases (CDK), and proteolysis by the ubiquitin-proteasome pathway regulates CDK activity by degrading CDK activators and inhibitors. Furthermore, proteolysis by the proteasome is crucial during mitosis in triggering the transition from metaphase to anaphase (reviewed in King, 1996). Among the substrates for proteolysis in the cell cycle machinery, clinically important data are emerging with regard to the CDK inhibitor p27. p27 inhibits a wide variety of cyclin-CDK complexes in vitro and its activity is up-regulated by cytokines, such as TGF-β, and by cell-cell contact, linking extracellular signals to the cell cycle (Polyak, 1994; Slingerland, 1994). Loss of contact inhibition and of response to TGF-β in transformed cells may imply an alteration of function of p27 during oncogenesis, even though p27 mutations in human tumours are extremely rare (Hunter and Pines, 1994; Morosetti et al, 1995; Ferrando, 1996). Unlike p21, which is also a member of the family of cip/kip CDK inhibitors acting in G1 and appears to be regulated principally at the transcriptional level, p27 is critically regulated post-translationally by proteolysis by the ubiquitin-proteasome pathway (Hengst and Reed, 1996). Recently, it has been found that low p27 protein levels in common tumours, such as colorectal carcinomas and breast cancer, are associated with a poor prognosis (Catzavelos et al, 1997; Loda et al, 1997; Porter et al, 1997). In both tumour types (Catzavelos et al, 1997; Loda et al, 1997), comparison of immunohistochemical analysis and in situ hybridization showed a discordance between p27 mRNA and protein levels, suggesting that, also in tumours, p27 levels could be regulated post-translationally. Moreover, in one of the studies, it was clearly shown that increased proteasome-dependent degradation was responsible for low p27 levels in tumour samples of colorectal carcinomas.
Total cellular extracts from frozen tumour samples were tested for p27 proteasome-mediated degradation using recombinant p27 as a substrate, and a very good correlation was found between low levels of p27 and increased proteasome activity. Degradation was abolished by proteasome depletion and resumed after proteasome readdition (Loda et al, 1997). Down-regulation of p27 by the proteasome was found in tumours regardless of clinical stage. In breast cancer, Catzavelos et al (1997) showed that increased p27 proteolysis can be an early event in tumorigenesis, as suggested by analysis of high-grade ductal carcinoma in situ (DCIS) or, alternatively, can occur upon progression, as shown by reduced p27 levels in axillary lymph node metastasis compared with primary tumours assessed simultaneously. Even taking into account the caveats associated with retrospective studies on prognostic factors, these three recent studies (Catzavelos et al, 1997; Loda et al, 1997; Porter et al, 1997) conclude that p27 protein level (and its proteasome-dependent degradation, which was shown to be inversely related) is a powerful independent prognostic factor of survival in both tumour types and show clearly that deregulation of gene products involved in clinical tumour progression can occur via alterations of ubiquitin-proteasome proteolysis. Furthermore, they show that this is not a rare event, given that the unfavourable phenotype of decreased p27 levels (defined as immunostaining in < 50% of the cells or as a score of staining of 0-1 on a 0 to 6 scale) involves the majority of the studied population for both colorectal cancer and breast cancer (Catzavelos et al, 1997; Loda et al, 1997; Porter et al, 1997). Thus the frequency of the phenotype of decreased p27 and its distribution, which is independent of most other prognostic factors, make p27 levels a very promising new prognostic factor to be evaluated further.
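The "decreased p27" cut-offs quoted above amount to a simple two-criterion classification. As a minimal illustrative sketch (the function name and interface are hypothetical, not from the cited studies' protocols), a sample is flagged when immunostaining covers fewer than 50% of cells or when the staining score is 0-1 on the 0-6 scale:

```python
def low_p27(percent_positive=None, staining_score=None):
    """Flag a sample as 'decreased p27' per the cut-offs quoted in the text:
    immunostaining in < 50% of cells, or a staining score of 0-1 on a 0-6 scale.
    (Illustrative helper only; not a published scoring algorithm.)"""
    if percent_positive is not None and percent_positive < 50:
        return True
    if staining_score is not None and staining_score <= 1:
        return True
    return False
```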
Loda et al (1997) have shown that, perhaps unexpectedly, p27 degradation activity is not correlated with degradation by the proteasome of other substrates, such as p21 and cyclin A, which underscores that the substrate specificity of the ubiquitin-proteasome pathway is highly regulated (Hochstrasser, 1995). Identification of the element(s) responsible for targeting p27 to the ubiquitin-proteasome pathway would, of course, be extremely important for unravelling this novel pathway associated with tumour progression. Like p27, other elements of the cell cycle machinery that are substrates of ubiquitin-proteasome degradation are potential targets for deregulation in tumours. One of the best characterized transitions in the normal cell cycle is the rapid proteasome-mediated degradation of cyclin B at the exit from mitosis (Glotzer et al, 1991), and recent evidence shows that continuing rapid proteolysis accounts for the low levels of cyclin B until the onset of S phase (Amon et al, 1994; Brandeis, 1996). Cyclin B has been found to be overexpressed in a set of breast cancer cell lines (Keyomarsi and Pardee, 1993), and it would be interesting to assess whether or not decreased proteolysis by the proteasome is involved in its overexpression. Similarly, cyclin E has been found to be overexpressed in breast cancer cell lines and in surgical specimens of breast tumours (Keyomarsi et al, 1994, 1995), and cyclin D1 is frequently overexpressed in many common tumour types (Betticher, 1996). Recent evidence suggests that cyclins D1 and E are substrates of the ubiquitin-proteasome pathway (Clurman et al, 1996; Diehl et al, 1997), and decreases in their degradation could contribute to the overexpression of these cyclins in tumours.
ANTIGEN PRESENTATION
The 26S proteasome is responsible for the processing of MHC-restricted class I antigens. Peptides derived from endogenously expressed cytoplasmic proteins are carried by MHC class I molecules from the endoplasmic reticulum to the surface for recognition by cytotoxic T lymphocytes. The proteasome was postulated to be the proteolytic system that degrades cytosolic proteins when it was found that the genes encoding subunits LMP-2 and LMP-7 of the proteasome complex were included in the MHC gene cluster (see for example Beck et al, 1992). Experiments performed in a mutant cell with a thermolabile E1-ubiquitinating enzyme (Michalek et al, 1993) and with proteasome inhibitors (Rock et al, 1994; Cerundolo et al, 1997) have subsequently demonstrated that the proteasome is necessary for class I-restricted antigen presentation. This is confirmed by the analysis of mice lacking LMP-7, which have decreased surface expression of MHC class I molecules and present antigens inefficiently (Fehling et al, 1994). It has also been shown that 3 of the 28 subunits composing the 20S catalytic core, namely subunits X, Y and Z, are interchangeable with the alternative subunits LMP2, LMP7 and LMP10 respectively (Belich et al, 1994; Fruh et al, 1994; Groettrup et al, 1996; Hisamatsu et al, 1996; Nandi et al, 1996) upon induction by interferon-γ. These substitutions result in an enhancement of peptidase activity, a change in the quality of generated peptides (Gaczynska et al, 1996; Kuckelkorn et al, 1995) and eventually in a more efficient antigen presentation. Interferon-γ also induces the binding to the 20S catalytic core of the proteasome of a complex called 11S regulator or PA28, which may further increase the spectrum of peptides generated. There is strong evidence that MHC class I-restricted peptide presentation is modified in tumours and may contribute to escape from immune surveillance.
Alterations of ubiquitin-proteasome degradation have been reported among other alterations in this pathway. Three different small-cell lung carcinoma lines with low to undetectable levels of mRNA for LMP2 and LMP7 and functional deficiencies in antigen presentation have been described (Restifo et al, 1993). The mouse T-cell lymphoma line SP-3 displays underexpression of LMP-2 and is defective for antigen presentation, whereas LMP-2 expression and antigen presentation to cytotoxic T lymphocytes are restored upon expression of interferon-γ by transfection (Sibille et al, 1995). Similar studies on tumour samples are rare. An analysis of expression of both LMP-2 and LMP-7 proteasome subunits together with other elements of the antigen presentation machinery has been carried out on a primary renal cancer and a lymph node metastasis of the same patient and compared with normal kidney. Deficiencies at all levels, including the expression of the LMP-2 and LMP-7 proteasome subunits, were associated with transformation and progression. Interferon-α and, in particular, interferon-γ could partly suppress these defects (Seliger et al, 1996). The potential importance of subunits LMP-2 and LMP-7 for MHC class I-restricted antigen presentation is also underscored by the fact that they are specifically down-regulated after viral transformation in vitro by oncogenic viruses (Rotem Yehudar et al, 1996).
REGULATION OF TRANSCRIPTION FACTORS BY PROTEOLYSIS
Increasing evidence shows that the proteasome also participates in events that control gene transcription. Several transcriptional regulators, including nuclear factor-kappa B (NF-κB), p53 (see above), c-JUN, sterol-regulated element-binding proteins and MATα2, have recently been shown to be regulated by proteolysis, either for the activation or the inactivation of gene expression (for a review see Pahl and Baeuerle, 1996).
NF-κB is involved in the activation of genes encoding products such as cytokines, chemokines, growth factors, cell-adhesion molecules and surface receptors in response to a great variety of pathogenic signals and therefore has a central role in mediating the immune/inflammatory responses. NF-κB has been reported to be activated by the cytotoxic agents TNF-α, daunorubicin, etoposide, ionizing radiation or oxidative stress but not by the protein kinase C inhibitor staurosporine (Wang et al, 1996). The activation of NF-κB requires two steps of proteasome-dependent proteolysis. Active NF-κB is a nuclear heterodimer consisting of two subunits called p50 and p65. Ubiquitin-proteasome proteolysis is involved first in the biogenesis of the subunit p50 from the precursor p105 and then in the cytoplasmic degradation of the inhibitory factor IκB, which allows the translocation of the active dimer into the nucleus (Palombella et al, 1994). Recently published data attribute an anti-apoptotic role to NF-κB in response to some cytotoxic agents (Beg and Baltimore, 1996; Van Antwerp et al, 1996; Wang et al, 1996). In one case, TNF-α was more toxic for immortalized embryonic cells of NF-κB knock-out mice than for controls (Beg and Baltimore, 1996), and in other experiments expression of the super-repressor IκB-α (inhibiting NF-κB activation) moderately increased the sensitivity to TNF-α, daunorubicin and ionizing radiation (Wang et al, 1996). Consistent with this, the proteasome inhibitor MG132 (preventing NF-κB activation) strongly enhanced, in a dose-dependent fashion, the killing of HT1080V cells by TNF-α. The proto-oncogene products c-JUN and c-FOS constitute the transcription factor AP-1 (for activator protein 1) either as heterodimers or as c-JUN homodimers and are well-known substrates for ubiquitin-proteasome degradation (Treier et al, 1994; Jariel Encontre et al, 1995; Tsurumi et al, 1995a; Hermida Matsumoto et al, 1996; Musti et al, 1997).
The degradation of c-JUN is dependent on a segment of 27 amino acids called the delta domain, which is necessary for both ubiquitination and degradation. The delta region, and hence this mechanism of downregulation, is lost in v-JUN, the transforming retroviral counterpart of c-JUN, and this increased stability very likely contributes to its oncogenicity (Treier et al, 1994). Moreover, not only has it been convincingly shown that c-JUN is degraded by this pathway, but recent data suggest that ubiquitin-proteasome-mediated proteolysis of c-JUN could play an essential role in regulation of activity of AP-1 factors (Musti et al, 1997). There is a high degree of regulation of c-JUN proteolysis, with the presence of c-FOS and dimerization itself influencing the ubiquitination and the degradation activity (Tsurumi et al, 1995a; Hermida Matsumoto et al, 1996). Like NF-κB, AP-1 factors are important in the cellular response to oxidative stress (Schreiber et al, 1995; Pinkus et al, 1996) and are involved in the induction of a variety of genes encoding important enzymes in glutathione-related detoxification pathways, such as the isozymes α, π and γ of glutathione-S-transferase and γ-glutamyl-cysteine synthetase. Up-regulation of AP-1 activity has been associated with drug resistance in several instances, such as in a multidrug-resistant derivative of MCF7 cells obtained after vincristine selection (Moffat et al, 1994), in etoposide-resistant human leukaemia cell lines (Ritke et al, 1994) and in cisplatin-resistant ovarian cancer lines (Yao et al, 1995). Given the relevance of proteolysis for c-JUN regulation, this acquires particular importance in the light of recent data discussed in the following section that link the proteasome, AP-1 factors and multidrug resistance (Spataro et al, 1997).
DRUG RESISTANCE
We recently identified a novel component of the 26S proteasome that indicates a link between ubiquitin-dependent proteolysis and drug resistance. Overexpression of the fission yeast Pad1 protein confers multidrug resistance to unrelated compounds, such as staurosporine, caffeine and leptomycin B, through the activation of the yeast transcription factor Pap1, a homologue of human AP-1 (Shimanuki et al, 1995). Because studies in yeast may help to identify important novel mechanisms in mammalian cells, we set out to examine the role of a Pad1 human homologue. We have cloned the human homologue of Pad1 (named POH1 for Pad One Homologue) and have shown by transfection experiments that its overexpression in mammalian cells can confer multidrug resistance to 7-hydroxystaurosporine, paclitaxel, doxorubicin and to ultraviolet radiation. Interestingly, the amino acid sequence of POH1 displayed a significant similarity to the subunit S12/p40 of the 26S proteasome (Dubiel et al, 1995b; Tsurumi et al, 1995b), and the pattern of mRNA tissue expression was very similar to that previously described for other subunits of the 26S proteasome (Tsurumi et al, 1995b). We demonstrated that POH1 is in fact a novel subunit of the 26S proteasome, as it co-purifies with proteasome immunoprecipitates and with full 26S proteasomes obtained by biochemical fractionation (Spataro et al, 1997). POH1 also has a significant sequence similarity with JAB1, which has been shown to interact with c-JUN and to activate AP-1 transcription factors (Claret et al, 1996). Various independent data, namely the dependence of the pad1 multidrug resistance phenotype in fission yeast on the activation of an AP-1-like factor, the sequence similarity between POH1 and JAB1 and the importance of proteasome degradation for c-JUN regulation, support a model whereby overexpression of the novel proteasome subunit POH1 could up-regulate AP-1 factors, resulting ultimately in drug resistance.
Our data show that POH1 overexpression does not activate P-glycoprotein expression and does not alter intracellular accumulation of doxorubicin. Nevertheless, it is not clear at this stage if the survival advantage conferred by POH1 overexpression reflects a decreased propensity for cell death or an alteration in the processing of potentially lethal damage. POH1 is widely expressed in human tumour cell lines and work in progress is assessing its contribution to tumour drug resistance. Interestingly, in recent years, two other subunits of the 19S regulatory complex of the proteasome, called Mts2 and Mts3, have been identified in fission yeast through a screen for mutants resistant to the mitotic spindle poison carbendazim (MBC) (Gordon et al, 1993, 1996). Thus, the 26S proteasome plays an important role in determining multidrug resistance in fission yeast. This pathway is highly conserved in mammals, can confer drug resistance to anti-cancer agents in vitro and could potentially be involved in drug resistance in human tumours. A human homologue of another fission yeast gene called Crm1, which is involved, like Pad1/POH1, in Pap1/AP-1-dependent multidrug resistance (Toda et al, 1992; Kumada et al, 1996), has been recently cloned (Fornerod et al, 1997). Interestingly, its protein product interacts with the DEK-CAN fusion protein of AML with the chromosomal translocation t(6;9), which is associated with poor prognosis (Lillington et al, 1993). It is possible that proteasome/AP-1-mediated drug resistance contributes to the dismal prognosis of this uncommon subset of acute myeloid leukaemia (AML).
OTHER AREAS OF CANCER BIOLOGY
Among other areas of cancer biology in which involvement of the ubiquitin-26S proteasome pathway may be relevant, growth factor receptors and their signalling pathways should not be overlooked. Several cell-surface receptors have been shown to be ubiquitinated, suggesting that proteasome-mediated proteolysis could be involved in their turnover (for a list see Ciechanover, 1994). Involvement of proteasomes in the degradation of cell surface receptors might have an increasing relevance in cancer chemotherapy, as new agents that modulate growth factors and their signalling pathways are developed. For example, there is strong evidence for an involvement of the ubiquitin-proteasome pathway in the degradation of tyrosine kinase receptors, such as insulin-like growth factor receptors and epidermal growth factor receptors. Of interest, it has been shown recently that herbimycin A, which targets tyrosine-kinase-activated signal transduction by inhibiting multiple tyrosine protein kinases and has in vitro and in vivo anti-tumour activity, acts through an enhancement of receptor degradation by the proteasome (Sepp Lorenzino et al, 1995). Similar data have also been found with regard to the partly agonist protein kinase C (PKC) inhibitor bryostatin 1 (Philip and Harris, 1995), which after transient activation down-regulates PKC through the promotion of its degradation by the proteasome (Lee et al, 1996). Proteasome inhibitors have been shown to counteract the effects of herbimycin A in vitro (Sepp Lorenzino et al, 1995), and it is conceivable that modulation of proteasome function might influence the anti-tumour activity of these new classes of drugs.
Other cell surface receptors that are potential targets for proteasome degradation are the T-cell antigen receptor (TCR) and the platelet-derived growth factor (PDGF) receptor. One T-cell receptor subunit is ubiquitinated on its cytoplasmic domain when the receptor is occupied (Hou et al, 1994), but data are lacking on possible effects on its function. The PDGF receptor-β also undergoes polyubiquitination as a consequence of ligand binding, and recent data suggest that the proteasome is responsible for the degradation of the ligand-activated receptor (Mori et al, 1995). DNA repair is another important area in which the ubiquitin-proteasome pathway is potentially involved. The first data supporting this notion came from the budding yeast S. cerevisiae, in which the rad6 DNA repair mutant is defective in the ubiquitin-conjugating enzyme (E2) UBC2 and, intriguingly, the DNA repair gene RAD23 encodes a protein containing a ubiquitin-like domain, which is essential to its function (Watkins et al, 1993) and is conserved in the human homologue HHR23B (Masutani et al, 1994). More recently, experiments performed on a ts mutant from the mouse mammary carcinoma line FM3A, which contains a thermosensitive ubiquitin-activating enzyme (E1), have shown that E1 mutants incubated at the restrictive temperature after UV exposure display a decrease in clonogenic survival and defects in an assay measuring DNA repair by the appearance of UV-induced mutations (Ikehata et al, 1997). These data support a contribution of ubiquitin conjugation to DNA repair in mammalian cells. However, it remains to be seen if there is a true contribution to DNA repair of the entire pathway of ubiquitin-proteasome-mediated proteolysis or if, alternatively, ubiquitin-binding proteins, such as E1 or E2 enzymes, may have a direct influence on DNA repair by physically interacting with DNA repair proteins carrying ubiquitin-like domains, such as RAD23/HHR23B.
Another area where intriguing data await further elucidation is the potential role of deubiquitinating enzymes in oncogenic transformation, as the yeast DOA4 isopeptidase is related to the product of the human Tre-2, which has been found to be tumorigenic when expressed at high levels (Papa and Hochstrasser, 1993); in addition, the human homologue of the murine ubiquitin-releasing enzyme unp has been found to be overexpressed in lung cancer cell lines (Gray et al, 1995).
Recently, ubiquitin-proteasome-mediated proteolysis has also been found to have an important role in apoptosis of nerve growth factor-deprived neurons (Sadoul et al, 1996), and it will be important to investigate proteasome involvement in apoptosis induced by anti-cancer drugs. Finally, it has recently been shown that expression of heat shock protein 70 (hsp70), which is involved in stress response and might have a role in drug resistance (Ciocca et al, 1992), is induced up to 30-fold by a proteasome inhibitor, unlike other members of the hsp family (Zhou et al, 1996).
DRUGS ACTING ON THE PROTEASOME
Pharmacological intervention to modulate one or several proteasome functions could be therapeutically advantageous. There is considerable interest in this possibility in the field of immunology, in which the intent is to target activation by the proteasome of NF-κB, which has a key role in mediating the inflammatory and immune response. The best known proteasome inhibitor is lactacystin, a Streptomyces metabolite discovered on the basis of its ability to induce neurite outgrowth in the Neuro 2A mouse neuroblastoma cell line (Fenteany et al, 1994). This inhibitor was subsequently shown to covalently modify a critical threonine residue of the subunit X/MB1 of the proteasome core (Fenteany, 1995). Lactacystin was found to inhibit cell cycle progression in human osteosarcoma cells (Fenteany et al, 1994) and to induce apoptosis in human monoblast cells (Imajoh Ohmi et al, 1995). However, we are not aware of any data on the anti-tumour activity of lactacystin. Interestingly, the clinically used anti-tumour drug aclacinomycin A or aclarubicin, known as a DNA-intercalative agent, has been shown to inhibit the degradation of ubiquitinated protein by selectively inhibiting the chymotrypsin-like activity of the proteasome (Figueiredo Pereira et al, 1996). It is not clear whether this could contribute to the anti-tumour activity of this drug. Apart from lactacystin, most of the proteasome inhibitors developed so far are synthetic protease inhibitors of the family of peptidyl aldehydes (Rock et al, 1994). Some of them, such as N-acetyl-leucinyl-leucinyl-norleucinal (ALLN) and benzyloxycarbonyl (Z)-leucinyl-leucinyl-leucinal (ZLLL), are cell penetrating, display proteasome specificity and have been reported to induce apoptosis in human tumour cell lines (Fujita et al, 1996; Shinohara et al, 1996). Because of the broad involvement of proteasomes in normal cellular physiology, any attempt to target the proteasome non-specifically might be associated with prohibitive in vivo toxicity.
However, the complexity and specificity of proteasome regulation indicate that specific inhibitors of individual proteasome-mediated processes might ultimately become available. Moreover, the rapidly expanding knowledge about the role of proteasomes in normal and tumour cells could provide in the future a rational basis for the use of proteasome-targeting drugs.
CONCLUSIONS
The ubiquitin-proteasome pathway clearly represents an important area of research in cancer biology, although it has previously been relatively neglected. Basic research has provided in recent years an increasing body of information on the extent of the involvement of this pathway in critical cellular processes, such as cell cycle progression and regulation of gene expression. To date, research has found that deregulation of this pathway in cancer can be responsible for crucial phenomena, such as oncogenic transformation in HPV-related malignancies, poor prognosis in colorectal and breast carcinoma, and that it is clearly involved in modulating response to anti-cancer drugs. Understanding the complexity of the ubiquitin-proteasome pathway, and in particular how the specificity for a given substrate is regulated, should allow us in the future to translate this knowledge into new therapeutic strategies.
REFERENCES
Fenteany G (1995) Inhibition of proteasome activities and subunit-specific amino-terminal threonine modification by lactacystin. Science 268: 726-731
Fenteany G, Standaert RF, Reichard GA, Corey EJ and Schreiber SL (1994) A beta-lactone related to lactacystin induces neurite outgrowth in a neuroblastoma cell line and inhibits cell cycle progression in an osteosarcoma cell line. Proc Natl Acad Sci USA 91: 3358-3362
Ferrando A (1996) Mutational analysis of the human cyclin dependent kinase inhibitor p27/kip1 in primary breast carcinomas. Hum Genet 97: 91-94
Figueiredo Pereira ME, Chen WE, Li J and Johdo O (1996) The antitumor drug aclacinomycin A, which inhibits the degradation of ubiquitinated proteins, shows selectivity for the chymotrypsin-like activity of the bovine pituitary 20S proteasome. J Biol Chem 271: 16455-16459
Fornerod M, Van Deursen J, Van Baal S, Reynolds A, Davis D, Murti KG, Fransen J and Grosveld G (1997) The human homologue of yeast CRM1 is in a dynamic subcomplex with CAN/Nup214 and a novel nuclear pore component Nup88. EMBO J 16: 807-816
Fruh K, Gossen M, Wang K, Bujard H, Peterson PA and Yang Y (1994) Displacement of housekeeping proteasome subunits by MHC-encoded LMPs: a newly discovered mechanism for modulating the multicatalytic proteinase
"year": 1998,
"sha1": "ccc50effe421972f34e34259d043dc618635b063",
"oa_license": null,
"oa_url": "https://www.nature.com/articles/bjc199871.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "ccc50effe421972f34e34259d043dc618635b063",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Stability-Indicating HPLC Method for Simultaneous Determination of Chloramphenicol, Dexamethasone Sodium Phosphate and Tetrahydrozoline Hydrochloride in Ophthalmic Solution
Purpose: A simple stability-indicating RP-HPLC assay method was developed and validated for quantitative determination of Chloramphenicol, Dexamethasone Sodium Phosphate and Tetrahydrozoline Hydrochloride in ophthalmic solution in the presence of 2-amino-1-(4-nitrophenyl)propane-1,3-diol, a degradation product of Chloramphenicol, and Dexamethasone, a degradation product of Dexamethasone Sodium Phosphate. Methods: Effective chromatographic separation was achieved using C18 column (250 mm, 4.6 mm i.d., 5 μm) with isocratic mobile phase consisting of acetonitrile - phosphate buffer (pH 4.0; 0.05 M) (30:70, v/v) at a flow rate of 1 mL/minute. The column temperature was maintained at 40°C and the detection wavelength was 230 nm. Results: The proposed HPLC procedure was statistically validated according to the ICH guideline, and was proved to be stability-indicating by resolution of the APIs from their forced degradation products. Conclusion: The developed method is suitable for the routine analysis as well as stability studies.
Introduction
Chloramphenicol (CAP), Figure 1A, is a bacteriostatic antibiotic.1 Dexamethasone Sodium Phosphate (DSP), Figure 1B, is an inorganic ester of dexamethasone that suppresses the inflammatory response to a variety of agents.2 Tetrahydrozoline Hydrochloride (THC), Figure 1C, is an imidazoline-derivative sympathomimetic amine, which provides temporary relief of conjunctival congestion, itching, and minor irritation.3 An ophthalmic solution containing CAP 0.5%, DSP 0.1%, and THC 0.025% is available in the market. It is indicated for keratitis, acute and chronic infectious conjunctivitis, inflammation of the anterior uvea, scleritis, and sympathetic ophthalmia.4 2-Amino-1-(4-nitrophenyl)propane-1,3-diol (AMPD), Figure 1D, is the hydrolysis product of Chloramphenicol.5 The British Pharmacopoeia states that it should be less than 8%, with respect to Chloramphenicol, in the ophthalmic solution.6 Dexamethasone (DEX), Figure 1E, is the hydrolytic derivative of Dexamethasone Sodium Phosphate.7 The allowable maximum limit for Dexamethasone in the solution for injection is 0.5%, with respect to Dexamethasone Sodium Phosphate.6 The literature survey revealed that few methods determined simultaneously Chloramphenicol and Dexamethasone Sodium Phosphate8,9 in the presence of Dexamethasone.10 Therefore, the aim of this work was to develop and validate a new simple stability-indicating HPLC method for simultaneous determination of Chloramphenicol, its hydrolysis derivative (AMPD), Dexamethasone Sodium Phosphate, its hydrolysis derivative (Dexamethasone) and Tetrahydrozoline in the ophthalmic solution.
Chemicals and solutions
CAP was purchased from CHEMO, Spain; DSP and DEX were purchased from SYMBIOTICA, Malaysia; THC was purchased from S.I.M.S, Italy; AMPD was purchased from the British Pharmacopoeia Commission Laboratory; and excipients were kindly supplied by DIAMOND PHARMA, Syria. The acetonitrile used was of HPLC grade. All the other reagents used were of analytical grade. Purified water was used for making the solutions.
Chromatographic conditions
Separations were performed with an HPLC system (LA Chrom ELITE, VWR Hitachi, Germany, equipped with an L-2130 pump, L-2200 autosampler, L-2300 column oven, and L-2455 UV photodiode array detector). The output signal was monitored and processed using EZ Chrom ELITE software.
The column used was a Thermo Hypersil C18 column (250 mm, 4.6 mm i.d., 5 μm). The isocratic mobile phase comprised a mixture of acetonitrile and potassium dihydrogen phosphate buffer (pH 4.0; 0.05 M) (30:70, v/v). The mobile phase was filtered through a 0.45 μm membrane filter, degassed in an ultrasonic bath and pumped from the respective solvent reservoir at a flow rate of 1 mL/minute. The column temperature was maintained at 40°C and the detection wavelength was 230 nm. The injection volume was 20 µL. The column was equilibrated for about 60 minutes prior to injection.
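As a rough arithmetic check of the mobile-phase recipe above, the sketch below computes the component volumes for a hypothetical 1 L batch and the mass of KH2PO4 needed for the 0.05 M buffer fraction (the 1 L batch size and the molar mass of 136.09 g/mol are assumptions for illustration, not values stated in the paper):

```python
KH2PO4_MW = 136.09  # g/mol, assumed molar mass of potassium dihydrogen phosphate

total_ml = 1000.0                # hypothetical 1 L batch of mobile phase
acn_ml = 0.30 * total_ml         # 30% acetonitrile
buffer_ml = 0.70 * total_ml      # 70% phosphate buffer (pH 4.0; 0.05 M)
kh2po4_g = 0.05 * (buffer_ml / 1000.0) * KH2PO4_MW  # mol/L * L * g/mol

print(f"{acn_ml:.0f} mL ACN + {buffer_ml:.0f} mL buffer ({kh2po4_g:.2f} g KH2PO4)")
# → 300 mL ACN + 700 mL buffer (4.76 g KH2PO4)
```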
Figure 1. Chemical structures of (A) CAP (B) DSP (C) THC (D) AMPD (E) DEX
Preparation of standard solution
CAP (5000 μg/mL), DSP (1000 μg/mL), THC (250 μg/mL) and AMPD (400 μg/mL) stock solutions were prepared in mobile phase. DEX solution (250 μg/mL) was prepared in acetonitrile. Then, dilution was made with mobile phase to obtain a DEX stock solution with a concentration of 5 μg/mL. 2 mL of each of the stock solutions were transferred into a 25 mL volumetric flask and diluted with mobile phase. The concentrations obtained were 400, 80, 20, 32 and 0.4 μg/mL for CAP, DSP, THC, AMPD and DEX, respectively. The standard solution was filtered using a 0.45 μm filter.
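The dilution step above (2 mL of each stock brought to a final volume of 25 mL) can be verified arithmetically; this sketch simply applies C_work = C_stock × 2/25 and reproduces the stated working concentrations:

```python
# Stock concentrations as stated in the text, in µg/mL
stock_ug_per_ml = {"CAP": 5000, "DSP": 1000, "THC": 250, "AMPD": 400, "DEX": 5}

# 2 mL of each stock diluted to a final volume of 25 mL
working = {name: c * 2 / 25 for name, c in stock_ug_per_ml.items()}

print(working)
# → {'CAP': 400.0, 'DSP': 80.0, 'THC': 20.0, 'AMPD': 32.0, 'DEX': 0.4}
```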
Method validation
The proposed HPLC method was validated according to the ICH guideline.11

Forced degradation studies

Stock solutions
CAP (4000 μg/mL), DSP (800 μg/mL) and THC (200 μg/mL) stock solutions were prepared in mobile phase.
Degradation studies
5 mL of each stock solution was transferred into a 50 mL volumetric flask in each study. For acidic hydrolysis, 2 mL of 2 M HCl was added, and the volumetric flask was kept at 70°C for about 3 hours in a water bath. The solution was then allowed to attain ambient temperature, neutralized with 2 mL of 2 M NaOH, and the volume was made up with mobile phase. For alkaline hydrolysis, 1 mL of 0.1 M NaOH was added, and the volumetric flask was kept at 70°C for about 60 minutes in a water bath. The solution was then allowed to attain ambient temperature, neutralized with 1 mL of 0.1 M HCl, and the volume was made up with mobile phase. For oxidative degradation, 3 mL of 3% H2O2 was added, and the volumetric flask was kept at 70°C for about 3 hours in a water bath. The solution was then allowed to attain ambient temperature and the volume was made up with mobile phase. For thermal degradation, the volumetric flask was kept at 70°C for 3 hours in a water bath. The solution was then allowed to attain ambient temperature and the volume was made up with mobile phase. For photolytic degradation, the volumetric flask was exposed to both cool white fluorescent and near-ultraviolet light (maximum energy emission at 365 nm) for 30 minutes. The solution was then allowed to attain ambient temperature and the volume was made up with mobile phase. All solutions were filtered through a 0.45 μm filter and injected under the stabilized chromatographic conditions.
Analysis of APIs and degradants in eye solution | Advanced Pharmaceutical Bulletin, 2016, 6(1), 137-141
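As a side check of the neutralization steps (volumes and molarities taken from the protocol above), the acid and base used in each hydrolysis study are equimolar:

```python
# Mole balance for the neutralization steps in the degradation protocol.

def mmol(volume_mL, molarity_M):
    """Millimoles contained in a given volume at a given molarity."""
    return volume_mL * molarity_M

# Acidic hydrolysis: 2 mL of 2 M HCl, neutralized with 2 mL of 2 M NaOH.
acid = mmol(2, 2)
base = mmol(2, 2)

# Alkaline hydrolysis: 1 mL of 0.1 M NaOH, neutralized with 1 mL of 0.1 M HCl.
alk = mmol(1, 0.1)
neut = mmol(1, 0.1)
```

Equal millimole amounts on each side confirm the stress reagent is fully neutralized before dilution.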
Method validation
The results of the system suitability test from five replicate injections of the standard solution were within the acceptable limits as per the FDA guideline. 12 The chromatograms of the standard solution and the excipients solution showed the absence of interfering peaks at the retention times of the analytes in the excipients chromatogram, which demonstrates the specificity of the method. Good linearity was obtained in the studied ranges, as the correlation coefficients of the calibration curves of peak area response versus concentration were greater than 0.999. The method was found to be precise, as the RSD% of assay values at three concentrations (50%, 100% and 200%) for repeatability and intermediate precision (performed by three analysts) were less than 2%. The recovery % of the analytes at each added concentration (50%, 100% and 200%) was within the range of 98% to 102%, indicating that the method is accurate. A summary of the validation parameters of the proposed method is tabulated in Table 1. The robustness was evaluated by making small changes in some method parameters, including the mobile phase composition (± 1%), mobile phase pH (± 0.1), flow rate (± 0.1 mL/minute), column temperature (± 2°C), wavelength (± 2 nm), and injection volume (± 10 μL); the system suitability parameters were within the acceptable limits under all varied chromatographic conditions, indicating that the method is robust. However, in an extended robustness study evaluating the effect of larger variations in the chromatographic conditions, the resolution between the THC and DSP peaks was found to be sensitive to increases in the acetonitrile percentage, dropping to about 1.9 when the percentage was 32%. It is therefore recommended to control the mobile phase composition carefully to obtain the best resolution.
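A minimal sketch of the two acceptance criteria used above (RSD% for precision, recovery % for accuracy); the replicate peak areas and spike values below are hypothetical illustrations, not data from the study:

```python
# Precision (RSD%) and accuracy (recovery %) computed as in standard
# method validation. All numeric data here are hypothetical.
from statistics import mean, stdev

def rsd_percent(values):
    """Relative standard deviation, as a percentage of the mean."""
    return stdev(values) / mean(values) * 100

def recovery_percent(found, added):
    """Amount found relative to amount spiked, as a percentage."""
    return found / added * 100

peak_areas = [1002, 998, 1005, 995, 1001]       # hypothetical replicate areas
rsd = rsd_percent(peak_areas)                   # acceptance: < 2%

rec = recovery_percent(found=39.6, added=40.0)  # hypothetical 100%-level spike
                                                # acceptance: 98-102%
```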
Forced degradation studies
In the chromatograms resulting from all degradation studies, there was no interference between the tested drugs and the degradation products. The chromatogram resulting from the oxidative degradation study is shown in Figure 2 as an example.
The peak purity spectrum of each tested drug was recorded using the PDA detector. Peak purity results were greater than 0.99, which indicates that the peaks are homogeneous under all stress conditions tested, thus establishing the specificity and confirming the stability-indicating power of the developed method. 13,14 Table 2 presents the results of the forced degradation studies.
Table 1.
Summary of validation parameters
Establishment of an in vitro RNA polymerase transcription system: a new tool to study transcriptional activation in Borrelia burgdorferi
The Lyme disease spirochete Borrelia burgdorferi exhibits dramatic changes in gene expression as it transits between its tick vector and vertebrate host. A major hurdle to understanding the mechanisms underlying gene regulation in B. burgdorferi has been the lack of a functional assay to test how gene regulatory proteins and sigma factors interact with RNA polymerase to direct transcription. To gain mechanistic insight into transcriptional control in B. burgdorferi, and address sigma factor function and specificity, we developed an in vitro transcription assay using the B. burgdorferi RNA polymerase holoenzyme. We established reaction conditions for maximal RNA polymerase activity by optimizing pH, temperature, and the requirement for divalent metals. Using this assay system, we analyzed the promoter specificity of the housekeeping sigma factor RpoD to promoters encoding previously identified RpoD consensus sequences in B. burgdorferi. Collectively, this study established an in vitro transcription assay that revealed RpoD-dependent promoter selectivity by RNA polymerase and the requirement of specific metal cofactors for maximal RNA polymerase activity. The establishment of this functional assay will facilitate molecular and biochemical studies on how gene regulatory proteins and sigma factors exert control of gene expression in B. burgdorferi required for the completion of its enzootic cycle.
Purification of RNA polymerase. To test if the B. burgdorferi RNA polymerase would co-purify with the affinity-tagged β′ subunit, 3-4 L cultures of 5A4-RpoC-His10X were grown to mid-logarithmic growth phase (~5 × 10 7 spirochetes · ml −1 ), and cell lysates generated using a French pressure cell were subjected to nickel-metal affinity chromatography. The presence of RNA polymerase in the elution fractions was analyzed by SDS-PAGE and staining of proteins with Coomassie Brilliant Blue. Elution fractions containing RNA polymerase were pooled and yielded 40-50 µg of extracted protein. The components of the RNA polymerase holoenzyme (β′, β, α, and σ) in the pooled elution fractions were identified by LC-MS following separation of proteins by SDS-PAGE and in-gel digestion (Fig. 2A). LC-MS identified the most heavily Coomassie-stained gel bands as subunits β′, β, and α migrating at their expected protein sizes (154, 129, and 38.5 kDa, respectively). Among the proteins identified in the elution fraction mixture were likely RpoC and RpoB cleavage products (one of which is indicated in Fig. 2A). Some of the unquantified contaminating proteins likely interact with RNA polymerase, including the RNA polymerase secondary channel binding protein GreA, the transcription termination/antitermination protein NusA, the chaperone protein DnaK, as well as several ribosomal proteins. Gel images and results from LC-MS are available in Fig. S1 and Table S1.
Figure 1. The B. burgdorferi RNA polymerase core model was created by modeling the subunits β′ (yellow), β (green), α (pink and purple), and ω (orange) individually using Iterative Threading Assembly Refinement (I-TASSER) and subsequently aligning the modeled subunits to the E. coli RNA polymerase (PDB 3lu0) in PyMOL. The location of the affinity tag appended to the C-terminus of the β′ subunit is shown schematically as a chain of black spheres and labeled.
Notably, proteins closely associated with the RNA polymerase complex such as the ω subunit and alternative sigma factors RpoS and RpoN were not identified by LC-MS. Previous studies suggest alternative sigma factors are not expressed at high levels during logarithmic growth in culture [27][28][29] . Consistent with these previous studies, alternative sigma factors did not appear to co-purify with RNA polymerase isolated from logarithmic phase B. burgdorferi cultures. The presence and migration of the co-purified RNA polymerase subunits α and σ 70 were subsequently confirmed by western blot analysis using polyclonal antibodies raised against recombinant B. burgdorferi RpoA and RpoD (Fig. 2B). Together, these results indicate B. burgdorferi RNA polymerase holoenzyme subunits co-purify under the affinity chromatography conditions tested allowing for affinity purified RNA polymerase enzymatic activity to be examined.
Figure 2.
Purification of the RNA polymerase from B. burgdorferi and determination of the molar ratio of the core subunits β and α. (A) Purified proteins in the pooled elution fraction from nickel-affinity chromatography performed on lysates generated from B. burgdorferi 5A4-RpoC-His10X were separated by SDS-PAGE and stained with Coomassie Brilliant Blue. Labels on the right side of the gel indicate RNA polymerase subunits detected by LC-MS of excised bands. (B) Western blots were performed on nickel-affinity purified proteins using anti-Borrelia-RpoA and anti-Borrelia-RpoD antibodies to confirm the presence of the target proteins. Numbers indicate the migration of protein molecular mass markers. Detection of RpoD required loading microgram quantities of purified RNA polymerase. (C) Molar ratios were determined by quantitative western blots. Recombinant RpoB, RpoA, and RpoD were loaded in the amounts indicated above to form a standard curve. Purified RNA polymerase samples A and B were loaded with the standard curve for quantification. Molar amounts were calculated from the theoretical masses of proteins based on amino acid sequence. Images are representative of four replicate experiments.
Initial characterization of the RNA polymerase from B. burgdorferi. To accurately measure the relative amounts of the subunits from purified RNA polymerase, the concentrations of the β, α, and σ subunits were measured by quantitative western blot analyses. Linear detection ranges were determined for the western blots developed with B. burgdorferi RNA polymerase subunit-specific polyclonal antibodies anti-RpoB, anti-RpoA, and anti-RpoD by loading known quantities of the respective purified recombinant RpoB, RpoA, and RpoD proteins (Fig. 2C). Chemiluminescent signals resulting from the western blots were analyzed by densitometry to determine the linear range of detection for each protein.
Quantitative western blots performed with 100 ng of affinity purified RNA polymerase produced chemiluminescence signals within the linear range of detection for assays using anti-RpoB and anti-RpoA. There was an insufficient amount of sigma factor within 100 ng of the affinity purified RNA polymerase mixture to yield a signal within the linear range of detection with anti-RpoD, indicating fewer sigma factors were co-purified through affinity chromatography (<68.5 fmol/100 ng). Given a measured concentration of 340 fmol of α subunit and 208 fmol of β subunit per 100 ng, an estimate of the molar ratio from the affinity purified RNA polymerase was 1.63:1 (α:β subunits). Canonically, an active RNA polymerase core contains a minimum of four subunits (β′, β, two α), and a 2:1 ratio of α subunit to β subunit is expected. Consequently, we reasoned that the maximum concentration of fully constituted RNA polymerase in our affinity purified RNA polymerase sample was 170 fmol per 100 ng of protein, with α as the limiting subunit. The molar concentrations of RNA polymerase indicated for all subsequent experiments described here are expressed as the values determined by quantitative western blot analyses.
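The stoichiometry arithmetic in this paragraph can be reproduced directly from the reported blot values:

```python
# Subunit stoichiometry from the quantitative western blot values:
# per 100 ng of purified enzyme, 340 fmol alpha and 208 fmol beta.

alpha_fmol, beta_fmol = 340, 208

ratio = alpha_fmol / beta_fmol            # observed alpha:beta ratio, ~1.63:1

# A complete core needs two alpha subunits per beta subunit,
# so alpha is the limiting subunit here:
max_core_fmol = min(alpha_fmol / 2, beta_fmol)   # 170 fmol per 100 ng
```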
To determine reaction conditions that support RNA polymerase activity, we utilized a dye-incorporation method for detection of RNA synthesis that allowed for various reaction conditions to be screened 30,31 . Circular, single-stranded DNA was used as a template because it relieved the requirement for sigma factor-dependent transcriptional initiation as RNA polymerase requires a sigma factor to initiate transcription from dsDNA. A circular, single-stranded DNA template also allows for the accumulation of long RNA transcripts that are easily detectable using intercalating dyes. Circular, single-stranded DNA ranging in size from 45 to 180 nucleotides were generated as templates for the RNA polymerase. RNA polymerase activity was tested under various reaction conditions by adding potassium glutamate, magnesium chloride (MgCl 2 ), zinc chloride, calcium chloride, and/ or manganese chloride (MnCl 2 ) to a base buffer containing a final concentration of 40 mM HEPES, pH 7.5, 0.05% NP40, and 1 mM DTT. Initial in vitro transcription reaction mixtures containing 100 nM DNA template, 100 nM RNA polymerase, and 200 µM nucleotides (NTPs) were incubated for 5-6 hours at 37 °C to allow for the accumulation of RNA transcripts (Fig. S2). Our data indicated reactions containing 12 mM MgCl 2 or 2-10 mM MnCl 2 had detectable levels of RNA product accumulation based on SYBR Safe dye incorporation suggesting a metal requirement for RNA polymerase enzymatic activity. Further characterization determined RNA products were detectable within minutes when RNA polymerase was pre-incubated with 2-10 mM MnCl 2 prior to addition of the DNA template to initiate the reaction. Collectively, these experiments established initial reaction conditions for achieving enzymatic activity from affinity-purified RNA polymerase. 
Subsequent reactions to test RNA polymerase activity from dsDNA templates were carried out in 60 mM potassium glutamate, 2 mM MgCl 2 , and 5 mM MnCl 2 in addition to the base buffer described above.
RNA polymerase activity from double-stranded DNA templates. We next generated a linear dsDNA template by amplifying the flgB promoter (flgBp) from B. burgdorferi genomic DNA by PCR. The flgB promoter is RpoD-dependent 32 and the 499-bp PCR product encompasses a region from −248 to +251 surrounding the transcription initiation site. Levels of RpoD required for RNA polymerase activity initiated from the flgB promoter were determined by titration of recombinant RpoD in the reactions (Fig. 3). Reaction mixtures containing 21 nM RNA polymerase and 10 nM of the linear dsDNA template encoding the flgB promoter were supplemented with 16-500 nM of recombinant RpoD and the accumulation of RNA products was quantified by the incorporation of α-32 P-ATP. The accumulation of RNA products in the reaction increased linearly with increasing concentrations of RpoD. To maximize the rate of transcriptional initiation, subsequent in vitro transcription reactions were carried out with 500 nM RpoD in the reaction mixture at a 24:1 molar ratio of RpoD to RNA polymerase.
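A one-line check of the sigma-to-core excess chosen above (500 nM RpoD over 21 nM RNA polymerase):

```python
# Molar excess of sigma factor over core enzyme used in the reactions.

rpod_nM, rnap_nM = 500, 21
fold_excess = rpod_nM / rnap_nM   # ~23.8, reported as a 24:1 molar ratio
```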
RNA polymerase activity is pH-and temperature-dependent. Having established reaction conditions that permit transcription from a linear dsDNA template with the flgB promoter, we assayed the activity of the RNA polymerase within a range of temperature and pH (Fig. 4). Utilizing a reaction containing the RNA polymerase, recombinant RpoD, and the flgBp template, three pH values encompassing the buffering range of HEPES (pH 6.8, 7.5, and 8.2) were tested at 30 °C (Fig. 4A,B). The accumulation of the RNA products was detected by the incorporation of α-32 P-ATP. Accumulation of RNA products increased with increasing pH; RNA polymerase had the highest transcriptional activity at pH 8.2. Additional reactions using the three pH values and three temperature conditions, 22 °C, 30 °C, and 37 °C, within the range encountered by B. burgdorferi during its enzootic cycle, were performed (Fig. 4C,D). Accumulation of RNA products increased with increased temperature; RNA polymerase activity was highest at 37 °C. These results indicate RNA polymerase activity responds to temperature and pH, which is consistent with previously characterized bacterial RNA polymerases [33][34][35] . In addition, we observed reactions performed at pH 8.2 do not permit use of pre-mixed reaction buffers, likely due to the instability of DTT at high pH, which significantly reduces its half-life 36 . Therefore, subsequent in vitro transcription reactions were carried out at pH 7.5 and 37 °C unless indicated otherwise. These RNA polymerase reaction conditions yielded the highest and most reproducible activity in this study.
Scientific Reports | (2020) 10:8246 | https://doi.org/10.1038/s41598-020-65104-y
B. burgdorferi RNA polymerase requires manganese for activity. Initial screening for RNA polymerase activity indicated that magnesium or manganese is required for activity using single-stranded DNA templates (Fig. S2). Therefore, we screened the metal-dependent activity of the RNA polymerase holoenzyme with recombinant RpoD, using the dsDNA flgBp template (Fig. 5). We tested the magnesium- and manganese-dependent activity of the RNA polymerase using ultra-high-purity (>99.9%) metal salts containing sulfate anions to remove potential noise from trace contamination. We observed that a lower concentration of manganese, compared to magnesium, is required for activity. Reaction buffers containing 0-20 mM magnesium sulfate (MgSO4), 0-20 mM manganese sulfate (MnSO4), or 2 mM MgSO4 with 0-20 mM MnSO4 were prepared in parallel (Fig. 5A-C). The accumulation of RNA products over 5 min was measured following the addition of the dsDNA template. In vitro transcription reactions containing MgSO4 required a 20 mM concentration for detectable RNA polymerase activity, while reaction mixtures containing MnSO4 required tenfold less (2 mM) manganese for detectable activity and did not require magnesium ions in the reaction mixture. Reaction mixtures containing 2 mM MgSO4 along with varying levels of MnSO4 all had higher levels of activity compared to reaction mixtures with only MnSO4 (Fig. 5D). These results indicate manganese is required for B. burgdorferi RNA polymerase activity whereas magnesium plays a supplementary role.
The amino acid sequence of B. burgdorferi RNA polymerase β′ subunit was aligned to β′ subunits from other bacterial species to better understand the role of manganese using Clustal Omega and MSAProbs amino acid sequence alignment algorithms utilizing hidden Markov models 37,38 . Gram positive species in the genera Bacillus and Clostridium possess RNA polymerases that are enhanced by the presence of manganese 39,40 . The β′ subunits of these bacteria were included in the alignment to the β′ subunit in B. burgdorferi, along with prototypical magnesium-dependent RNA polymerases from other genera. However, the alignment around the active site, which binds magnesium, revealed no conserved pattern among the manganese-associating RNA polymerases (Fig. 6). A full alignment of β′ subunit amino acid sequences are available in Table S3. Therefore, which divalent cation incorporates into the active site of the B. burgdorferi RNA polymerase remains unclear.
RpoD-dependent promoter selectivity. We next tested if the RNA polymerase holoenzyme can select various promoter sites encoded within the B. burgdorferi genome. Defined transcriptional initiation sites were chosen from an RNA-seq data set that mapped processed and unprocessed RNA 5′ ends to define transcription start sites 41. dsDNA templates for 18 B. burgdorferi genes were generated by PCR encompassing the predicted promoter regions. DNA template size, relative promoter strength, and transcript size based on known transcriptional start sites are found in Table 1. In vitro transcription reactions were carried out using the 18 dsDNA templates. Reaction products were separated by gel electrophoresis to detect the relative product size and quantity (Fig. 7A). Reactions using each of the dsDNA templates generated products matching the expected transcript sizes (Table 1). A consensus sequence for the RpoD-dependent promoter site was generated using templates encoding promoters of housekeeping genes that yielded the strongest relative signals (flgBp, nagAp, rplUp, clpCp, glpFp, gapdhp, napAp, and groLp), using previously annotated transcription initiation sites (Fig. 7B). Conserved sequences in positions −1 to −40, determined by sequence logo, resemble the AT-rich promoter sequence previously determined by the MEME motif discovery algorithm 41,42. While multiple time points or a promoter competition experiment would be required to properly quantify relative signal strengths from the various templates, the strongest signals were repeatedly generated from flgBp and rplUp, and signals generated from ospCp and bbd18p were among the weakest. These observations were consistent with previous studies showing flgBp is a strong RpoD-dependent promoter and ospCp is an RpoS-dependent promoter 29,32,43.
Together, these results suggest the RpoD-directed RNA polymerase holoenzyme preferentially selects certain promoters (flgBp and rplUp) in our in vitro transcription system.
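Conceptually, the consensus-building step behind the sequence logo reduces to a per-column majority vote over aligned promoter sequences. A minimal sketch, using short hypothetical stand-in sequences rather than the actual B. burgdorferi promoters:

```python
# Consensus sequence from equal-length aligned sequences: at each column,
# take the most frequent base. Sequences below are hypothetical stand-ins.
from collections import Counter

def consensus(aligned_seqs):
    """Most frequent base at each column of equal-length sequences."""
    cols = zip(*aligned_seqs)
    return "".join(Counter(col).most_common(1)[0][0] for col in cols)

seqs = ["TATAAT",
        "TATAAT",
        "TATGAT",
        "TAAAAT"]
motif = consensus(seqs)   # -> "TATAAT"
```

A full sequence logo would additionally scale each letter by its positional information content, but the column-wise tally is the same.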
Discussion
In this study, an intact RNA polymerase complex was purified from B. burgdorferi by affinity chromatography, and an in vitro transcription assay was developed by establishing buffer and metal concentrations for reaction mixtures. The activity of RNA polymerase in a range of pH and temperature was demonstrated. We leveraged this in vitro transcription system to assay RpoD promoter selection on previously annotated promoters. These promoter sites were recognized by RpoD, as evidenced by specific length products produced by the RNA polymerase. This demonstrates the utility of these transcriptional start sites previously assayed and annotated in a 5′ end transcriptome and makes possible an assay of RpoD-dependent transcriptional initiation.
The level of RNA polymerase activity we detected from various promoters differed qualitatively in signal strength. The qualitative differences in promoter strength observed in our in vitro transcription assays do not necessarily correlate closely with the relative strength of transcriptional start sites measured in live cells. For example, ospAp, clpCp, and glpFp all produce in vitro transcription products of similar signal intensity (Fig. 7), although the relative transcriptional starts from these promoters differ by two orders of magnitude in cells (Table 1). The interactions of many transcription factors with bacterial RNA polymerase determine the rates of transcriptional initiation in vivo. Transcription initiation from a promoter site may vary not only by sigma factor binding, but also by accessibility of the site due to steric hindrance, polymerase availability, or DNA supercoiling 44. Moreover, there was no promoter competition in the in vitro transcription reactions to compare relative promoter strength, as has been engineered in other systems 45. This limits the interpretation of relative signal strength produced in the in vitro transcription reactions described in this study. For example, dbpBp, which is thought to be primarily recognized by RpoS, is also recognized by RpoD, but the relative signals obtained from our assays should not be used to infer promoter strength 29. Further investigations that include the use of alternative sigma factors and transcription factors will be required to resolve the apparent discrepancies between what we observed in the in vitro transcription assays presented here and what occurs in live cells.
Figure 7. (A) The RNA products were separated by gel electrophoresis and detected by phosphor imaging. Gels were cropped from a single image and are representative of three independent experiments. (B) A consensus sequence was generated from the DNA sequence encoded in the −40 to −1 positions from the transcription start sites using sequence logo. Seven templates that encode promoters to housekeeping genes and produce single RNA products were chosen to generate the sequence.
Major components in the nickel affinity purified RNA polymerase mixture were identified as the homologs of β′, β, and α subunits by immunoblotting and LC-MS. No other subunits were required for RNA polymerase transcriptional activity using single-stranded DNA templates. Quantitative western blotting determined that the α subunit was the most abundant subunit in the mixture but constituted less than twofold the number of β′ and β subunits, suggesting B. burgdorferi RNA polymerase enzyme consists of the typical β′βα 2 arrangement with only a portion of the RNA polymerases carrying both α subunits. We suspect some of the α subunits are capable of dissociating either in the cell or during the purification process to produce non-productive RNA polymerases in the mixture, thereby limiting maximal enzymatic activity. Similarly, a significantly smaller quantity of RpoD was co-purified with the RNA polymerase, requiring supplementation with additional sigma factor subunits for measurable RNA polymerase activity from dsDNA templates. Similarly, the ω subunit, considered part of the RNA polymerase complex, did not co-purify with the core. Previous reports suggest the interaction of ω subunit with the RNA polymerase can be transient, therefore, this subunit may have been lost during purification 46 . Further investigations will be required to characterize the role of the ω subunit on B. burgdorferi RNA polymerase activity.
Other co-purified products present in the mixture were not in apparent abundance when observing the Coomassie-stained gel and included proteases and transcription factors (Table S1). While bands excised from the stained electrophoresis gels of the purified RNA polymerase contained peptides other than subunits of RNA polymerase core, these results are not surprising. Other RNA polymerase complexes purified using strategies similar to those presented here contain detectable contaminants [47][48][49][50] . Our study differed from these previous purification efforts only because we identified the contaminating peptides by highly sensitive LC-MS. Nevertheless, this study did not rule out a role for these factors that may interact with subunits of the RNA polymerase and alter its activity. Undoubtedly, a higher purity RNA polymerase will be required for structural studies; however, the level of RNA polymerase purity shown in this study was sufficient to observe and quantify activity from specific B. burgdorferi promoters.
In this study, we determined that RNA polymerase of B. burgdorferi is likely dependent on manganese. Manganese is a common contaminant in MgCl 2 stocks and utilizing ultrapure magnesium revealed the requirement for manganese for B. burgdorferi RNA polymerase activity. Manganese was required for transcription elongation from both ssDNA and dsDNA templates in vitro. These observations suggest that manganese in B. burgdorferi RNA polymerase is not required for association with dsDNA or with a sigma factor (when RNA polymerase interacts with ssDNA). While prototypical bacterial RNA polymerases are thought to be magnesium-dependent 51 , RNA polymerases purified from the Firmicutes (Clostridium acetobutylicum, Bacillus subtilis, and Lactobacillus curvatus) are manganese-dependent 40,52,53 . A regulatory role for manganese has been demonstrated for some previously characterized RNA polymerases, such as transcription stalling, while others seem to require manganese for catalytic activity 40,[51][52][53][54][55][56] . Although the highest level of activity from the B. burgdorferi RNA polymerase was obtained from a combination of magnesium and manganese, the role of manganese in catalysis remains unexplored; note that the conserved catalytic site amino acid residues do not differ between magnesium-and manganese-utilizing bacteria. Moreover, the possibility that metals such as magnesium may co-purify with the RNA polymerase was not ruled out. Further structural characterization is required to determine the exact roles of manganese and magnesium for B. burgdorferi RNA polymerase activity.
A unique feature of B. burgdorferi physiology is its accumulation of manganese in the intracellular environment along with the apparent lack of intracellular iron 9,10 . The role of manganese in the B. burgdorferi RNA polymerase should be carefully considered due to its numerous potential roles in physiology. Manganese levels in B. burgdorferi are maintained by the BmtA transporter 57,58 , and spirochetes actively accumulate manganese in the cytosol. Levels of intracellular manganese in B. burgdorferi can change depending on the manganese concentration in the culture media 14,57 . Additionally, manganese levels were shown to affect the transcription of virulence genes and are hypothesized to be an environmental cue required for B. burgdorferi transmission 14,59 . The impact of varying concentrations of manganese on RNA polymerase activity in this study suggests regulation of optimal intracellular levels of manganese in B. burgdorferi are required to support basal transcription.
Currently, several transcription factors are proposed to respond to environmental cues and to regulate transcription initiation in B. burgdorferi. These transcription factors include Rrp2, BosR (the ferric uptake regulator homolog), BadR (the Borrelia host adaptation regulator), and DksA 13,60-66. Our newly developed in vitro transcription assay can be used to confirm the function of these factors as activators or repressors of transcription from cognate promoters, which was not previously feasible for B. burgdorferi. Additionally, we observed RpoD only weakly drove transcriptional initiation from the ospC promoter, which is known to be regulated by the alternative sigma factor RpoS 29,67,68. The data presented here demonstrate the utility of in vitro transcription assays to test the hypothesis that alternative sigma factors control transcription of dissimilar sets of genes in B. burgdorferi.
Experimental procedures
RNA polymerase structure modeling. Amino acid sequences of the RNA polymerase subunits (NP_212636, YP_008686574, NP_212522.1, and NP_212954) were individually submitted to the Iterative Threading Assembly Refinement (I-TASSER) server for modeling with no template specification 69. An alignment of the resulting I-TASSER models to the corresponding subunits of a molecular model of the E. coli RNA polymerase core deposited in the Protein Data Bank under accession number 3lu0 25 was then performed with PyMOL (The PyMOL Molecular Graphics System, Version 2.0, Schrödinger, LLC). To relax the structure and relieve any clashes arising from the piece-wise modeling and assembly approach taken to generate the B. burgdorferi RNA polymerase core model, the assembled complex was subjected to minimization and limited molecular dynamics simulation with the program NAMD 70 using the CHARMM36m force field 71. The model was solvated with TIP3P waters and NaCl added to 250 mM using VMD 72. The system was then minimized using 5,000 steps of conjugate gradient minimization with NAMD ahead of 100 ps of dynamics with a 2 fs integration time step during which the protein was fixed. Langevin dynamics temperature and Nosé-Hoover Langevin piston pressure controls were used to maintain the system at 310 K and 1 atm, respectively, and electrostatic interactions were calculated using the particle-mesh Ewald method. The system was then subjected to a second round of 5,000 steps of conjugate gradient minimization prior to performing 1 ns of unconstrained dynamics in the NPT ensemble. Protein coordinates from frames written every 2 ps over the final 500 ps were positionally averaged, and bond lengths and angles were subsequently idealized with Rosetta 73 to generate the final model.
Genetic transformation.
A C-terminal 10xHis affinity tag was introduced to the chromosomal copy of rpoC in the B31-5A4 strain background by homologous recombination. The 3′ end of the rpoC gene was amplified using primers encoding a 10xHis affinity tag optimized for B. burgdorferi codon usage (rpoC 3285 F: TGCATCTTATGTATTACCAG and rpoC 4131 R + H + AaAg: ACCGGTACTGACGTCTCACTAG TGATGATGATGGTGATGATGGTGATG ATGAACTTCAGAATCGATATTT). The PCR product was TOPO-cloned into the pCR2.1-TOPO vector (Thermo Fisher Scientific, Grand Island, NY, United States). The genomic sequence downstream of rpoC was amplified by PCR using the primers rpsL U149F + AatII: GACGTCTGGACATTTAATTCCTACTG and rpsG 385 R + AgeI: ACCGGTATGCATTTAAAAGTTCGTTT. The PCR product encoding the downstream region and the pCR2.1-TOPO-RpoC-10xHis vector were digested with the AatII and AgeI restriction enzymes and ligated together with T4 DNA ligase (Invitrogen, Carlsbad, CA, United States). The selectable marker flgBp-aaaC1-trpLt 12 was then inserted at the 3′ end of rpoC-10xHis: the pCR2.1-TOPO-RpoC-10xHis vector was linearized by AatII restriction digest, and the PCR product encoding the selectable marker was ligated into it with T4 ligase to generate the plasmid used for homologous recombination. The B. burgdorferi B31-5A4 strain was transformed with the pCR2.1-TOPO-RpoC-10xHis plasmid as described, using 30 µg · ml −1 gentamicin for selection 74 .
RNA polymerase purification. The B. burgdorferi B31-5A4-RpoC-10xHis strain was maintained under microaerobic conditions (5% CO 2 , 3% O 2 ) in BSK II medium, pH 7.6, at 34 °C. Cultures were passaged in 3 L BSK II media and allowed to reach a cell density of 3-5 × 10 7 cells · ml −1 . Cells were collected by centrifugation at 10,000 × g for 30 min and then washed once in HN buffer (10 mM HEPES, 10 mM sodium chloride, pH 8.0) to remove residual BSK II. Cell pellets were resuspended in 20 ml of ice-cold lysis buffer containing 50 mM sodium phosphate, 300 mM NaCl, 10 mM imidazole, 2 mM DTT, 5X protease inhibitor cocktail (Invitrogen, Carlsbad, CA, United States), and 200 U · ml −1 Turbonuclease (Sigma-Aldrich, St. Louis, MO, United States). Cells were lysed by pushing the cell suspension through a 1-inch diameter French Pressure Cell twice at 3,000 PSI. Cell-free lysates were generated by removing insoluble cell debris from the lysed cell suspension by centrifugation at 4 °C, 20,000 × g for 30 min and then filtering the resulting supernatant through a 0.45 µm pore size syringe filter. His-tagged proteins were separated by nickel affinity chromatography. Lysates were loaded into a UPC-900 FPLC (Amersham Biosciences, Little Chalfont, United Kingdom) and pumped through a HisTrapFF 1 mL (GE Healthcare, Chicago, IL, United States) column. The column was washed with 9 mL of 5% elution buffer (50 mM sodium phosphate, 300 mM NaCl, 250 mM imidazole, and 2 mM DTT) and 6 mL of 10% elution buffer. The resin-bound proteins were collected into elution fractions by increasing the gradient of elution buffer. The liquid volume of the RNA polymerase-containing elution fractions was reduced by filtration in Amicon Ultra-4 (Millipore, Burlington, MA, United States) 10 kDa pore size centrifugal filter columns.
RNA polymerase was subjected to buffer exchange into a storage buffer containing 40 mM HEPES, 200 mM NaCl, and 2 mM DTT with a PD10 Sephadex G-25 column (GE Healthcare, Chicago, IL, United States). The liquid volume was reduced on a centrifugal filter column to concentrate the RNA polymerase, and the final protein concentration was measured by spectrophotometry at A 280nm without extinction coefficient adjustment (1 absorbance unit at 1 cm = 1 mg · ml −1 ) and by BCA assay. The RNA polymerase was stored in 50% glycerol at −80 °C.
Sigma factor purification. Oligonucleotides encoding the codon-optimized version of B. burgdorferi rpoD were commercially synthesized and cloned into the BamHI/EcoRI site (GenScript, Piscataway, NJ) of the pMAL-C5X plasmid expression vector (New England Biosciences, Ipswich, MA, United States). The expression vector was transformed into One Shot BL21(DE3)pLysS Chemically Competent E. coli (Invitrogen, Carlsbad, CA, United States) to produce N-terminal maltose binding protein (MBP)-tagged RpoD. Overnight E. coli cultures were passaged 1:200 into LB-Lennox broth containing 2 g · L −1 glucose and 100 µg · ml −1 ampicillin and then incubated at 32 °C until the culture density reached an optical density (OD 600 nm) of 0.5. The culture was incubated for an additional 2 hours with 0.3 mM isopropyl β-D-1-thiogalactopyranoside to allow protein expression under the lac operator. MBP-tagged proteins were purified from total cell extracts by amylose resin affinity chromatography as described in the pMAL-C5X protein expression system protocols (New England Biosciences, Ipswich, MA, United States). To remove the MBP tag from RpoD, 30 mg of purified protein was incubated overnight with 200 µg Factor Xa protease in the presence of 2 mM calcium chloride. The mixture containing RpoD was separated by heparin affinity chromatography by flowing the mixture through a HiTrap Heparin HP column. Elution of the column with an increasing sodium chloride gradient released recombinant RpoD at apparent homogeneity. RpoD was prepared for storage and use as described under the RNA polymerase purification section.
Protein identification. The purified RNA polymerase mixture was separated by SDS-PAGE. Protein bands stained with Imperial Protein Stain (Thermo Fisher Scientific, Grand Island, NY, United States) were excised for analysis by LC/MS/MS at the Research Technology Branch, NIAID, NIH (Bethesda, MD, United States). Following in-gel trypsin digestion, protein samples were injected onto an Orbitrap Tribrid Mass Spectrometer equipped with a Nano-LC Nano-Electrospray source (Thermo Fisher Scientific, Grand Island, NY, United States). Data were analyzed with PEAKS v8.5 (Bioinformatics Solutions Inc., Waterloo, ON, Canada) to discover sequences matching proteins encoded in the B. burgdorferi B31 genome. Sequence matches were collated, and protein identities were ranked based on the highest number of sequence matches normalized to predicted protein size.
Quantitative western blots. Custom polyclonal antibodies were generated to full-length recombinant Borrelia RNA polymerase subunits β, α and σ 70 (RpoD) (GenScript, Piscataway, NJ, United States). Linear detectable ranges of the polyclonal antibodies anti-RpoB, anti-RpoA, and anti-RpoD were determined by linear regression analysis. Analysis was performed on densitometry signals resulting from western blots loaded with twofold dilutions of recombinant target proteins and incubated with 1:2000 dilution of primary antibodies for 16 hours. Purified RNA polymerase was loaded in amounts to produce densitometry signals within the linear range (100-150 ng of purified protein) to quantify the RNA polymerase subunits purified by affinity chromatography. For western blotting, proteins were separated by SDS-PAGE in the Mini-Tetra Gel System (Bio-Rad, Hercules, CA, United States) and transferred to PVDF membrane using the Transblot Turbo System (Bio-Rad, Hercules, CA, United States). Primary antibodies were incubated with the membrane at a 1:2000 dilution for 17 hours. The antibodies bound to antigen were labeled by incubation of the membrane with HRP-conjugated protein-A (Invitrogen, Carlsbad, CA, United States) at a 1:4000 dilution for one hour and then the membrane was subjected to five 15-min washing steps in TBST. Labeled antibodies were detected by soaking the membrane with Super Signal West Pico chemiluminescent substrate kit (Thermo Fisher Scientific, Grand Island, NY, United States) on the ChemiDoc MP imaging system (Bio-Rad, Hercules, CA, United States). Chemiluminescent signals were quantified from ChemiDoc images by densitometry in Image Lab Software (Bio-Rad, Hercules, CA, United States).
Generation of templates for in vitro transcription.
A single-stranded circular DNA template was generated for RNA polymerase activity assays (without sigma factor). A 45 bp oligonucleotide, NC-45 (CTGGAGGAGA TTTTGTGGTATCGATTCGTCTCTTAGAGGAAGCTA), was combined with a splint oligonucleotide (CTCCAGTAGCTT) to promote double-stranded complex formation and phosphodiester bond formation by T4 DNA ligase, resulting in circularization of the single-stranded NC-45 oligonucleotide 75 . Linear dsDNA templates were generated to detect sigma factor-dependent transcription from B. burgdorferi promoters. Transcriptional start sites upstream of genes of interest that were previously identified by either primer extension or RNA-seq were targeted for template generation 32,41,43,76 . Primers were designed to amplify a 500-bp region containing the previously annotated transcriptional start site in the middle of the amplicon (Table S2). DNA was amplified by PCR with Q5 High Fidelity Polymerase (New England Biosciences, Ipswich, MA, United States) from B. burgdorferi strain B31-A3 genomic DNA. PCR products were purified using the QIAquick PCR purification kit (QIAGEN, Germantown, MD, United States).
In vitro transcription. The in vitro transcription reactions were carried out under the following conditions unless otherwise stated. A 5X reaction buffer containing 300 mM potassium glutamate, 200 mM HEPES pH 7.5, 5 mM DTT, 0.25% NP40 detergent, 10 mM MgSO 4 , and 25 mM MnSO 4 in HPLC-grade water was stored for up to two weeks at −80 °C and diluted to a 1X concentration in the reaction mixture. The final reaction mixture contained 1X reaction buffer, 0.8 U RiboLock RNase inhibitor (Invitrogen, Carlsbad, CA, United States), 21 nM RNA polymerase, 500 nM RpoD, 2 µCi ATP [α-32 P] (PerkinElmer, Waltham, MA, United States), 20 µM ATP, 200 µM GTP, 200 µM CTP, and 200 µM UTP. A preliminary mixture containing reaction buffer, RiboLock RNase inhibitor, RNA polymerase, and RpoD was incubated for 10 min on ice prior to the addition of the other components. Transcription was initiated by the addition of linear dsDNA template to a concentration of 10 nM, and reactions were allowed to proceed for 5 min at 37 °C. RNA products were separated by gel electrophoresis in 10% TBE-urea gels (Invitrogen, Carlsbad, CA, United States) at 180 V for 45 min. To detect the accumulation of RNA that incorporated α-32 P-ATP, gels were placed on a Phosphor Screen (GE Healthcare, Chicago, IL, United States) overnight (16 hours), and the resulting signal was detected using a Typhoon FLA 9500 (GE Healthcare, Chicago, IL, United States). Densitometry measurements were determined with Image Lab 6.0.1 software (Bio-Rad, Hercules, CA, United States).
A Mathematical Model for Multiworkshop IPPS Problem in Batch Production
Integrated Process Planning and Scheduling (IPPS) is an important problem in production scheduling. In practice, many factors affect scheduling results. Many types of workpieces are commonly manufactured in batch production. Moreover, due to differences among process methods, all processes of a workpiece may not be performed in the same workshop or even in the same factory. To make the IPPS problem more in line with practical manufacturing, this paper addresses an IPPS problem with batches and limited vehicles (BV-IPPS). An equal batch splitting strategy is adopted. A model for the BV-IPPS problem is established, with makespan as the objective to be minimized. To solve this complex problem, a particle swarm optimization (PSO) algorithm with a multilayer encoding structure is proposed, and each module of the algorithm is designed. Finally, case studies have been conducted to validate the model and algorithm.
Introduction
Process planning and production scheduling are two indispensable subsystems in manufacturing systems. In traditional manufacturing, they are performed independently in series. A process planning subsystem determines the process route for each workpiece, and a scheduling subsystem allocates manufacturing resources according to the results from the process planning subsystem [1-3]. The independent and serial running mode of the two subsystems may lead to unrealistic process routes, uneven resource utilization, and bottlenecks in scheduling [1-3]. Integration of the two subsystems is an effective method to eliminate conflicts over resources, shorten the finishing times of workpieces, and improve machine utilization [1-3]. The integration of process planning with scheduling is therefore important for the development of manufacturing systems.

Integrated process planning and scheduling (IPPS) is a significant problem in the field of production scheduling. Existing research on IPPS problems has typically considered only process stages. Few studies have considered batches in the IPPS problem even though many workpieces are processed in batch production. The batch splitting problem is a significant issue in real production environments [4,5]. Furthermore, there are cases in which not all processes can be accomplished in the same workshop due to their different processing characteristics. Because machines may be located in different workshops, transportation between them must be considered in scheduling, as it influences the finishing times of the workpieces.
The batch splitting problem involves determining the lot numbers and lot sizes for each workpiece. In fixed batch splitting, the lot numbers of all workpieces are constant, and the lot sizes of all sublots of a workpiece are equal. Equal batch splitting is typically used in production scheduling problems [4,6]: the lot size of each sublot of a workpiece is equal, but the lot numbers of different types of workpieces need not be equal (the lot number of each workpiece can be changed). Therefore, equal batch splitting is more flexible than fixed batch splitting. Equal batch splitting is adopted in this paper because fixed batch splitting may cause an imbalance between machines and load.
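As a concrete illustration, the equal batch splitting rule can be sketched in a few lines of Python. The function name and the remainder-distribution rule are our own assumptions; the paper only fixes that the sublots of one workpiece are (essentially) equal in size while the lot number may vary per workpiece.

```python
def equal_batch_split(total_lot, lot_number):
    """Split a workpiece's total lot into `lot_number` (near-)equal sublots.

    Equal batch splitting: the sublot sizes of one workpiece are equal
    (up to rounding when the lot does not divide evenly), while different
    workpieces may use different lot numbers.
    """
    base, rest = divmod(total_lot, lot_number)
    # Distribute any remainder over the first `rest` sublots.
    return [base + 1 if i < rest else base for i in range(lot_number)]

# A workpiece with 26 pieces split into 3 sublots:
print(equal_batch_split(26, 3))  # [9, 9, 8]
```

With a fixed batch splitting strategy, `lot_number` would be the same constant for every workpiece; here it is a free per-workpiece choice, which is exactly the extra flexibility the text describes.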
In this paper, batches and transportation are considered simultaneously so that the IPPS problem better aligns with the real production environment. An IPPS problem considering equal batch splitting and limited vehicles (BV-IPPS) is proposed. A mathematical model of BV-IPPS is established to minimize the makespan. Given the complexity of the problem, a particle swarm optimization (PSO) algorithm with a multilayer encoding structure is designed. Finally, the model and algorithm are validated through a case study.

This paper initially provides a brief literature review related to the IPPS problem in Section 2. A mathematical model for the problem is proposed in Section 3. A PSO with a multilayer encoding structure is designed in Section 4. The computational results are analysed in Section 5. Finally, the conclusions are provided in Section 6.
Literature Review
The basic IPPS problem can be defined as follows [7]: "Given a set of parts that are to be processed on machines with operations including alternative manufacturing resources, select suitable manufacturing resources and sequences of operations to determine a schedule in which the precedence constraints among operations can be satisfied and the corresponding objectives can be achieved." Researchers began studying the IPPS problem in the 1980s [8]. Chryssolouris et al. [9] proposed an approach to integrate process planning and scheduling, and an increasing number of studies have focused on the IPPS problem in recent years.

Phanden et al. [1] determined that considering process planning and scheduling separately may cause many problems and limitations. They introduced three common approaches for integrating process planning and scheduling, and potential avenues for future IPPS work are discussed at the end of their paper.

(1) Integrated Approaches. Saygin and Kilic [10] proposed a framework for integrating process planning and scheduling that consisted of machine tool selection, process plan selection, scheduling, and rescheduling modules. The framework was validated using many samples. Phanden et al. [11] established a makespan-targeted IPPS model composed of a process route selection module, a scheduling module, an analysis module, and a process route modification module. A genetic algorithm (GA) was designed to solve the model, and the availability of the model and algorithm was proven by a case study. Manupati et al. [12] proposed a mobile agent-based approach for integrating process planning and scheduling. A mathematical model for a biobjective IPPS problem with consideration of transportation was established, and the approach was proven through many examples.

(2) Improvements to Algorithms. Petrović et al. [3] proposed a hybrid algorithm based on PSO and chaos theory; its advantages were proven through many benchmarks by comparison with other approaches. Xia et al. [13] proposed a dynamic IPPS problem with consideration of machine breakdown and new job arrivals; a model for the problem was established, and a hybrid GA with variable neighbourhood search was designed to solve it. Zhang and Wong [14] proposed a GA framework and integrated ant colony optimization (ACO) into the framework for solving the IPPS problem. Lian et al. [15] proposed an imperialist competitive algorithm (ICA) to solve the IPPS problem. Lian et al. [16] proposed a mathematical model for the process planning problem with the objective of total cost minimization, and an ICA was designed to solve it. Shao et al. [17] established an IPPS model with two objectives, makespan and machine utilization, and designed a modified genetic algorithm-based approach for solving the problem. Li et al. [18] proposed an evolutionary algorithm-based approach for solving the IPPS problem. Guo et al. [7] presented an advanced PSO approach to solve the IPPS problem; the advantages of PSO were proven by comparison with other intelligent algorithms. Seker et al. [19] proposed a hybrid heuristic algorithm based on a GA and an artificial neural network (ANN) to solve the IPPS problem.

(3) Other Research on the IPPS Problem. Zhang et al. [20] established an IPPS model with total energy consumption as the objective and proposed a genetic algorithm-based approach to solve it. Haddadzade et al. [2] proposed an IPPS problem that considered stochastic processing times; the Dijkstra algorithm and Monte Carlo sampling were used to create examples, and a hybrid algorithm based on simulated annealing and tabu search was designed to solve the problem. Kis [21] proposed a particular job-shop scheduling problem in a chemical production environment in which the process routes were directed acyclic graphs consisting of several alternative subgraphs; a tabu search and a GA were proposed to solve the problem. Li and McMahon [22] proposed a multiobjective IPPS problem in which the target to be optimized was obtained by combining multiple objectives through linear weighting; a simulated annealing algorithm (SAA) was proposed to solve the problem. Moon and Seo [23] considered transportation in the IPPS problem with the makespan as the objective, and an evolutionary algorithm was proposed to solve the problem.

Many existing studies on the IPPS problem involve only the process stage; few consider batching, which may cause the IPPS problem to be unrealistic. In real production, many workpieces are processed in batches, and the batch splitting problem is a significant issue in real production environments [4,5]. Furthermore, all processes may not be able to be performed in the same workshop due to different process types, so transportation is another factor that must be considered. References [12,23] considered transport in an IPPS problem but did not consider the numbers and locations of vehicles, which may cause the methodology not to fit real production environments.

Recently, several studies have focused on production scheduling problems considering batch splitting. For example, batching has been considered in the job-shop scheduling problem (JSP) [4,24], the flow-shop scheduling problem (FSP) [25,26], and the parallel machine scheduling problem [27]. Research on production scheduling problems considering batch splitting has mainly focused on JSP and FSP. As manufacturing technology has improved, normal machine tools now coexist with numerically controlled machine tools and machining centres in many workshops [28]. Because different machines have similar functions, multiple process routes are designed for the same workpiece to fully utilize the different machines [1-3]; designing multiple process routes for a workpiece is of great significance for improving the flexibility of scheduling and reducing resource conflicts [1-3]. The IPPS problem involves parallel machines and alternative process routes simultaneously. Compared to JSP and FSP, the IPPS problem has a larger solution space and is more complex, and its scheduling results are more in line with real production situations. The IPPS problem has been identified as NP-hard [19].

Based on the above literature, this paper proposes an IPPS problem considering equal batch splitting and limited vehicles with the makespan as the objective to be minimized. A model for the problem is built, and a PSO with a multilayer encoding structure is proposed. Finally, the model and algorithm are validated through a case study.
Problem Description.
Different workpieces must be processed in a factory, and the batches of different workpieces differ. Each workpiece can be processed through more than one process route, and each process can be performed on more than one machine. Several vehicles are available for transporting workpieces. Under these conditions, the makespan is regarded as the objective. An optimal solution is obtained by selecting process routes, machines, and vehicles and by sequencing the processes of each workpiece.
BV-IPPS Model

Based on the above parameters, the makespan is the target to be minimized, and the objective function minimizes the final completion time over all sublots of all workpieces. The following constraints are considered in the model:

(1) The sum of the lot sizes of the sublots of a workpiece is equal to the total lot size of the workpiece.

(2) One machine can process only one process of one sublot of a workpiece at a time.

(3) One process cannot be processed by more than one machine.

Formula (2) ensures that each batch of a workpiece is correct. The relationships between processes and machines are constrained through formulas (3)-(6) to ensure the rationality of the manufacturing process. If a process of a sublot of a workpiece and its preceding process are performed in different workshops, a vehicle for transporting the sublot is determined by formula (7). The starting time of each process is calculated by formula (8), and several parameters are derived from it. The vehicles' locations are updated by formula (9), and their earliest free times by formula (10). The completion time of each process is calculated by formula (11), and the machines' earliest free times are updated by formula (12). The earliest starting time of each sublot of a workpiece is updated by formula (13), and the final completion time of each sublot is calculated by formula (14).
PSO with a Multilayer Encoding Structure
Based on the description and modelling above, the BV-IPPS problem is NP-hard. PSO was proposed by Kennedy and Eberhart in 1995 [29] and was originally used to solve continuous optimization problems. Since then, researchers have adapted PSO to solve discrete NP-hard problems. Considering the complexity and discreteness of the BV-IPPS problem, this study proposes a PSO to solve it. Each module of the algorithm is introduced below, and a flowchart of the PSO with a seven-layer encoding structure for the BV-IPPS problem is provided at the end of this section.
Solution Representation.
A seven-layer encoding structure is proposed to construct solutions. The structure consists of seven layers: the workpiece, the sublot of the workpiece, the batch, the machine, the workshop, the process time, and the process route. An example of the encoding structure of BV-IPPS is provided in Table 1.

In Table 1, for example, the codes in the first column mean that process route R3 is adopted for sublot JS21. Because the code "21" first appears in the sublot layer, this column represents the first process of JS21. There are 26 pieces of JS21 to be processed. Machine M4 is chosen to complete this process, with a process time of 0.8 per workpiece. M4 is located in workshop W2, and so on.
Vehicles are not contained in the solution representation. In this study, vehicles are scheduled dynamically by constraint (4) in Section 4, and an order of vehicles is generated after scheduling. For example, the order [V2 V1 V1 V3] means that V2 is used for the first transport, V1 for the next two, and so on.
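The column-wise, seven-layer encoding can be illustrated with a small sketch. The values below are illustrative, echoing the Table 1 example; the dictionary layout and helper function are our own simplification (the paper stores the layers as rows of a table).

```python
# One particle in the seven-layer encoding, stored column-wise: each column
# fixes (workpiece, sublot, batch, machine, workshop, unit time, route) for
# one operation.
particle = {
    "workpiece": ["J2",   "J2"],
    "sublot":    ["JS21", "JS21"],
    "batch":     [26,     26],
    "machine":   ["M4",   "M1"],
    "workshop":  ["W2",   "W1"],
    "time":      [0.8,    1.2],
    "route":     ["R3",   "R3"],
}

def operation_index(particle, col):
    """The k-th appearance of a sublot code encodes its k-th process."""
    code = particle["sublot"][col]
    return particle["sublot"][:col + 1].count(code)

print(operation_index(particle, 1))  # 2 -> second process of JS21
```

This implicit numbering is why the operators below must preserve the relative order of a sublot's columns: swapping two columns of the same sublot silently renumbers its processes.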
4.2. Updating. Individuals are updated with the current global optimal individual or with their own historical best solutions, chosen according to a probability. A precedence-preserving order-based crossover (POX) [30] operator is used for updating; individuals remain feasible after updating with the POX operator. Assuming that workpiece J3 will be updated, an example of the POX operator for the BV-IPPS problem is shown in Figure 2. The steps of Figure 2 are as follows.

(1) The columns that contain J3 are cleared in F1, generating the temporary F1.

(2) Each column of the global optimal individual is traversed in order, and the columns that contain J3 are taken.

(3) Each column of J3 is inserted into the blank columns of the temporary F1. If the number of blank columns of the temporary F1 is less than the number of columns of J3, the remaining columns of J3 are inserted at the end of the temporary F1. The new F1 is generated.
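The three steps above amount to the following sketch of a POX update. Representing each column as a tuple whose first element is the workpiece code is our own simplification of the seven-layer table.

```python
def pox_update(individual, guide, job, key=lambda col: col[0]):
    """Precedence-preserving order-based crossover (POX) sketch.

    Columns of `individual` belonging to `job` are replaced, in order, by
    the `job` columns of the `guide` (e.g. the global best individual);
    all other columns keep their relative order, so precedence among each
    workpiece's operations stays feasible.
    """
    donors = iter([c for c in guide if key(c) == job])
    # Refill the cleared positions front to back with the guide's columns;
    # if the guide runs out first, the original column is kept.
    result = [next(donors, c) if key(c) == job else c for c in individual]
    # If the guide had more `job` columns than there were blanks, append
    # the leftovers at the end (step 3 of the description).
    result.extend(donors)
    return result
```

For instance, updating `[("J3", 1), ("J1", 1), ("J3", 2)]` with guide `[("J3", "a"), ("J1", "x"), ("J3", "b")]` for job `"J3"` yields `[("J3", "a"), ("J1", 1), ("J3", "b")]`: the J1 column is untouched and the J3 columns are taken from the guide in order.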
Mutating. The process sequences and machines are mutated separately. Swapping is used to mutate the process sequence: two locations are generated randomly, and the two columns of data at these locations are exchanged. This may lead to an infeasible solution, so the new solution is corrected after swapping. For machine mutating, a location is generated randomly, and the code of the machine at that location is changed randomly.

Assuming that the fifth and ninth columns will be swapped, swapping and correction are shown in Figure 3. The steps of Figure 3 are as follows.

(1) The fifth and ninth columns are swapped, generating the temporary F1.

(2) Since the temporary F1 contains an invalid column with JS31, the columns that contain JS31 are cleared, generating a new temporary F1.

(3) Each column of the original F1 is traversed in order, and the columns that contain JS31 are taken.

(4) Each column of JS31 is inserted into the blank columns of the temporary F1. The new F1 is generated.
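The swap-and-repair mutation can be sketched as follows. Explicit indices replace the random locations for clarity, and columns are represented as tuples whose first element is the workpiece code; both are illustrative assumptions.

```python
def swap_mutate(individual, i, j, key=lambda col: col[0]):
    """Swap columns i and j, then repair: for each disturbed workpiece,
    re-insert its columns front to back in their original
    (precedence-feasible) order at their new positions."""
    ind = list(individual)
    ind[i], ind[j] = ind[j], ind[i]
    for job in {key(ind[i]), key(ind[j])}:
        cols = iter([c for c in individual if key(c) == job])  # original order
        ind = [next(cols) if key(c) == job else c for c in ind]
    return ind
```

For example, swapping the first and last columns of `[("A", 1), ("B", 1), ("A", 2), ("B", 2)]` would place later operations before earlier ones; the repair step restores each workpiece's internal order, giving `[("B", 1), ("B", 2), ("A", 1), ("A", 2)]`: the two workpieces trade positions, but each one's precedence is intact.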
Assuming that the machine of the third column will be mutated, machines M3, M4, and M6 can be used to process the first process of JS32; M3 belongs to workshop W1, while M4 and M6 belong to W2. An example of changing a machine is shown in Figure 4. The steps of Figure 4 are as follows.

(1) A machine is selected randomly to replace M6 at the third column. In Figure 4, M6 is replaced by M4.
(2) The process time and workshop of the third column are updated. The new F1 is generated.
Parameters and Flowchart of PSO with a Seven-Layer Encoding Structure. The parameters of the PSO with a seven-layer encoding structure are defined as follows:

Population size: the number of individuals in the population.

Iteration: the number of iterations of the algorithm.

Alpha: the probability with which individuals are updated with the global optimal individual, a decimal in the (0, 1) interval.

Beta: the probability with which individuals are updated with their own historical best solution, a decimal in the (0, 1) interval. In this paper, Beta = 1 − Alpha.

Mutation probability: the probability that an individual is mutated, a decimal in the (0, 1) interval.

Process-sequence mutation probability: the probability of mutating the process sequence of an individual, a decimal in the (0, 1) interval.

Machine mutation probability: the probability of mutating the machines of an individual, a decimal in the (0, 1) interval.
Based on the above description, a flowchart of PSO with a seven-layer encoding structure for BV-IPPS is shown in Figure 5.
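The overall loop in Figure 5 can be summarized with a generic skeleton. The operator implementations (POX update, mutation) are passed in as functions, and everything beyond the parameter roles described above (Alpha/Beta choosing the guide, a mutation probability) is an illustrative assumption, not the paper's exact flowchart.

```python
import random

def pso(init_population, evaluate, update_with, mutate,
        generations=500, alpha=0.5, pm=0.1, rng=random):
    """Discrete-PSO skeleton: each individual is updated toward either the
    global best (with probability alpha) or its own historical best
    (probability beta = 1 - alpha), then mutated with probability pm."""
    pop = list(init_population)
    pbest = list(pop)                      # each individual's best so far
    gbest = min(pop, key=evaluate)         # global best so far
    for _ in range(generations):
        for k, ind in enumerate(pop):
            guide = gbest if rng.random() < alpha else pbest[k]
            ind = update_with(ind, guide)  # e.g. the POX operator
            if rng.random() < pm:
                ind = mutate(ind)          # e.g. swap-and-repair
            pop[k] = ind
            if evaluate(ind) < evaluate(pbest[k]):
                pbest[k] = ind
        gbest = min(pop + [gbest], key=evaluate)  # never worsens
    return gbest
```

Because `gbest` is re-selected from the population together with the old `gbest`, the best makespan found is monotonically non-increasing over iterations, matching the convergence curve reported in Figure 6.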
Experiment 1: Batch and Transport.
A lathe manufacturer receives an order in which several shaft parts must be processed. The minimum length of the shaft parts is 300 mm, and the maximum length is 900 mm. There are five types of shaft parts, J1-J5, with batches of 20, 10, 10, 30, and 20, respectively. There are six machines, M1-M6, and three workshops, W1-W3: M1 and M2 are located in W1, M3 and M4 in W2, and M5 and M6 in W3. There are six vehicles, V1-V6, six process methods, and thirteen process routes, R1-R13. At time zero, M1, M2, V1, and V2 are located in W1; M3, M4, V3, and V4 are located in W2; and M5, M6, and V5 are located in W3. The makespan is the objective to be minimized, and the PSO is used to solve the BV-IPPS problem. The process routes of each workpiece are provided in Tables 2-6, and the transport times between workshops are provided in Table 7. The process time and transport time are specified in minutes.
Assumptions
(1) All machines are idle at time zero; (2) Workpieces are processed according to the process sequence; (3) Waiting periods between processes are allowed; (4) Machines will never break down.
Mathematical Problems in Engineering
Table 2: Process routes of J1.
Results and Analyses.
In this study, the parameters of the PSO are set as follows: population size = 300, iteration = 500, Alpha = 0.5, Beta = 0.5, mutation probability = 0.1, process-sequence mutation probability = 0.5, and machine mutation probability = 0.5. Equal batch splitting is adopted, and the maximum lot number is four. The C# programming language is used to implement the algorithm. The runtime is 75 seconds (CPU: 2.6 GHz, dual-core; RAM: 3 GB). The best solution is shown in Table 8; its makespan is 66 min. The curve of the makespan of the optimal solution in each iteration is shown in Figure 6, and the Gantt chart of the best solution is shown in Figure 7.
The vehicle assignments of the best solution are shown in Table 8. For example, the first and second processes of JS43 are not carried out in the same workshop, so V1 is selected to transfer JS43 from W2 to W1 according to formula (7). In another example, the third and fourth processes of JS52 are both performed in W1, so no vehicle is needed between these two processes.
(1) Batch. For the above problem, if the workpieces are processed without batch splitting, the makespan is 82 min, which is longer than 66 min. The Gantt chart of the best solution without batch splitting is shown in Figure 8; machines work for prolonged periods of time. If batch splitting is adopted, the sublots of each workpiece can be processed concurrently, so the completion time of each sublot of the workpieces is shorter.
(2) Limited Vehicles. Figure 7 shows that vehicles and machines are scheduled simultaneously. Regarding the transport time of an empty vehicle: V4 is used to transfer JS12 after the first process of JS12 has been completed. At the twentieth minute, V4 is not in workshop W2, whereas M4, which processed the first process of JS12, is located in W2. Therefore, V4 must first travel to W2, which generates the empty-vehicle transport time of V4. Then, JS12 is transferred by V4 from workshop W2 to W1.
The number and locations of vehicles are considered simultaneously in the IPPS problem in this paper. The result is more realistic for determining the real scheduling solution than the best solution obtained by solving the basic IPPS problem.
Experiment 2: Equal Batch Splitting and Fixed Batch Splitting

To make the comparisons between the two batch splitting strategies more distinct, transport factors are ignored in experiment 2, which involves two samples of different scales. The assumptions are the same as those in experiment 1. As space is limited, only several important parameters of the samples are provided in Table 9. The algorithm parameters are provided in Table 10.
In Table 9, "VRN" represents the value range of the number of procedures of each process route and "VRT" represents the process time range of each process.
Each of the two samples is calculated 100 times for each batch splitting strategy. The results of the different batch splitting strategies are provided in Table 11.
(1) Sample 1. The makespan obtained by adopting equal batch splitting with the maximum lot number set to four is better than the results obtained by adopting fixed batch splitting in sample 1. The lot number and batch of each workpiece are constant values in fixed batch splitting. This may lead to an imbalance between machines and loads. The lot number of each workpiece can be changed in equal batch splitting. Thus, equal batch splitting is more flexible than fixed batch splitting, and the result obtained by adopting equal batch splitting is superior to that obtained with fixed batch splitting.
(2) Sample 2. The makespan obtained by adopting equal batch splitting with the maximum lot number set to five is not the best makespan among all batch splitting strategies in sample 2. Sample 2 is a large-scale problem. Each workpiece can be divided into five types of lot numbers. The number of batch division results obtained by adopting equal batch splitting in sample 2 is about 9536 (= 5^10/4^5) times the number obtained by adopting equal batch splitting in sample 1. Therefore, it is more difficult to achieve a smaller makespan by adopting the equal batch splitting strategy in large-scale problems.
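The search-space comparison above can be illustrated with a short calculation. This is a hedged sketch: the workpiece counts (5 in sample 1, 10 in sample 2) and the lot-number options (4 and 5) are inferred from the ratio 5^10/4^5 stated in the text, not taken directly from Table 9.

```python
# Hypothetical illustration of the batch-splitting search space: each
# workpiece independently takes one of `lot_options` lot numbers, so the
# number of batch-division results for a sample is lot_options**n.
def batch_division_results(n_workpieces: int, lot_options: int) -> int:
    """Number of distinct equal-batch-splitting choices for a sample."""
    return lot_options ** n_workpieces

sample1 = batch_division_results(5, 4)    # assumed: 5 workpieces, up to 4 lots
sample2 = batch_division_results(10, 5)   # assumed: 10 workpieces, up to 5 lots
print(sample2 // sample1)                 # prints 9536, matching 5^10 / 4^5
```

The exponential growth of this search space is why a larger sample makes it harder for the equal batch splitting strategy to find its best makespan.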
Conclusions
The main works of this paper are summarised as follows.
(1) This study considered batches and transportation in the IPPS problem simultaneously. The equal batch splitting strategy is used to split the batch of each workpiece. A BV-IPPS problem considering equal batch splitting and a limited number of vehicles is proposed. The makespan is taken as the objective to be minimized, and a mathematical model for the BV-IPPS problem is established.
(2) Due to the complexity and discreteness of the BV-IPPS problem, a PSO with a multilayer encoding structure is proposed to solve the problem. Each module of the algorithm is designed, and a flowchart of the algorithm is given.
(3) The above model and algorithm are validated experimentally. The results show that the makespan obtained by adopting an equal batch splitting strategy in the BV-IPPS problem is superior to that obtained with no splitting. Additionally, transportation is considered in the IPPS problem. The best solution obtained by scheduling vehicles and machines simultaneously is more feasible for determining the real scheduling solution than the best solution obtained by scheduling machines only. Some directions for future work follow.
(1) The result obtained using the equal batch splitting strategy in the BV-IPPS problem is better than that obtained with the fixed batch splitting strategy. However, according to the experimental results, due to the larger search space in larger-scale samples, it may be more difficult to find a better solution with the equal batch splitting strategy. In future research, we will focus on improving the algorithm. Specifically, more advanced algorithms will be developed to address large-scale BV-IPPS problems.
(2) The BV-IPPS problem addressed in this paper is a static scheduling problem. To bring the IPPS problem more in line with practical manufacturing environments, dynamic factors such as new job insertion, job cancellation, or machine breakdown will be considered in future work.
Figure 1: Processing flow of a scheduling solution.
Model. Based on the above descriptions, a BV-IPPS model is proposed. The following parameters are defined to establish the model for the problem: : Number of workpieces; : Number of machines; : Number of vehicles; : Number of workshops; : Number of process routes;
Figure 3: Example of swapping and correction.
Figure 4: Example of changing for a machine.
Figure 5: Flowchart of PSO with a seven-layer encoding structure for BV-IPPS.
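The main loop suggested by the flowchart in Figure 5 can be sketched in code. This is a minimal, assumption-laden illustration, not the authors' implementation: the POX crossover, the mutation, and the fitness (decoding) function are simplified stand-ins, the seven-layer encoding is reduced to a flat list, and the names D (iterations), Alpha (probability of learning from the global best), and Pm (mutation probability) follow the parameter list in the text.

```python
import random

def pox(child, guide):
    """Stand-in for POX crossover: copy a random slice from the guide."""
    i, j = sorted(random.sample(range(len(child)), 2))
    return child[:i] + guide[i:j] + child[j:]

def mutate(ind):
    """Stand-in mutation: swap two randomly chosen positions."""
    i, j = random.sample(range(len(ind)), 2)
    ind = ind.copy()
    ind[i], ind[j] = ind[j], ind[i]
    return ind

def pso(fitness, init_pop, D=500, Alpha=0.5, Pm=0.1):
    """D iterations; with probability Alpha an individual learns from the
    global best, otherwise from its personal best; Pm = mutation rate."""
    pop = [p.copy() for p in init_pop]
    pbest = [p.copy() for p in pop]        # best solution in the history of each individual
    gbest = min(pop, key=fitness)          # global optimal individual
    for _ in range(D):
        nxt = []
        for k, ind in enumerate(pop):
            guide = gbest if random.random() < Alpha else pbest[k]
            ind = pox(ind, guide)          # update towards the chosen guide
            if random.random() < Pm:
                ind = mutate(ind)
            if fitness(ind) < fitness(pbest[k]):
                pbest[k] = ind
            nxt.append(ind)
        pop = nxt
        gbest = min(pop + [gbest], key=fitness)
    return gbest
```

In the paper's setting, `fitness` would decode a particle into a schedule and return its makespan; here it is left generic so the loop structure stays visible.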
Figure 6: Curve of the makespan of the optimal solution in each iteration.
: Workshop in which is located; : Current workshop in which is located; : Earliest time at which can be used; : Earliest time at which can be processed; V : is chosen for . The starting time of the Vth process of , which is processed on , V ∈ [1, ]; V : is chosen for . The process time of the Vth process per one piece of , which is processed on ; V : is chosen for . The completion time of the Vth process of , which is processed on ; : Completion time of ; : Transport time of , which travels from to , ∈ [1, ], ∈ [1, ]; : Earliest time at which can be used; : Boolean value. If is chosen for , the value is 1; otherwise, it is 0; V : Boolean value. If the Vth process of is processed on , the value is 1; otherwise, it is 0; : Number of processes of ; : Lot size of ; : Lot number of ; : Lot size of ;
Table 7: Transport time between workshops.
Table 9: Two samples of Experiment 2.
Table 10: Algorithm parameters of the two samples.
Table 11: Results of the different batch splitting strategies.
Sarcomatoid carcinoma of the pancreas: A case report
BACKGROUND Sarcomatoid carcinoma of the pancreas (SCP) is a rare and aggressive epithelial tumor that has both epithelial and mesenchymal features. It is characterized by sarcomatous elements with evidence of epithelial differentiation. The term “sarcomatoid carcinoma” is often confused with “carcinosarcoma”. CASE SUMMARY We present a case of SCP with lymph node metastasis in a 59-year-old male patient. He had experienced darkening of the urine, scleral icterus, and fatigue for 4 weeks. Computed tomography and magnetic resonance imaging revealed a mass in the pancreatic head, and laboratory tests revealed elevated serum bilirubin levels. The patient underwent pancreaticoduodenectomy after biliary decompression. Histologically, spindle cells with marked nuclear atypia and brisk mitotic activity arranged in a storiform or fascicular pattern were present in the bulk of the tumor. Immunohistochemical analysis found that the spindle cells exhibited strong diffuse positivity for epithelial markers, indicative of epithelial differentiation. Accordingly, the pathologic diagnosis of the pancreatic neoplasm was SCP. CONCLUSION Although sarcomatoid carcinomas and carcinosarcomas have different pathologic features, both have epithelial origin.
INTRODUCTION
Sarcomatoid carcinoma of the pancreas (SCP) is a rare and aggressive epithelial tumor with a sarcoma-like element, which exhibits epithelial markers and epithelial ultrastructural features. This could be considered as a stable intermediate stage of the epithelial-mesenchymal transition (EMT) [1,2] . As a variant of conventional pancreatic carcinoma, it has similar clinical features but shows a worse prognosis, with an average survival after diagnosis of 5 mo [3] . According to the World Health Organization (WHO) histological classification, it is grouped as an undifferentiated (anaplastic) carcinoma of the pancreas, together with anaplastic giant cell carcinoma and carcinosarcoma [3] . However, the terms "sarcomatoid carcinoma" and "carcinosarcoma" have been used interchangeably, and their definitions vary among authors. Herein, we report a case of sarcomatoid carcinoma arising in the pancreas and discuss the similarities and differences between sarcomatoid carcinomas and carcinosarcomas.
Chief complaints
A 59-year-old male patient had experienced darkening of the urine, scleral icterus, and fatigue for 4 wk.
History of present illness
Biliary decompression by placing stents via endoscopic retrograde cholangiopancreatography had been performed at a different hospital a few days prior, because of elevated serum bilirubin levels and an ampullary tumor revealed by computed tomography (CT). The patient was admitted to our hospital for further evaluation and treatment.
Physical examination

A physical examination revealed scleral icterus and cutaneous jaundice, but no palpable abdominal mass.
Imaging examinations
Contrast-enhanced CT revealed a low-density round mass measuring about 1.5 cm × 1.1 cm in the pancreatic head, which was slightly enhanced after intravenous administration of contrast material (Figure 1A). The pancreatic duct, extrahepatic bile duct, and intrahepatic ducts upstream of the obstruction were dilated (Figure 1B). Magnetic resonance imaging revealed an irregular bulky region in the head of the pancreas and a sheet-like lesion in the main pancreatic duct, with an iso-T1 and a long T2 signal.
TREATMENT
After his bilirubin levels returned to normal range, the patient underwent a laparotomy due to a suspected pancreatic tumor. During surgery, a firm tumor was palpated in the head of the pancreas. No direct invasion of the surrounding pancreatic tissue or adjacent organs, including the duodenum, stomach, liver, and peritoneum, was found. Subsequently, a pancreaticoduodenectomy was performed and regional lymph nodes were removed.
FINAL DIAGNOSIS
The gross pathology revealed a mass (2.5 cm × 2.5 cm × 2.0 cm) located mainly in the pancreatic head with extension into the main pancreatic duct. Microscopically, spindle cells with marked nuclear atypia and brisk mitotic activity arranged in a storiform or fascicular pattern were present in the bulk of the tumor (Figure 2A). The resection margins of the bile duct, stomach, and duodenum were free of tumor cells, but 3 of the 23 lymph nodes were positive for metastasis. An immunohistochemical examination was performed to identify the sarcomatous elements. The tumor did not express cluster of differentiation (CD) 34, CD117, soluble protein-100, smooth muscle actin, human melanoma black 45, and anaplastic lymphoma kinase, but exhibited strong diffuse positivity for cytokeratin 19 (Figure 2B) and vimentin (Figure 2C). More than 50% of the malignant cells expressed Ki-67. The metastatic lymph nodes exhibited similar histological and immunohistochemical results (Figure 3). Accordingly, the pathologic diagnosis of the pancreatic neoplasm was SCP with TNM stage IIB (T2N1M0).
OUTCOME AND FOLLOW-UP
The patient was discharged from the hospital on the eleventh postoperative day and died of liver metastasis and peritoneal metastasis 6 mo later.
DISCUSSION
Sarcomatoid carcinomas and carcinosarcomas are rare aggressive malignancies that can develop at various sites of the body, including the genitourinary tract, respiratory tract, digestive tract, breast and thyroid glands, among others [1,4] . So far, 23 cases of sarcomatoid carcinomas or carcinosarcomas arising in the pancreas have been reported [5] . The use of the terms "sarcomatoid carcinoma" and "carcinosarcoma" is unclear and inconsistent both within and across organs, causing confusion for both pathologists and clinicians. For example, according to the WHO histological classification, carcinosarcoma is a hyponym of sarcomatoid carcinoma in lung tumors [6] , while they, together with anaplastic giant cell carcinoma, are grouped as undifferentiated (anaplastic) carcinomas of the pancreas [3] . Anaplastic giant cell carcinoma is a relatively common type composed of pleomorphic mononuclear cells and bizarre-appearing giant cells [3] , and the latter can be further divided into pleomorphic giant cells and osteoclast-like giant cells [7] . The definitions of pancreatic sarcomatoid carcinoma and carcinosarcoma vary among authors. Based on the histological, ultrastructural, and immunohistochemical evidence, it is undisputable that both sarcomatoid carcinoma and carcinosarcoma of the pancreas have epithelial and mesenchymal features.
Sarcomatoid carcinomas can exhibit a monophasic or biphasic appearance. The monophasic pattern, often referred to as spindle cell carcinoma, is akin to a soft tissue sarcoma without epithelioid areas. The biphasic pattern, the more frequent type, features a mixture of mesenchymal-like and epithelial-like cells with a transition zone [8] . The sarcomatous tissue of both biphasic and monophasic tumors shows evidence of epithelial differentiation, such as epithelial markers and epithelial ultrastructural features, rather than a specific line of mesenchymal differentiation [8,9] . SCP appear to be tumors at a stable intermediate stage of the EMT, as they retain many epithelial characteristics but have a mesenchymal morphology [1,2] . Transforming growth factor-β1 may induce the EMT in pancreatic cells and promote the formation of SCP [10] .
Carcinosarcomas are considered to be truly biphasic neoplasms composed of intermingled carcinomatous and sarcomatous components, which have epithelial and mesenchymal differentiation, respectively, according to their pathomorphological and immunohistochemical features [1] . These two components are typically separated without a transition zone [11] . The carcinomatous component expresses epithelial markers and exists as a variety of pathologic types; e.g., pancreatic ductal adenocarcinoma, mucinous cystadenocarcinoma, and intraductal papillary mucinous neoplasm. The sarcomatous component is sub-classified into homologous tissues (mostly malignant spindle cell proliferations) and heterologous tissues (such as osteosarcoma and rhabdomyosarcoma) [1] . The heterologous tissues are defined as those not native to the primary tumor site and show specific mesenchymal differentiation [12] . On immunohistochemical analysis, the sarcomatous components are positive for mesenchymal markers and negative for epithelial markers, indicative of mesenchymal differentiation. The classification of cases with weak or focal positivity for cytokeratin is controversial, and most researchers classify them as carcinosarcomas rather than sarcomatoid carcinomas [1,13,14] . While there is substantial evidence that both carcinosarcomas and sarcomatoid carcinomas have epithelial origin, carcinosarcomas show a more complete EMT of the sarcomatoid component compared to SCP [1] .
As variants of conventional pancreatic carcinoma, sarcomatoid carcinoma and carcinosarcoma of the pancreas share similar clinical features. They are found more frequently in the head of the pancreas, and can infiltrate adjacent tissues including the duodenum, stomach, and peripheral nerves. Regional lymph node metastasis and distant metastasis can also occur. The tumors are predominantly found in older persons, and strike both genders with a similar frequency [3,5] . The presenting signs and symptoms include abdominal pain, jaundice, nausea/vomiting, and weight loss [1] . The recommended treatments for sarcomatoid carcinoma and carcinosarcoma mirror those of conventional pancreatic carcinoma [1,15] . Almost all patients undergo surgical treatment, the standard of which is pancreaticoduodenectomy. If needed, postoperative adjuvant chemotherapy with gemcitabine can be applied [1] . Although the tumor is related with the EMT, agents that block or reverse the EMT are at a very early stage of development [15] . Irrespective of the treatment provided, patients have an extremely poor prognosis, with an average survival after diagnosis of 5 mo [3] . According to a report by Shi et al [16] , the median OS for T2N1M0 pancreatic ductal adenocarcinoma was 19 mo. In our case, by contrast, the patient survived 6 mo after surgery.
CONCLUSION
In summary, we present a case of pancreatic sarcomatoid carcinoma and describe its histologic and immunohistochemical features. Although sarcomatoid carcinomas and carcinosarcomas have different pathologic features, both can be interpreted as more malignant variants of conventional pancreatic carcinoma at different stages of the EMT. Therefore, the terms "sarcomatoid carcinoma" and "carcinosarcoma" can be used interchangeably for practical diagnostic purposes.
THE OPTIMAL TIMETABLE TO BOOST REGIONAL RAILWAY NETWORKS AND HOW THIS IS AFFECTED BY OPEN ACCESS OPERATIONS
In railway timetabling and railway network design, the question of the optimal timetable is a fundamental design decision. Whether a country benefits more from high-speed services or integrated network services strongly depends on its settlement structure. Lille's law of travelling is applied, giving an indication of which solution is more suitable for different European countries. In most railway networks an integrated network-oriented timetable, such as the integrated periodic timetable, would maximize the customers' benefit. Furthermore, it allows for long-term infrastructure design and timetable planning. For a network approach, suburban and regional railway lines are of significant importance. Three case studies of regional railway networks in the Austrian province of Styria depict how the application of a periodic timetable increased patronage. In addition, feasibility studies are presented showing the further potential of introducing an integrated periodic timetable. However, integrated periodic timetables may be considerably affected by open access services, as use cases in Austria and the Czech Republic show. While open access operation usually improves the situation on long-distance relations, regional railway services might be negatively affected. These effects and a possible procedure for solving this issue are presented.
Introduction
The main focus of liberalisation in the railway market is on the revitalisation of the sector, making it more competitive. Competition in the rail freight and rail passenger markets is expected to improve quality of services, raise cost-effectiveness and increase the modal split (European Commission, 2019a). Much effort was put into fostering international train corridors. However, the vast majority of passenger traffic volumes is operated on a national level (Fig. 1).
On a national level long-distance services dominate in terms of passenger-kilometres. Nevertheless, local and regional train services are ahead in terms of passenger numbers. In Austria, passengers travelling in local and regional train services generated 48% of all passenger kilometres in 2016. Furthermore, about 85% of all passengers travel in suburban, local and regional train services (ÖBB, 2016). This underlines the relevance of regional rail networks and adequate timetabling for suburban, local and regional services.

Integrated periodic timetables (known as ITF, short for the German "Integrierter Taktfahrplan") are perfectly suited to the needs of countries with many small cities at comparably close distances. This is often the case in medium-sized countries that lack the potential for high-speed trains. Although legislation often supports the ITF in these countries, slot allocation processes can worsen the overall network performance due to vague implementation rules. While long-distance point-to-point services benefit from this situation, regional and commuter trains could be negatively affected.

The objectives of this paper are (i) to show why an ITF is an effective timetable to increase network-wide passenger demand; (ii) to show, through case studies of three suburban networks in the province of Styria, Austria, how an ITF can be successfully implemented, leading to higher patronage; and (iii) to identify the effects of open access operation on regional networks.
Literature review
The definitions of an "optimal" timetable are manifold; this paper focuses on an optimal timetable for regional and national networks. Extensive descriptions of relevant parameters and approaches can be found in Kroon et al. (2007), Cacchiani and Toth (2012), and Stergidou et al. (2013). Eliasson (2019) analysed the optimal timetable and the value of capacity from a transport economic point of view. The optimal timetable in a liberalized railway market and the importance of system train paths were discussed by Smoliner et al. (2018).
An extremely extensive investigation of high-speed railway projects was carried out by Campos and de Rus (2009), who analysed the most relevant empirical issues of 166 high-speed railway project experiences around the world, focusing on costs in the planning and operation phases. Givoni (2006) concludes that high-speed railway services offer much higher capacity and reduce travel time and therefore lead to mode substitution; however, the high investment in the required infrastructure cannot be justified based on its economic development. Fröidh (2013) presents a model to define the optimal design speed for new high-speed lines.
The integrated periodic timetable (ITF) has a wide and detailed reception in the literature, also known as Integrated Clock-Faced Schedule (Wardman et al., 2004), Integrated Timed Transfer (Clever, 1997; Maxwell, 1999), Integrated Fixed-Interval Timetable (Liebchen, 2006), or simply by the German term Taktfahrplan (Johnson et al., 2006). While the ITF was first introduced in the Netherlands in the 1930s, the fundamentals of the ITF, such as the rule of edges and the rule of cycles, were defined by Lichtenegger (1990). The ITF is a systematic timetable concept requiring a soundly designed network service (Pfeiler et al., 2012), combining advantages such as attractive network-wide travel times, high connectivity and systematic infrastructure requirements. Weis (2005) and Uttenthaler (2010) presented how infrastructure development in Austria and Central Europe can be aligned to a target timetable. Walter and Fellendorf (2015) expanded this approach to iteratively develop demand modelling, infrastructure, and timetable construction.
A wide range of literature is available covering the effects of liberalisation on a European level (IBM, 2011; BCG, 2015; SDG, 2016). The focus tends to be placed on economic aspects rather than technical ones, showing that competition in the railway sector helps increase efficiency, modernise railway fleets, improve customer services and raise the frequency of rail connections. Whether there is an overall benefit still remains open to discussion, considering obvious negative side effects such as the increased complexity of the system. Customers on the one hand benefit from open access operation (OAO) because of lower ticket prices, higher service standards and more frequent connections. On the other hand, taxpayers end up with higher overall costs (Casullo, 2016). These additional costs result from a loss of economies of density, duplication of large upfront investment costs and higher coordination costs. Quality of rolling stock, on-board services and ticketing has been improved significantly, as shown by Casullo (2016).
An up-to-date overview of how to liberalise passenger services, analysing the markets in Germany, Great Britain and Sweden, can be found in Nash et al. (2019). An in-depth investigation of the effects of liberalisation on the passenger railway market in Poland, the Czech Republic, Slovakia and Austria was carried out by Taczanowski (2015). The challenges of open access operators interfering with a systematic timetable particularly apply to Austria and the Czech Republic. The situation of open access passenger rail competition in the Czech Republic is analysed in Zdenek et al. (2016). The effects of on-track competition in ITF regimes on long-term timetabling and infrastructure development can be found in Janoš and Baudyš (2013).
Methodology
Firstly (i), the suitability of high-speed networks is discussed by applying Lille's law of travelling (Fig. 2) based on the findings of Mairhofer (1991). The law describes the passenger potential "P" and is in this case applied to high-speed lines for a travel speed of 200 km/h. The potential is calculated by multiplying the populations of two cities and dividing the product by the squared distance between them. For the calculation only towns with at least 100,000 inhabitants were considered. Furthermore, not more than 20 of the biggest cities per country were included. The resulting quotient is multiplied by a railway distance affinity factor. The railway distance affinity represents the attractiveness of long-distance railway services for a certain travel speed. It is assumed that at a travel speed of 200 km/h, distances of 300 to 500 kilometres are very attractive for railway travelling; shorter distances are more attractive for road traffic and longer distances for air traffic. A country's potential for a high-speed network is the accumulated sum over the relevant pairs of towns of that country. This potential is calculated for selected European countries.
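The potential calculation described above can be sketched as follows. This is an assumption-laden illustration rather than the authors' exact procedure; in particular, the shape of the distance-affinity function for 200 km/h travel is a hypothetical stand-in.

```python
from itertools import combinations

def affinity(dist_km: float) -> float:
    """Hypothetical rail-affinity weighting: distances of 300-500 km are
    most attractive at 200 km/h; shorter and longer trips are discounted
    because road and air traffic compete there."""
    if 300 <= dist_km <= 500:
        return 1.0
    if dist_km < 300:
        return dist_km / 300
    return max(0.0, 1 - (dist_km - 500) / 500)

def highspeed_potential(towns, distance):
    """towns: {name: population}; distance: callable (a, b) -> rail km.
    Sums pop_a * pop_b / d^2 * affinity(d) over pairs of the up to 20
    biggest towns with at least 100,000 inhabitants."""
    total = 0.0
    biggest = sorted(towns, key=towns.get, reverse=True)[:20]
    for a, b in combinations(biggest, 2):
        if towns[a] >= 100_000 and towns[b] >= 100_000:
            d = distance(a, b)
            total += towns[a] * towns[b] / d ** 2 * affinity(d)
    return total
```

Dividing each country's total by Austria's value would reproduce the normalisation to 1.0 used in Fig. 3.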
Secondly (ii), the effects of applying an ITF in regional networks is shown by discussing three case studies in the province of Styria, in Austria. The development of timetables is described by the kind of timetable, the number of trains per hour and the number of service kilometres. The increase of patronage is used as a factor for success.
Thirdly (iii), the effects of open access operation on regional rail networks is discussed based on a literature review.
Potential of high-speed railway services
Timetables have a crucial impact on transport planning and infrastructure development. Countries like France, Italy or Spain aligned their networks primarily along the requirements of long-distance high-speed services rather than regional network-oriented services. High-speed networks need to fulfil certain requirements to justify high investments in infrastructure and rolling stock. High-speed services are only effective if there is a high demand between two or more metropolitan areas. This requires an adequate population of at least some 100,000 inhabitants and an adequate distance of at least 300 km between them. This correlation of the settlement structure and the resulting potential can be calculated by applying Lille's law of travelling as described in chapter 3. In Fig. 3, this potential is shown for several European countries compared to Austria. The potential and population of Austria are assigned the normalised value 1.0. In comparison, Italy, France and Germany feature a considerably high potential, while the potential is clearly smaller for all other countries, Poland and Romania being the only significant exceptions. Italy and France have a favourable settlement structure with few big metropolitan centres at a distance of about 500 km. Germany, with many small or medium-sized metropolitan areas, shows a smaller potential. This underlines that high-speed networks can only be justified under very specific conditions.
If the potential for a high-speed network is not given as described above, network-oriented timetables might be a suitable alternative. However, in medium-sized countries the demand is not high enough to offer direct connections between every pair of hubs. As a result, on many routes interchanges are necessary and travel speed is rather low, which is affected by the connectivity of train connections and transfers (Brezina, Knoflacher, 2014). To improve this situation a network-focused timetable like the ITF is recommendable. Therefore, the optimal timetable is the "integrated periodic timetable". Switzerland is the role model of a successful ITF application, with the highest number of train kilometres per person per year in the world over the last years (Eurostat, 2016). Fig. 4 shows how the implementation of an extended ITF called "Bahn 2000" helped to strongly increase passenger kilometres in Switzerland in comparison to Austria. Nevertheless, a remarkable increase of the Austrian traffic performance was observed with the implementation of the so-called "Plan 912", combining several measures such as introducing additional regional clocked services.
The integrated periodic timetable

The application of an ITF requires long-term timetabling and infrastructure development. Furthermore, an ITF requires (i) defined hubs for interchanges all over the network, (ii) an adequate interval and (iii) a network of edges connecting these hubs with a defined travel time.

The main characteristic of an integrated timetable is a smooth connection between long-distance trains, regional and local trains as well as bus services in defined hubs. This requires all relevant services being in the hub at the same time, allowing convenient interchanges between trains and other modes of transport (Fig. 5). To enable this, the capacity of hubs needs to be adapted to provide a sufficient number of tracks, switches and platforms.

Fig. 5. Ideal railway hub with interchange possibilities amongst long-distance, regional and bus services.

A further feature of the ITF is regular services at constant intervals ranging from two hours down to 15 minutes. Most often the interval is a full hour, where all services meet at minute 0 or 30 in the hub. To allow transfers among different services, trains need to arrive before the full or half hour and depart some minutes afterwards (Fig. 6). For an efficient train slot allocation, regional trains need to arrive before and depart after long-distance services.

To integrate hubs in a network a certain edge riding time between the hubs is required, which needs to be aligned with two boundary conditions (Lichtenegger, 1990). This edge riding time is defined as half of an integer multiple of the interval (Fig. 7). If the interval in a network is one hour, the edge riding time between the hubs of the network may be half an hour, one hour, one and a half hours, and so on. To guarantee interchanges in the network, the so-called cycle rule needs to be fulfilled: the sum of the riding times along the edges of a cycle needs to be an integer multiple of the interval.
The given riding times in the existing network often do not correlate with the calculated edge riding times. On the one hand, this results in some hubs not being served at the intended minute 0 or 30 and therefore not offering interchanges in every direction. On the other hand, edge riding times clearly show which lines need to be upgraded in order to shorten riding times. Fig. 8 illustrates how these rules are applied in the Austrian railway network. As many edges do not fit the planned schedule in the existing network, infrastructure measures will be undertaken to realize the presented schedule by 2040. Contrary to networks designed for high-speed services, in ITF networks it is more economical to travel as fast as necessary than as fast as possible. Consequently, a clear demand for operational improvements and infrastructure upgrades in the network is defined. The described boundary conditions ensure connectivity and reduce travel times within the network as transfers are coordinated and waiting times are minimized.
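The two design rules discussed above, the edge rule and the cycle rule, can be expressed as simple checks. The sketch below assumes riding times given in minutes and an interval T of 60 minutes.

```python
def edge_rule_ok(edge_time: int, T: int = 60) -> bool:
    """Edge riding time must be half of an integer multiple of T,
    i.e. a multiple of T/2: 30, 60, 90, ... minutes for T = 60."""
    return edge_time % (T // 2) == 0 and edge_time > 0

def cycle_rule_ok(cycle_edge_times, T: int = 60) -> bool:
    """Summed riding time around a cycle must be a multiple of T."""
    return sum(cycle_edge_times) % T == 0

print(edge_rule_ok(90))             # True: one and a half intervals
print(cycle_rule_ok([30, 60, 90]))  # True: 180 = 3 * 60
print(cycle_rule_ok([30, 60, 40]))  # False: 130 is not a multiple of 60
```

An edge failing the first check marks a line where riding time should be shortened (or deliberately lengthened) by infrastructure measures, exactly the kind of upgrade demand Fig. 8 visualises for the Austrian network.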
Effects of an ITF for regional public services
In regional railway networks trains usually do not meet in hubs, as shown in Fig. 9 (a) (Fig. 9: Transfer options in a regional non-integrated hub (left) compared to an integrated hub (right); source: own depiction). Infrastructure in such regional hubs can be reduced, as fewer platforms and switches are required. However, additional passing loops along the railway line may be necessary. As a consequence of non-integrated operations, the service offer in the regional network is generally less attractive.
In regional networks aligned to an ITF, train and bus services meet at the same time in the hub, as depicted in Fig. 9 (b), thus offering passenger connections in all directions. This makes it possible to optimise vehicle demand and to minimize waiting times for passengers, thereby shortening travel times. Consequently, the ITF offers an opportunity to serve regional areas economically and to increase patronage in these areas.
How the ITF benefits regional railway networks
In Austria, the stepwise implementation of the ITF according to the long-term vision of the target network 2025+ is being carried out until 2040. Among others, a regional application of the ITF can be found in the province of Styria, where a suburban railway system (S-Bahn) was introduced in December 2007. As described in theory by Liebchen (2006), timetables were changed step by step from individual or periodic timetables to symmetric periodic timetables. The final step towards a fully integrated periodic timetable will be taken in 2026, when Graz will become an adequate hub in the Austrian long-distance network due to the opening of the Koralm link. The further discussion will focus on the three sub-networks of the Graz-Köflach Railway, the Graz-Szentgotthárd line and the Spielfeld-Bad Radkersburg line (Fig. 10; source: own depiction based on data from Amt der Steiermärkischen Landesregierung, 2018).
Introduction of clocked services
As early as 1998, first attempts at introducing the so-called Steirertakt (Amt der Steiermärkischen Landesregierung, 2007) aimed for the implementation of a symmetric periodic timetable. Before 2007, different kinds of timetables were applied on the three networks. In Austria, individual timetables were replaced on many lines in the early 1990s in the course of NAT91, an initiative to implement an ITF in Austria. Despite positive results, this initiative was stopped only a few years later and the concept of an integrated timetable was abandoned due to the high costs involved. However, it was reintroduced on some lines around 2005, when the idea of the ITF became popular again. On the Graz-Szentgotthárd line an individual timetable was still in service until 2010. A somewhat periodic timetable had been applied on the Spielfeld-Bad Radkersburg line as early as 1990, while a symmetric periodic timetable was applied on the Graz-Köflach Railway around 2005. However, the introduction of the S-Bahn system was accompanied by the implementation of a Styria-wide periodic timetable. This first phase of the S-Bahn in Styria has been a big success, as patronage in the S-Bahn network increased by 51% between October 2007 and October 2017 (Amt der Steiermärkischen Landesregierung, 2018). The values for the three mentioned sub-networks are depicted in Fig. 11. This strong increase can be explained by a combination of several factors. First of all, the implementation was embedded in a sound marketing strategy, including the rebranding of vehicles and extensive advertising. Furthermore, new vehicles were introduced, among others on the Graz-Köflach Railway and partially on the Graz-Szentgotthárd line. Moreover, stations were upgraded and a high-speed line south of Graz was opened for regional services. Finally, additional services were offered and timetables were changed to a symmetric periodic timetable.
This included infrastructure upgrades and the construction of passing loops on the Graz-Szentgotthárd line until 2011. The patronage increase has been especially remarkable on lines where a symmetric periodic timetable was newly introduced, such as the Graz-Szentgotthárd line (+85%). The improvements in the service offer are shown in Tab. 1.
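The progression from individual to periodic to symmetric periodic timetables described above can be illustrated with a small check. The departure minutes below are invented, and minute 0 is assumed as the symmetry axis, as is common in European ITF networks.

```python
# Sketch of the timetable types mentioned above, using invented
# departure minutes; minute 0 is assumed as the symmetry axis.

def is_symmetric(dep_dir1, dep_dir2, axis=0, interval=60):
    """A periodic timetable is symmetric if departures in one
    direction mirror those in the opposite direction around the
    symmetry axis, which makes transfer times equal both ways."""
    mirrored = sorted((axis - m) % interval for m in dep_dir1)
    return mirrored == sorted(m % interval for m in dep_dir2)

# Periodic and symmetric: minutes 10/40 eastbound mirror 20/50 westbound.
assert is_symmetric([10, 40], [20, 50])
# Periodic but not symmetric: transfer times differ by direction.
assert not is_symmetric([10, 40], [15, 45])
```

An individual timetable, with no repeating pattern at all, fails even the periodicity assumption behind this check; a symmetric periodic timetable is what lets every hub offer the same connections in both directions.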
Further potential of applying an ITF
About 10 years after the introduction of the S-Bahn, feasibility studies were carried out on the three aforementioned sub-networks (Fig. 10). These studies show the patronage potential of an additional number of services and the application of an ITF. A methodology was developed to handle the joint timetable and infrastructure strategy in order to increase demand.
Graz-Köflach Railway
The Graz-Köflach Railway (GKB) operates three lines in the south-west of Graz, covering both suburban and regional transport. South of Graz a recently built branch of the high-speed line "Koralmbahn" has been used since 2010. The goal of this feasibility study was to develop a comprehensive infrastructure strategy ranging from 2015 to 2045.
In the course of the project, several timetable configurations were investigated in order to increase patronage. The do-nothing configuration was dropped immediately, as it showed a 2% decrease in patronage in the sub-network due to a shrinking population in this area. In the proximity of Graz, dense intervals with a 15-minute headway in peak hours are recommended. Furthermore, it was shown that an over-proportional patronage increase could only be achieved with significantly decreased riding times. The decision was thus made to pursue a strategy with denser intervals in the suburban area and to significantly reduce the number of stops further outside this area to decrease riding time. Finally, a joint timetable and infrastructure strategy could be developed, with a patronage increase on the main branches of 75 to 123% and a modal split for public transport of 18 to 33% (Tab. 2).
One significant result of the study was the decision on whether or not to electrify the network. It had been clearly demonstrated that electrically powered vehicles outperformed diesel vehicles in terms of acceleration to such an extent that compensating infrastructure measures would be required to achieve the same timetable with diesel as with electric traction.

Tab. 2. Key figures for different scenarios on the Graz-Köflach Railway. Source: Own elaboration based on data from Veit et al. (2014).
Spielfeld-Bad Radkersburg line
The second sub-network investigated was significantly more complex in terms of design cases, while the line itself is considerably simpler. While integrated into the S-Bahn network of Graz, the Spielfeld-Bad Radkersburg line primarily serves local and regional transport. Due to its nature as a minor branch line, the closure of the line and its replacement by bus also needed to be considered as a design case. The initial analysis yielded a result similar to that of the Graz-Köflach Railway: the do-nothing option, i.e. keeping up the current service, would result in a 12% decrease in patronage while still requiring a double-digit million euro sum in infrastructure costs. The modal split of public transport would drop to 7%. The remaining decisions to be made differed considerably, however, largely due to the different nature of this railway compared to the Graz-Köflach Railway. The evaluation of the stopping pattern and riding time showed that many stops are required to increase patronage, although a reduction of riding time is preferable too. If the riding time fits the edge riding time, patronage can be increased significantly and the investment costs for a passing loop can be saved. Another decision had to be made between rail and bus services. The bus service performed better than the do-nothing case (+24% patronage and an 8% modal split for public transport) while requiring only 25% of the investment costs compared to the do-nothing case (Tab. 3). The best-performing design case for railway service showed a 63% patronage increase and a 10% modal split for public transport. This solution, however, also requires 30% more investment costs and 21% higher running costs for maintenance and service provision. With roughly 1,200 passengers a day, this design case is the only one to achieve the so-called system adequacy, a term coined by ÖBB Infrastructure to justify railway infrastructure.

Graz-Szentgotthárd line

The Graz-Szentgotthárd line shares most properties with the Graz-Köflach Railway and additionally serves as a freight traffic line. Compared with the Graz-Köflach Railway, the upgrade possibilities between Graz and Gleisdorf, where most suburban demand could be allocated, were minor. Furthermore, the dense timetable models allowed for no riding-time decreases, and the initial target service offer even increased riding times for longer-distance passengers. A much more detailed investigation of timetable models thus had to be carried out, with the need to compare operationally feasible timetables rather than model timetables. Additionally, the target service offer of four trains per hour between Graz and Gleisdorf was decreased to three trains per hour in order to provide faster paths for limited-stop services. Furthermore, the integrated timetable service originating at Graz station needed to be redesigned completely in order to allow for adequate performance, especially east of Gleisdorf. Only then could a performance that enables an increase of patronage east of Gleisdorf be achieved.
Remarkably, the design case of only three trains per hour also performed better on the suburban part of the line, since the riding-time advantages of the faster trains could also be used for the bigger stations west of Gleisdorf. In total, a 48% patronage increase (in person-kilometres) and a 27% modal split for public transport (Tab. 4) were possible, as presented by Veit et al. (2018).
Surprisingly, the electrification of the line did not allow for significant changes in patronage compared to the diesel service; this is mainly due to the aforementioned capacity constraints between Graz and Gleisdorf. East of Gleisdorf, the higher possible top speeds of electrically powered vehicles could be utilized.
Tab. 4. Key figures for different scenarios on the Graz-Szentgotthárd line.
Effects of open access operation on regional railway services
Competition in the railway passenger market can be divided into competition for the tracks and competition on the tracks. In the first case, Public Service Obligations (PSO) or franchises are tendered for railway services, while the latter is also known as open access operation (OAO). Railway markets in the European Union are quite diverse, featuring both forms mixed, one of the two, or neither. While OAO is primarily profitable on long-distance services, there is a major impact on regional rail services, as can be observed in busy open-access markets like the Czech Republic or Austria. The busiest line in terms of OAO is found between Praha and Ostrava in the Czech Republic. Due to low track access charges and the high priority of long-distance trains in the slot allocation process, it is estimated that the market share of private operators exceeds 50% (Nash et al., 2015). Janoš and Baudyš (2013) observed that liberalisation in the Czech Republic resulted in fierce competition in the rush hour while services in off-peak hours were reduced. By contrast, on the Western Line in Austria the number of long-distance services has almost doubled since the incumbent was challenged by a competitor. Consequently, train slot allocation procedures, and therefore the quality of implementing integrated periodic timetables, face considerable challenges. As competition leads to a growing number of train services, the struggle for attractive train slots results in major challenges for infrastructure managers (IM). It becomes increasingly difficult to combine competition with efficient timetable planning within constrained networks. The challenge, as stated by Steer Davies Gleave (2016), is to make infrastructure capacity available to open access operators while designing a timetable that is reliable, minimizes journey times and offers attractive connections.
In countries that are aligned to integrated periodic timetables, priority rules emphasize integrated and clocked services. However, Janoš and Baudyš (2013) show that prioritization rules for clocked services are useless if they are overruled by long-distance or international train services. In case commercial trains are preferred in train slot allocation processes this may lead to less attractive train paths or overtakings of PSO-services. The benefit for PSO-customers is reduced by longer travel and waiting times. Furthermore, vehicle turn-arounds are negatively affected leading to a higher vehicle demand.
The example of the hub Amstetten on Austria's Western Line underpins this challenge (Fig. 12). The left hub clock shows the original concept, with long-distance trains arriving and departing at the full hour and regional trains arriving before and departing after the long-distance services. When two competing railway undertakings (RU) requested long-distance connections in a half-hourly interval, each applying for a well-connected train slot at the full hour, a dispute resolution process had to be carried out. It resulted in both RUs now serving the hub one after another, as shown in the right hub clock. After the number of IC trains at the full hour was doubled, the duration between the arrival of the first IC train and the departure of the last one increased from two to twelve minutes. This forces regional trains to arrive earlier and depart later, an effect called hub spreading.
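The arithmetic behind hub spreading can be sketched as a toy calculation. The minute values and the 3-minute transfer time below are hypothetical; only the growth of the long-distance window from 2 to 12 minutes is taken from the Amstetten example.

```python
# Toy calculation of hub spreading (hypothetical minute values;
# only the 2-minute vs. 12-minute window follows the text).

def regional_dwell(ld_first_arrival, ld_last_departure, transfer_time=3):
    """Minutes a connecting regional train must spend in the hub so
    passengers can transfer to and from every long-distance train:
    it must arrive `transfer_time` minutes before the first arrival
    and leave `transfer_time` minutes after the last departure."""
    return (ld_last_departure - ld_first_arrival) + 2 * transfer_time

compact = regional_dwell(59, 61)   # 2-minute window  -> 8 min dwell
spread = regional_dwell(54, 66)    # 12-minute window -> 18 min dwell
print(spread - compact)            # prints 10: minutes lost per hub call
```

Those extra minutes of dwell come directly out of the edge riding time available to the regional train, which is why hub spreading translates into cancelled stops or shortened line services.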
A hub spreading implies several negative effects for passengers. Firstly, edge-riding times for regional trains are reduced. Secondly, this leads to cancelled train stops or shorter line services. Thirdly, not all trains are connected to each other and this reduces the network-wide benefit for customers. Fourthly, some transfers are technically possible but given the non-integrated ticketing between RUs this leads to additional losses and a less comfortable ticket acquisition process for passengers. While competition leads to frequent connections on the long-distance level, regional train services are affected negatively. Hence, customers cannot benefit from the advantages of an ITF and furthermore cost-intensive infrastructure investments are to be questioned. Since both timetable and infrastructure must be developed jointly to justify the costly and long-lasting infrastructure measures, it is not financially sustainable to design infrastructure upon demand for short-term open access services only. By applying a schedule of system train paths, the advantages of on-track competition and ITF could be combined (Smoliner, 2019).
Advantages and disadvantages of the ITF
An ITF-aligned development of a network is very cost-intensive, as target edge riding times and hub requirements are clearly defined. This means line upgrades or extensions cannot be implemented simply where topography or the density of settlements makes them cheap. On the contrary, realignments and upgrades are necessary in sections where the travel time of the existing network is too long. Even if investment costs are high, they are based on systematic and objective long-term planning and allow for the highest possible customer benefit. However, the successful implementation of an ITF requires network-wide timetable design, political willingness and cooperation amongst different stakeholders.
The advantages of an ITF for customers, railway undertakings and infrastructure managers are manifold. Customers benefit from a timetable that is easy to remember and offers comfortable and time-saving interchanges and therefore short travel times. For railway undertakings, a symmetric periodic timetable facilitates systematic and effective vehicle circulation and staff dispatching. Finally, infrastructure managers benefit from a systematic infrastructure demand, a predictable timetable and effective capacity allocation. Furthermore, the ITF is the optimal tool for long-term timetable and infrastructure development.
Approach to combine open access operation and clocked regional services
While customers benefit from OAO on long-distance services, regional services can be negatively affected. To prevent this, a sound procedure is needed to fix clocked long-distance services. In order to fully utilize a complex, cost-intensive infrastructure, train paths need to be systematically planned beforehand. Publicly tendered PSO contracts for system train paths are recommended to provide the optimal timetable, as shown by Smoliner et al. (2018). In consideration of the 4th railway package and the requirements of an ITF, PSO contracts awarded by an independent railway agency best fulfil the criteria of joint long-term development of timetable and infrastructure. These contracts guarantee a coherent network-wide application of the ITF and enable the highest network-wide connectivity as well as customer benefits. The prerequisite, however, is that PSO services are in any case prioritised over self-sustaining services in train path allocation. As self-sustaining trains are, technically speaking, state-subsidized through track access charges (Walter, 2018), a tendering of integrated long-distance services allows for an efficient use of taxpayers' money. Bundles tendered in manageable sizes will allow new railway undertakings to enter the market. Niche segments such as accelerated point-to-point services will remain open for self-sustaining open access trains as long as they do not interfere with clocked services. This will allow retaining connections for regional services without the above-mentioned negative effects and will guarantee full utilization of the ITF and infrastructure capacity.
Conclusion
The decision for an optimal timetable strongly depends on the structure and demography of a country. Using Lill's law of travelling, the potential of high-speed services was investigated, showing that only a few countries in Europe have considerable potential for a high-speed network, given the size of their metropolitan areas and the distances between them. Furthermore, most railway passengers travel in regional and local railway networks. Integrated periodic timetables (ITF) offer optimal connections in a network and short transfer times. Therefore, the integrated periodic timetable allows for a high network-wide benefit for customers and represents the optimal solution for medium-sized countries. However, joint planning of long-term timetable and infrastructure development is essential. Three case studies show how periodic timetables helped to raise patronage in regional railway networks. A further patronage increase is predicted if an ITF aligned with additional services is introduced.
To guarantee full utilization of the ITF, hub spreading, which negatively affects regional services, must be avoided. Its effects include reduced edge riding times, the cancellation of stops and additional vehicle demand due to suboptimal vehicle circulation. Therefore, a process is needed to safeguard the ITF in a liberalized railway system. A PSO tendering of bundles of system train paths by an independent railway agency is suggested to guarantee long-term planning stability and to maximise the network-wide customer benefit.
Isolation of Chlamydia abortus in dairy goat herds and its relation to abortion in Guanajuato, Mexico
Although Chlamydia abortus is classified as an exotic agent in Mexico, there is increasing evidence of its presence. The objective of this study was to isolate C. abortus in dairy goat herds with abortion problems in the state of Guanajuato, Mexico, and to develop appropriate diagnostic methods for its detection. Serological samples and vaginal swabs were taken from 6 dairy goat herds. The ELISA revealed a seropositivity of 9.60% for C. abortus. The PCR test on the vaginal mucus samples identified 30 of 126 animals (23.8%) as positive. Chlamydia spp. were isolated in 34 of the 126 animals tested (26.98%). The 3 diagnostic methods tested were valuable and complementary in zones where Chlamydia is suspected to cause abortions. We demonstrated that the bacteria are present in dairy goat herds of Mexico; thus, the Veterinary Sanitary Authorities should consider this disease endemic and establish sanitary procedures to control its spread and to prevent human transmission.
Introduction
Enzootic abortion in small ruminants (EASR) is an infectious disease caused by Chlamydia abortus, previously named C. psittaci type I or Chlamydophila abortus (Andersen, 1991; Everett et al., 1999), that affects sheep, goats, and cattle, provoking abortions during the final trimester of gestation or the birth of weak offspring that generally die during the first days of life (Kuo et al., 2011; Longbottom and Coulter, 2003; Rodolakis, 2001; Chisu et al., 2013).
Transmission among animals occurs primarily after parturition or abortion, due to the large quantity of bacteria that are spread through vaginal discharges, the placenta, and the skin of aborted fetuses (Longbottom and Coulter, 2003; Rodolakis, 2001; Gutierrez et al., 2011). In Mexico, this disease is considered exotic and is therefore included in Group 1 of the agreement that lists and classifies the diseases and plagues of animals as exotic and endemic (i.e., requiring obligatory notification) (SAGARPA, 2007). Despite this requirement, several reports of this disease have been made among small ruminants in Mexico. In 1996, C. psittaci was isolated from flocks of sheep in 5 states (Escalante-Ochoa et al., 1996), whereas, in 1997, the first reported presence of C. psittaci appeared in goat herds (Escalante-Ochoa et al., 1997). Later, additional studies of this disease in goats were conducted. In 2005, the presence of Chlamydia spp. was confirmed in the state of Michoacán, where the bacteria were successfully isolated from feces, aborted fetuses, stillbirths, and kids dead within 5 days of age (Lazcano, 2006; Lazcano et al., 2005). In 2008, a serological study was conducted in dairy goat herds in 6 states of the country and antibodies against the bacteria were found (Mora et al., 2008). In 2001, Chlamydia spp. were connected with zoonotic infections in Mexico, which were related to Chlamydia spp.-infected goats and sheep (Escalante et al., 2001; Barbosa Mireles et al., 2013).
Because EASR is considered an exotic disease in Mexico, it is difficult to obtain the reagents, antibodies, and diagnostic techniques that allow a faster identification in cases in which C. abortus is suspected. Moreover, the process used to isolate the agent is complex, as it requires both specialized training of the technical personnel and a live biological medium for its culture, such as chicken embryos or cell culture. Finally, the procedure for achieving isolation and identification may require several weeks (Biberstein and Hirsh, 2004; Longbottom and Coulter, 2003).
Thus, the objective of this study was to determine the presence of C. abortus through isolation, PCR and ELISA in dairy goats with problems of abortion that suggest Chlamydiosis in the state of Guanajuato, Mexico, and to establish appropriate diagnostic methods for its detection.
Materials and methods

Animals
Six dairy goat herds from Guanajuato, Mexico, with a history of abortion were selected. The production systems on these ranches were based on intensive stabling for cheese production and rebreeding.
To obtain the necessary information, two questionnaires were given to the owners of each herd. The first questionnaire focused on several aspects of farm management, such as genetics, nutrition, overall animal health, reproduction and facilities. The second questionnaire sought information on the individual goats sampled: age, parity, clinical history and production (Mora, 2011). The information reported in both questionnaires led us to establish the differential diagnosis for the abortion problem within each herd. The number of animals tested from each flock is shown in Table 1.
Clinical samples
A total of 126 vaginal swab samples were analyzed using cell culture and PCR to isolate and identify C. abortus; 125 serum samples from the same animals were used to detect antibodies against this microorganism by ELISA (Table 1). The goats selected for sampling fulfilled one of the following conditions: one or more previous births; at most 2 weeks prior to giving birth; a maximum of 4 weeks after a recent birth; or already had an abortion.
The vaginal samples were taken using sterile swabs and were transported in tubes with 2 ml of sucrose-phosphate/glutamate medium (SPG) (217 mM sucrose, 4 mM KH2PO4, 7 mM K2HPO4, and 1% L-glutamine), supplemented with 10% fetal bovine serum (FBS) and antibiotics (100 µg/ml streptomycin, 50 µg/ml gentamicin) (Sachse et al., 2009). The swabs were first pressed against the walls of the tubes in which they were held, using a sterile clamp, and were then discarded. Afterwards, the tubes were centrifuged at 3 000×g for 40 min at 4°C; then, 500 µl of the supernatant was extracted and transferred to a sterile microtube that was labeled and frozen at -70°C to perform the isolation procedure. The remaining contents of the tube were transferred to another sterile microtube and frozen at -70°C for later DNA extraction.
Serological tests
The commercial kit "IDEXX Chlamydiosis Verification Test" (formerly "Pourquier® ELISA Chlamydiosis Serum Verification", IDEXX Laboratories Inc., Westbrook, Maine, US) was used to detect a recombinant antigen, a polymorphic outer membrane protein of 80-90 kDa that is specific for C. abortus and shows no cross-reaction with Chlamydia pecorum.
The Rose Bengal test (3%) was conducted (Aba test card at 3%, PRONABIVE, DF, Mexico) to confirm the absence of Brucellosis in the herds.
Isolation and identification of Chlamydia spp.
Cellular monolayers of L929 fibroblasts were cultivated in Eagle's minimal essential medium (MEM, GIBCO, Life Technologies, Carlsbad, CA, USA), supplemented with 10% FBS, 1% non-essential amino acids, 1% L-glutamine, and antibiotics (50 µg/ml gentamicin and 100 µg/ml streptomycin-penicillin) (MEM-C), all from Life Technologies, in humid conditions at 37°C with 5% CO2 (Escalante-Ochoa et al., 1996). For infection, the culture was conducted in 24-well polystyrene plates (NUNC™ Thermo Scientific, Waltham, MA, USA) with 12-mm diameter sterile coverslips for the immunofluorescence test, at an initial concentration of 5 × 10^4 cells/well and an incubation period of 24 h until a confluence of 60-70% was obtained. In parallel, dishes without coverslips were prepared for use in case blind passages proved necessary.
Infection process
To achieve cellular infection, the MEM-C of each well was removed and 100 µl of the supernatant from the clinical samples was immediately added to each well. Two wells per dish were used for each clinical sample for both diagnosis and the subsequent blind passages. Each dish had a positive control well infected with a strain of C. abortus A22 and an uninfected negative control. The microplates were placed in an orbital incubator at 50 rpm for 1 h at 37°C in humid conditions. Afterward, 900 µl of MEM-C was added to each well, and the dishes were then incubated in humid conditions at 37°C with 5% CO2 for 72 h. At the end of the incubation procedure, the plates without coverslips were stored at -70°C; for the plates with coverslips, the MEM-C was removed and the monolayers were washed 3 times with phosphate-buffered saline (PBS), for 5 min each time. Next, the PBS was removed and the cell monolayers were fixed with 1 ml of pure methanol at -20°C for 10 min. The methanol was then eliminated, and the plates were left to dry at room temperature.
Direct immunofluorescence technique
Identification of the intracytoplasmatic inclusions produced by Chlamydia spp. was performed by direct immunofluorescence (IMAGEN™ Chlamydia, DakoCytomation Ltd, Cambs, UK), which detects the lipopolysaccharides of the bacteria using specific fluorescein-labeled monoclonal antibodies. Next, 25 µl of fluorescein-5-isothiocyanate (FITC) diluted 1/10 with PBS and 2 µl of Evans Blue (0.5%) were placed on each coverslip in the microplates, followed by incubation in a humid atmosphere for 30 min at 37°C. Three washes were then performed using PBS for 5 min each, after which the coverslips were removed from each well and allowed to dry at room temperature (Vanrompay et al., 1994).
The coverslips were mounted on slides using Vectashield® medium (Vector Laboratories, Inc., Burlingame, CA, USA) and fixed with transparent nail polish. The preparations were analyzed using a Leica DM1000 fluorescence microscope (magnification 40X). If no cytoplasmatic inclusions were visualized on the first reading, blind passages were performed using the plates without coverslips previously stored at -70°C. These plates were frozen and thawed 5 times to lyse the cells and release the bacteria. The content was then transferred to sterile microtubes and used to infect new cell monolayers. The samples were considered negative if, after two blind passages, no intracytoplasmatic inclusions were detected.
Identification of C. abortus by PCR
DNA was extracted from L929 mouse fibroblast cells infected with C. abortus A22. In this procedure, 200 µl of an infected cell culture was collected and inactivated in a hot water bath at 80°C for 20 min.
The DNA extraction from clinical samples was conducted via the phenol-chloroform method using 500 µl of the transport medium containing the vaginal smears, following the protocol described by Sambrook and Russell (2001).
Following the same procedure, DNA was extracted from uninfected L929 mouse fibroblast cells and utilized as a negative control in PCR.
First isolation of Chlamydia abortus in aborted goats of Mexico
Vol. 2 No. 1 January-March 2015
The PCR was performed in a Thermo Hybaid PCR Express thermocycler. All of the reactions were conducted in a final volume of 50 µl that contained 1X PCR buffer, 3 mM MgCl2, 400 µM dNTP, 25 pmol of each primer, 1 U of Taq polymerase (Invitrogen, Life Technologies) and 25 ng of DNA, either from the cell culture infected with C. abortus strain A22 (used as a control) or from the L929 mouse fibroblast cells.
In addition, DNA from different bacteria that might be involved in infections of goats was used, including Brucella abortus, Leptospira Hardjo, Histophilus somni, Salmonella Typhi, Campylobacter jejuni, Campylobacter fetus, and Mycoplasma bovis. The program to amplify C. abortus using 16S rRNA gene primers was as follows: after an initial denaturation period of 5 min at 95°C, the reactions were exposed to 40 cycles of 1 min at 95°C, 30 sec at 63°C and 1 min at 72°C, with a final extension step at 72°C for 10 min.
The products of the amplification procedure were observed in a 1% agarose gel supplemented with ethidium bromide (0.5 µg/ml) (Sambrook and Russell, 2001) and analyzed in a photodocumenter (Kodak, Gel Logic200, Rochester, New York, USA) supported by Kodak Molecular Imaging Software v. 4.0.2.
Statistical analysis
The PCR used in the study was evaluated using a concordance table that considers 3 parameters: sensitivity, specificity, and the predictive value of the test (positive and negative). The validity of the PCR was also determined (Armijo, 1994; Greenberg et al., 2002). In order to establish the degree of concordance between diagnostic tests, the Kappa coefficient was calculated (Cohen, 1960).
Serological test and bacterial isolation
The 125 sera from the goats were negative for Brucellosis. However, when tested for C. abortus by ELISA, 12 of the 125 goats proved positive (9.60%). The processes of isolating and identifying Chlamydia spp. in cell culture and by direct cell immunofluorescence revealed that 34 of the 125 animals were positive (26.98%, Table 2).
PCR
The PCR did not amplify the DNA of L. Hardjo, H. somni, S. Typhi and M. bovis; however, amplification of the DNA of B. abortus, C. jejuni, and C. fetus was observed.Despite this finding, the identification of C. abortus was not impeded because the amplicons were approximately 900 bp for B. abortus, 400 bp for C. jejuni, and 800 bp for C. fetus.
Using DNA from the vaginal mucus, PCR testing resulted in 30 positive animals (23.8%) (Table 2), with a sensitivity of 70.58% and a specificity of 93.47% (using isolation in cell culture as the reference test). The positive predictive value of PCR was 80%, and the negative predictive value was 89.58%. Thus, this test achieved a validity of 87.30%. As Table 2 shows, only 7 goats tested positive by all three methods: ELISA, bacterial isolation, and PCR.
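The metrics above follow directly from a 2x2 table comparing PCR against the reference test. As a hedged illustration, the sketch below recomputes them in Python from counts (TP = 24, FP = 6, FN = 10, TN = 86) that are inferred from the reported percentages rather than taken from the paper's tables; the Kappa value it yields is likewise an inference, not a figure from the study.

```python
# Sketch: recomputing the reported diagnostic metrics from a 2x2 table.
# The counts below are ASSUMED (back-calculated from the reported
# percentages, with isolation in cell culture as the reference test).

def diagnostic_metrics(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                 # positive predictive value
    npv = tn / (tn + fn)                 # negative predictive value
    validity = (tp + tn) / n             # overall agreement (accuracy)
    # Cohen's kappa: chance-corrected agreement between the two tests
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (validity - pe) / (1 - pe)
    return sensitivity, specificity, ppv, npv, validity, kappa

se, sp, ppv, npv, val, kappa = diagnostic_metrics(tp=24, fp=6, fn=10, tn=86)
print(f"sensitivity={se:.2%} specificity={sp:.2%} "
      f"PPV={ppv:.2%} NPV={npv:.2%} validity={val:.2%} kappa={kappa:.2f}")
```

Up to rounding, these assumed counts reproduce the reported sensitivity (70.58%), specificity (93.47%), predictive values (80% and 89.58%), and validity (87.30%).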
The degree of concordance between diagnostic tests using the Kappa coefficient was as follows (Landis and Koch, 1977
Discussion
In Mexico, Chlamydiosis in goats caused by C. abortus is considered an exotic disease by the sanitary authorities (SAGARPA, 2007); however, evidence of its presence is becoming increasingly common. This study proved, through isolation and identification of the microorganism, the presence of C. abortus in the investigated herds, strongly suggesting that the abortions that occurred shortly before sampling could have been caused by C. abortus. To date, only a few isolations of Chlamydia have been reported; mostly serological and molecular proof of the presence of C. abortus exists, and there is no evidence of disease caused by C. pecorum (Mora-Díaz et al., 2009; Aguilar et al., 2011; Campos-Hernandez et al., 2014).
In 2007, a total of 1105 goat sera from several Mexican states where the goat-raising industry has developed (Tlaxcala, Estado de México, San Luis Potosi, Guanajuato, and Queretaro) were analyzed using the IDEXX Chlamydiosis Verification Test (IDEXX Laboratories Inc.). From these sera, a global seropositivity of 3.17% for C. abortus was found, with a variation of 0 to 24% among the flocks. However, because this is considered an exotic disease, there are still no approved diagnostic tests in Mexico that can be utilized routinely in diagnostic laboratories, a fact that makes detecting this disease even more complicated. It is important to have standardized tests that can be used quickly whenever suspicions of the existence of such exotic diseases emerge. This study demonstrated that PCR is an effective tool for demonstrating the presence of this disease in a region that is considered at risk. However, it is important to use various techniques during the process of diagnosing this disease because, while ELISA is a highly sensitive test, it is not indicative of the presence of disease but only of exposure to the etiologic agent. Thus, the 3 diagnostic methods tested are valuable and complementary in zones where Chlamydia is suspected to cause abortions.
The decision to work with the 6 dairy goat herds from the state of Guanajuato was made because their owners mentioned that they had experienced problems with abortions in recent years during the final trimester of gestation and had experienced births of weak offspring that died shortly after parturition.This occurred even though they are all Brucellosis free; Brucellosis is an endemic disease that can cause abortions.
It should be stressed that of the 7 animals that tested positive in all of the tests included in this study, 2 had aborted one month before the samples were taken. This indicates that even one month after aborting, they continued shedding bacteria through the vagina.
It is of the utmost importance to emphasize that the herds included in this study contain animals of high genetic value whose reproductive potential is diminished by infections with C. abortus.We estimate that in these herds, the losses caused by the presence of an abortion are approximately $300.00 US dollars.Furthermore, these same production units sell breeding stock to replace those used in other goat-raising regions in Mexico.Thus, it is important to implement a program designed to control this disease, and this program should be based on a combination of accessible diagnostic tests.
Conclusions
Although other agents known to cause abortions in small ruminants are considered exotic in Mexico and therefore are difficult to investigate due to lack of reagents, we demonstrated that C. abortus is present in Mexican dairy goat herds as determined by serological and molecular tests and finally by bacterial isolation.Even though
Table 1 .
Number of animals tested per flock (A-F) and number of positive samples obtained with 3 different tests.
Table 2 .
A comparison of 3 diagnostic tests for small ruminant enzootic abortion. The PCR and bacterial isolation were performed from vaginal swabs of goats that had recently given birth or had aborted. The ELISA was conducted with the commercial IDEXX Chlamydiosis Verification Test kit (IDEXX Laboratories Inc.).
(Aguilar et al., 2011). In 2010 and 2011, the same ELISA test was used in a study conducted in six regions that are considered Mexico's main goat-raising zones: Puebla, Guerrero, Baja California Sur, Comarca Lagunera, Tlaxcala, and San Luis Potosi, and positivity percentages of 0.18%, 4%, 5%, 7.3%, 10%, and 11%, respectively, were found. In that study, researchers collected samples randomly from goats older than 2 years that were raised in production units with a history of abortions (Aguilar et al., 2011). Campos-Hernandez et al. (2014) demonstrated a high seroprevalence and molecular identification of C. abortus in commercial dairy goat farms in a tropical region of Mexico, although no isolation of the microorganism was achieved. Their results, together with those from the present study, clearly indicate that EASR is indeed present in Mexico.
Treatment patterns and outcomes in older patients with advanced malignant pleural mesothelioma: Analyses of Surveillance, Epidemiology, and End Results‐Medicare data
Abstract

Background: Malignant mesothelioma is a rare neoplasm associated with asbestos exposure. Characterizing treatment patterns and outcomes of older patients with advanced malignant pleural mesothelioma (MPM) is important to understand the unmet needs of this population.

Aim: To evaluate the demographic and clinical characteristics, treatment patterns, and outcomes among older patients diagnosed with advanced MPM in the United States between 2007 and 2013.

Methods: This was a retrospective cohort study using Surveillance, Epidemiology, and End Results (SEER) data linked with Medicare claims. We included patients who were age 66 or older at the time of their primary MPM diagnosis between 2007 and 2013 and followed them through 2014. Treated patients who received first-line chemotherapy with pemetrexed and platinum within 90 days of diagnosis, second-line, or third-line therapy were identified for evaluation of outcomes.

Results: There were 666 older patients with advanced MPM, of whom 82% were male, 87% White, 78% stage IV, and 70% had no mobility limitation indicators at diagnosis. There were 262 patients who received first-line chemotherapy for advanced MPM, most of whom (80%; n = 209) received pemetrexed-platinum. Of these 209 patients, 41% (n = 86) initiated second-line therapy, and 26% (n = 22) initiated third-line therapy. Median overall survival for the cohort of 209 patients was 7.2 months. Patients with epithelioid histology had better median overall survival (12.2 months) compared with other histologies (4.4–5.6 months). Within 90 days of diagnosis of advanced MPM, 78% of patients were hospitalized, 52% visited an emergency department, and 21% had hospice care. The 2-year cost of care was over $100 000 for all patients with advanced MPM treated with first-line pemetrexed-platinum.
Conclusions: Although first-line systemic anticancer treatment was generally consistent with guidelines (e.g., pemetrexed-platinum), poor patient outcomes highlight the need for effective treatment options for older patients with advanced MPM.
MPM presents with gradually worsening, nonspecific pulmonary symptoms, typically in patients older than 60 years of age, decades after exposure to asbestos. 3 Criteria for staging MPM have been developed, but are regarded as difficult to apply accurately before surgery in clinical practice. 4,5 The prognosis of patients with advanced MPM is poor, with overall survival (OS) ranging from 9 to 17 months after diagnosis. 6 Mortality varies by underlying histology, with the epithelioid subtype associated with the longest median OS (11.1 months) and fibrous subtypes the shortest (3.6 months). 7 Historically, the main treatment options for MPM included surgery, chemotherapy, and radiation therapy. 5 Although surgery is associated with improved survival, 6 it is generally not an option for patients with advanced MPM. 8 Guidelines recommend nivolumab plus ipilimumab as a preferred first-line treatment option for patients with unresectable biphasic or sarcomatoid MPM; it is also an option (category 1) for patients with epithelioid histology. 5,8,9 Pemetrexed and cisplatin, with or without bevacizumab, is another preferred first-line treatment option for patients with MPM (category 1). 5 Patient selection is illustrated in Figure S1. MPM was considered advanced if it was classified as T3, T4, N3, or M1 using the 6th edition of the American Joint Committee on Cancer staging manual. 14 Furthermore, the cancer had to be the patient's first, primary cancer, and it had to be microscopically confirmed. Patients must have been enrolled in Medicare Part A and Part B for at least 12 months before diagnosis; because patients enroll in Medicare at age 65, they must therefore have been aged ≥66 at the time of diagnosis to ensure a sufficient "look-back" period prior to diagnosis. Patients were excluded for the following reasons: diagnosis by death certificate or autopsy, death in the month of diagnosis, or receipt of systemic therapy before the month of SEER diagnosis.
| Study time period
The index date was defined as the first day of the month of diagnosis because a specific day is not provided by SEER. The observation period comprised a baseline period spanning ≥12 months before and including the index date (and no earlier than January 1, 2006), and a follow-up period beginning immediately after the index date and continuing until death, second primary cancer, enrollment in a Medicare health maintenance organization, or the end of available records (December 31, 2014), whichever came first ( Figure S1). Baseline clinical and demographic characteristics were assessed during the baseline period. Outcomes, including patterns of care, were assessed during the follow-up period.
| Study variable definitions
Patient demographic and clinical characteristics, including age, sex, race, indicators related to socioeconomic status, and tumor characteristics, were derived from the SEER and Medicare databases. Histology was categorized as epithelioid, non-epithelioid (sarcomatoid or biphasic), and mesothelioma not otherwise specified (NOS). Because performance status is not available in SEER, we used a proxy based on claims-based indicators of mobility limitations, including the use of oxygen and related respiratory therapy supplies, wheelchair and supplies, home health agency use, and skilled nursing facility use. The presence of at least one of these claims-based indicators of mobility limitations has been identified as an important predictor for outcomes associated with cancer treatment. 15 Outpatient chemotherapy was defined using Healthcare Common Procedure Coding System codes for infused chemotherapy, as well as National Drug Codes for oral therapies with intravenous equivalents.
Systemic therapies for MPM were defined as any agents used by at least three patients to exclude therapies likely to be unrelated to the treatment of pleural mesothelioma. Selected results for all first-line patients, including those who did not receive pemetrexed-platinum, are provided in Supporting Information.
| Statistical analyses
Patient counts less than 11 are not reported to ensure patient privacy, as required by the data use agreement with National Cancer Institute. Categorical variables were summarized by frequencies and proportions, and continuous variables were summarized by means and SDs, or medians and interquartile ranges (IQR) as appropriate. OS over time was calculated using unadjusted Kaplan-Meier estimator. Cox proportional hazards regression was used to identify factors associated with mortality for each cohort.
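As a rough illustration of the unadjusted Kaplan-Meier estimator mentioned above, the following library-free Python sketch computes a survival curve and a median from made-up toy data. None of the times or events below come from the SEER-Medicare cohort, and a real analysis would typically use a survival package (e.g., R's survival or Python's lifelines) rather than this hand-rolled version.

```python
# Minimal Kaplan-Meier sketch. Input: follow-up times in months and an
# event flag (1 = death, 0 = censored). All values are HYPOTHETICAL.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each distinct event (death) time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        total_at_t = sum(1 for tt, _ in data if tt == t)
        if deaths > 0:                      # censoring alone does not drop S(t)
            surv *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= total_at_t             # both deaths and censorings leave the risk set
        i += total_at_t
    return curve

toy_times  = [2, 3, 3, 5, 7, 7, 8, 10, 12, 14]
toy_events = [1, 1, 0, 1, 1, 1, 0,  1,  0,  1]
km = kaplan_meier(toy_times, toy_events)
# Median OS convention: first event time at which S(t) drops to <= 0.5
median_os = next(t for t, s in km if s <= 0.5)
```

This kind of curve is what underlies the median OS and confidence intervals reported in the Results (e.g., the 7.2-month median for the advanced MPM cohort).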
Cumulative total Medicare reimbursements were calculated using inverse probability of censoring weights. 16
| Cohort characteristics
In the overall advanced MPM, first-line pemetrexed-platinum, and second-line cohorts, respectively, mean ages were 77.4, 75.0, and
| Overall survival
Approximately 90% of patients with advanced MPM died during follow-up. Median OS from time of diagnosis for the advanced MPM cohort was 7.2 months (95% CI: 6.6–8.2; Figure 2; Table S5). For the overall advanced MPM cohort, several demographic factors were significantly associated with worse OS outcomes. Other studies have used the SEER data (without the Medicare linkage) to evaluate factors associated with OS, including histology. 6,7,22 These also showed consistently poor OS in mesothelioma and that epithelioid histology was associated with the best OS, followed by NOS and then non-epithelioid histology. Because these were limited to SEER data, they could not evaluate systemic therapy or the effects of risk factors unrelated to cancer or its treatment (e.g., comorbidity, mobility limitations, and rural/urban status). In conclusion, in older patients with advanced MPM, low treatment rates and poor OS were observed across lines of therapy.
Although most patients received treatment consistent with guidelines (e.g., pemetrexed-platinum), their poor outcomes highlight the areas where more effective treatment options could benefit older patients with advanced MPM.
ACKNOWLEDGMENTS
All authors contributed to and approved the manuscript. Lubeck: Conceptualization (lead); data curation (lead); formal analysis (lead); writing - review and editing (lead).
DATA AVAILABILITY STATEMENT
The Data Use Agreement for the SEER-Medicare data do not permit sharing of the data.
ETHICS STATEMENT
The study was exempted from institutional review board (IRB) approval because it used de-identified patient data (Quorum IRB, Protocol Exemption Determination 31309).
Coping Strategies and Perceiving Stress among Athletes during Different Waves of the COVID-19 Pandemic—Data from Poland, Romania, and Slovakia
Coronavirus disease (COVID-19), an infectious disease caused by the SARS-CoV-2 virus, has affected numerous aspects of human functioning. Social contacts, work, education, travel, and sports have drastically changed during the lockdown periods. The pandemic restrictions have severely limited professional athletes’ ability to train and participate in competitions. For many who rely on sports as their main source of income, this represents a source of intense stress. To assess the dynamics of perceived stress as well as coping strategies during different waves of the COVID-19 pandemic, we carried out a longitudinal study using the Perception of Stress Questionnaire and the Brief COPE on a sample of 2020 professional athletes in Poland, Romania, and Slovakia. The results revealed that in all three countries, the highest intrapsychic stress levels were reported during the fourth wave (all, p < 0.01) and the highest external stress levels were reported before the pandemic (p < 0.05). To analyze the data, analyses of variance were carried out using Tukey’s post hoc test and η2 for effect size. Further, emotional tension was the highest among Polish and Slovak athletes in the fourth wave, while the highest among Romanian athletes was in the pre-pandemic period. The coping strategies used by the athletes in the fourth wave were more dysfunctional than during the first wave (independent t test and Cohen’s d were used). The dynamics of the coping strategies—emotion focused and problem focused—were also discussed among Polish, Romanian, and Slovak athletes. Coaches and sports psychologists can modify the athletes’ perceived stress while simultaneously promoting effective coping strategies.
Introduction
SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2) caused the coronavirus disease 2019 (COVID-19), which was classified as a pandemic by the World Health Organization in March of 2020, only a few months after the first case was detected in December of 2019 [1]. The dynamics of transmission, severe health consequences, and high mortality of infected patients were reported in many societies. Thus, many governments decided to introduce restrictions aimed at limiting the transmission of the virus [2]. Personal protective measures, such as face masks, frequent hand disinfecting, and social distancing, proved insufficient to stop the pandemic entirely. Many countries also introduced lockdowns, which forced people to stay home, thus limiting physical and sports activity [3].
The lockdowns were intended to protect against infection but, unfortunately, they resulted in negative health consequences, such as increases in weight, anxiety, and depression [4,5]. Individual countries tried to limit the spread of COVID-19 by means of various restrictions and by confining people to their homes, situations that impacted the general population's mental health (people did not have adequate space in which to work or exercise) [6,7]. It was assumed that COVID-19's droplet transmission may be facilitated by strenuous physical activity, which results in deep lung ventilation [8].
Professional athletes who rely on sports as their main source of income found themselves in a difficult situation, as their physical activity was reduced and, thus, their income from broadcasting competitions, advertising, and awards dropped to almost zero. Restricting sports activities had a negative effect on athletes' health, but it was not effective in preventing the spread of COVID-19 in this population group [16]. Athletes' mental status naturally influences their functioning. A global sense of threat, social isolation, and uncertainty about the future may lead to anxiety, depression, and chronic stress [17].
Effective coping strategies reduced the stress experienced by athletes during the COVID-19 pandemic [15,18]. More specifically, positive reframing helps athletes maintain a positive mood state and reduce distress, while self-blame and behavioral disengagement are coping strategies that negatively influence athletes' mood. Coping represents the conscious use of affective, cognitive, or behavioral efforts to deal effectively with demands or events that the individual perceives as potentially harmful or unpleasant [19]. The goal of coping efforts is to reduce psychological distress, improve mental well-being, and reduce physiological reactions that may impair performance [20,21]. The coping literature [22,23] discusses identifying the athlete's cognitive appraisal of the situation in which he/she is found (in training or competition). Lazarus and Folkman [23] categorized appraisal as non-stressful (positive, harmless) or stressful, while stress appraisals were designated as threatening or challenging. Hoar et al. discussed (within another appraisal framework) the importance of perceived control. As the authors mentioned, given that perceived control can change over time, coping responses can become more or less effective (even well-learned coping strategies can be modified as a consequence of environmental demands) [24]. Considering problem-focused and emotional-approach coping, the specialized literature underlines that men engage in more problem-focused coping, whereas women resort to more emotional-approach coping [25]. As stated, men who use more emotion-focused coping strategies seemed to register higher levels of positive affect.
During the COVID-19 pandemic, physical activity can "generate and maintain resources combative of stress and protective of health" [26]. Significantly lower mental and physical health was found in individuals with the highest decrease in physical activity during the pandemic. Considering professional cyclists, those who followed a Sport Psychology Intervention (online) during the pandemic coped better with sport psychological stressors (no significant improvements were found, however, for sport social and for sport emotional well-being factors) [27]. In order to cope with negative psychological effects arising from the pandemic, researchers discussed the benefits of mental toughness training in the case of athletes [28], important aspects, since professional athletes obtained lower values for the agreeableness factor (during the pandemic crisis), compared to nonprofessionals [29]. Counselling athletes in an unprecedented situation, as in the COVID-19 pandemic, is very important for acquiring healthy behaviors. For example, mindful activities related to the body-the experience of one's body as trustworthy and safe-could reduce distress in athletes and increase positive stress [14]. Training regimens should be introduced as standard habits for well-being and health, especially for women and novice athletes, who registered higher levels of negative stress (distress) [13].
Each country is characterized by different dynamics of COVID-19 infections, resulting from, among others, the implemented prevention methods, the number of social contacts and foreign travels, national age average, healthcare quality, and economic conditions.
The purpose of the current research was to investigate the dynamics of stress perceived by Polish, Romanian, and Slovak athletes during the first four waves of the COVID-19 pandemic and to establish the changes in coping strategies between the first and fourth waves. The following research questions were put forward:
1. What were the dynamics of emotional, external, and intrapsychic stress before the pandemic and during different waves among athletes in Poland, Romania, and Slovakia (country split and wave split)?
2. What were the dynamics of perceived emotional tension, external stress, and intrapsychic stress in the total sample of athletes (regardless of country) throughout the research periods (until the fourth wave of the pandemic)?
3. What are the differences in the frequency of using strategies of coping with stress among athletes in the first and fourth waves of the pandemic?
Design
Data collection was carried out from November 2019 to January 2020 (the pre-pandemic period) as well as during or in the proximity of the first, second, third, and fourth waves of the COVID-19 pandemic (see Table 1). For example, in Romania, the peak of illnesses was recorded on 18 November 2020 (for the second wave) and on 25 March 2021 for the third wave. It is worth mentioning, also, that the first case of the SARS-CoV-2 coronavirus appeared in Poland on 4 March 2020, in Romania on 25 February, and in Slovakia on 6 March 2020. When completing the Perception of Stress Questionnaire, the instruction was as follows: "Please describe your thoughts, behaviors, fears and hopes as you have experienced them lately (in the last few weeks) and currently". Data collection was carried out in Poland, Romania, and Slovakia and was concluded at the beginning of 2022. It is important to emphasize that in November 2019 (when data collection began) nobody knew about COVID-19. The original research idea of analyzing the stress experienced by athletes from Poland, Romania, and Slovakia at time t 0 (the moment they completed the survey) and the coping strategies used were restructured to conduct a longitudinal study, in order to observe the dynamics of athletes' perceived stress (during different waves of the pandemic) and coping strategies used to deal with stress.
Participants
A total of 2020 professional athletes took part in the study, practicing various sports disciplines: handball, soccer, martial arts (kickboxing, judo, fencing, karate, MMA, taekwondo), rugby, basketball, athletics, aerobic and artistic gymnastics, volleyball, tennis, and swimming (a total of sixteen sports disciplines in each country). The inclusion criteria were a career of at least two years of training in a specific sport branch, under the supervision of a coach and a minimum age of 18 years (seniors). Athletes have been practicing the sports disciplines (in the entire sample) for an average of 8.3 years. About 82% of the participants achieved local/regional level performances, approximately 12% registered national performances (being national champions, vice-champions, or being part of the national teams in the branch of sport practiced), while about 6% obtained international results (at World or European level, only martial arts athletes). In each research period/wave of the pandemic and in each country, athletes having local/regional, national, and international performances were investigated. No missing values were identified due to the online survey/submission in which all items had to be rated. In the preliminary analysis of the data, using stem and leaf, eighteen cases (in the total sample) were recognized as outliers and excluded from further investigation. Thus, we retained 2020 athletes (from the total sample of 2038 eligible athletes). It is relevant, also, to highlight that approximately 70% of the athletes tested in each research period/in every wave of the pandemic (in each country) were tested also in the pre-pandemic period. Table 1 shows the athletes' descriptive statistics divided by gender, age, and data collection period.
The data collection carried out in the third wave of the pandemic (from April to the end of June 2021) presented a smaller number of people surveyed. To avoid a sample size reduction, 3rd wave data were not included in subsequent statistical analyses.
Instruments
Personal data were collected using an ad hoc questionnaire regarding personal and sociodemographic data. It comprised four items measuring the participants' age, gender, years of training, and sport type.
Stress was measured using the Perception of Stress Questionnaire, comprising 21 items which form three scales: Emotional tension (7 items, e.g., "I get nervous more often than I used to, and for no obvious reason"), External stress (7 items, e.g., "I feel drained by constantly having to prove I am right"), and Intrapsychic stress (7 items, e.g., "Thinking about my problems makes it hard for me to fall asleep") [30]. The generalized stress level (total score) is the sum of the Emotional tension, External stress, and Intrapsychic stress scales. Participants answer each item on a five-point Likert-type scale from 1 (definitely disagree) to 5 (definitely agree). The Cronbach's α reliability coefficient in the Polish sample was as follows: emotional tension: from 0.75 to 0.81; external stress: from 0.68 to 0.74; intrapsychic stress: from 0.77 to 0.80. The Cronbach's α reliability coefficient in the Romanian sample was as follows: emotional tension: from 0.59 to 0.79; external stress: from 0.65 to 0.82; intrapsychic stress: from 0.72 to 0.85. The Cronbach's α reliability coefficient in the Slovak sample was as follows: emotional tension: from 0.68 to 0.78; external stress: from 0.63 to 0.75; intrapsychic stress: from 0.72 to 0.80. The Perception of Stress Questionnaire has been used in studies of athletes [30], including Romanian and Slovak athletes. The translation of the questionnaire into Romanian and Slovak was carried out with the consent of the author (Ryszard Makarowski) and in line with his recommendations. First, the original Polish version was translated into English and then back-translated into Polish by translators with psychological experience. The final Romanian and Slovak versions were created from the English version through retroversion, compared, and used in the study (this procedure has been used in previous research [31]).
Using the Brief COPE questionnaire, we measured the strategies of coping with stress. It comprises 28 items covering 14 coping strategies: self-distraction, active coping, denial, substance use, use of emotional support, use of instrumental support, behavioral disengagement, venting, positive reframing, planning, humor, acceptance, religion, and self-blame (two items for each strategy) [32]. The participants indicate their frequency of using each coping strategy on a four-point Likert-type scale, from 1 (I have not been doing this at all) to 4 (I have been doing this a lot). In all data collection periods, the Cronbach's α reliability coefficients for each subscale in the Polish, Romanian, and Slovak versions ranged from 0.48 to 0.94.
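The Cronbach's α coefficients reported for both questionnaires follow the standard formula α = k/(k−1) · (1 − Σ item variances / variance of total scores). As an illustration, the sketch below applies it to two hypothetical Likert-scored items of a coping subscale; the responses are invented, not data from the study.

```python
# Cronbach's alpha sketch for a k-item scale. Item responses below are
# HYPOTHETICAL 1-4 Likert ratings from five imaginary respondents.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

def cronbach_alpha(items):
    """items: one list of responses per questionnaire item."""
    k = len(items)
    item_var_sum = sum(variance(it) for it in items)
    totals = [sum(resp) for resp in zip(*items)]  # per-respondent total score
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

item1 = [1, 2, 3, 4, 4]
item2 = [2, 2, 3, 3, 4]
alpha = cronbach_alpha([item1, item2])
```

Values near the lower end of the reported range (0.48 for some Brief COPE subscales) indicate weak internal consistency, while values around 0.8 and above indicate good reliability.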
The 14 coping strategies of the Brief COPE questionnaire can be grouped in several ways. In the current study, we decided to divide them into three groups: emotion-focused strategies (emotional support, positive reframing, acceptance, religion, humor), problem-focused strategies (active coping, planning, use of informational support), and dysfunctional strategies (venting, denial, substance use, behavioral disengagement, self-distraction, self-blame), according to the model by Su et al. [33]. This model was also used in other studies on athletes and other samples in many countries [34][35][36][37][38].
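The grouping above is just a fixed mapping from the 14 strategies to three categories. The sketch below encodes it with hypothetical strategy scores, averaging within each category (one common choice; the study does not specify how group scores were aggregated).

```python
# Grouping the 14 Brief COPE strategies into the three higher-order categories
# used in the text (after Su et al. [33]). Strategy scores are assumed to be
# the sum of each strategy's two items (range 2-8).
COPING_GROUPS = {
    "emotion_focused": ["emotional_support", "positive_reframing",
                        "acceptance", "religion", "humor"],
    "problem_focused": ["active_coping", "planning", "informational_support"],
    "dysfunctional": ["venting", "denial", "substance_use",
                      "behavioral_disengagement", "self_distraction",
                      "self_blame"],
}

def group_scores(strategy_scores):
    """Average the strategy scores within each higher-order category."""
    return {group: sum(strategy_scores[s] for s in strategies) / len(strategies)
            for group, strategies in COPING_GROUPS.items()}

# Hypothetical athlete: one score per strategy, with elevated active coping.
athlete = {s: 4 for group in COPING_GROUPS.values() for s in group}
athlete["active_coping"] = 8
```

Averaging (rather than summing) keeps the three category scores comparable even though the categories contain different numbers of strategies.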
Procedure
Participants were informed about the study aim and procedure. They were also informed about the anonymity of the collected data and the right to withdraw their participation at any time without having to provide a reason. Informed consent was obtained from all participants. Furthermore, this study was conducted in accordance with the recommendations of the Declaration of Helsinki, the Polish Psychological Association's Psychologist's Code of Ethics, the Slovak Psychological Association, and the Romanian Psychological Association and it was approved by the Ethics Committee of the National University of Physical Education and Sports in Bucharest, Romania (ID: 1185).
Data Analysis
All the standard statistical analyses were conducted using the Statistica v. 13 software. Data were presented as means and standard deviations. Analysis of variance was carried out using Tukey's test for unequal sample sizes. An independent t-test was also used. Statistical significance was set at a p-value of ≤0.05 and effect size (Cohen's d) was interpreted as follows: ≤0.2, trivial; >0.2, small; >0.6, moderate; >1.2, large; >2.0, very large; >4.0, nearly perfect [39]. For η², the range intervals were: 0.01, small effect; 0.06, medium; 0.14, large effect [40]. All variables were normally distributed, with skewness coefficients in absolute value being less than 1 [41]. The assumption of the equality of variance was verified by Levene's test (p > 0.05, see Table 2). Table 2 shows the analysis of variance results with the pandemic stage, that is, the pre-pandemic period and all four subsequent pandemic waves, as the grouping variable. The analyses were carried out separately for each national subsample (country split). Considering the significant differences observed between the research periods/waves of the pandemic (in each country and for each subscale of the Polish, Romanian, and Slovak versions), the d value ranged from 0.22 to 0.80 (the smallest effect sizes were observed in each country for the total score).
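The effect-size conventions quoted above translate directly into code. Below is a minimal sketch of pooled-SD Cohen's d together with the threshold labels from [39] and the η² conventions from [40]; the label functions are illustrative helpers, not part of the original analysis pipeline.

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = a.size, b.size
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

def d_label(d):
    """Qualitative label for |d| using the cut-offs quoted in the text [39]."""
    d = abs(d)
    for cutoff, label in [(4.0, "nearly perfect"), (2.0, "very large"),
                          (1.2, "large"), (0.6, "moderate"), (0.2, "small")]:
        if d > cutoff:
            return label
    return "trivial"

def eta_sq_label(eta_sq):
    """Qualitative label for eta-squared using the conventions in [40]."""
    if eta_sq >= 0.14:
        return "large"
    if eta_sq >= 0.06:
        return "medium"
    return "small"
```

For example, comparing [1, 2, 3, 4] with [2, 3, 4, 5] gives |d| ≈ 0.77, a "moderate" effect under these conventions.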
Results
The obtained results show that the highest overall stress levels among Polish and Slovak athletes were reported during the fourth wave of the pandemic. Romanian athletes reported the highest overall stress in the pre-pandemic period. In all three countries, the highest intrapsychic stress levels were reported during the fourth wave and the highest external stress levels were reported before the pandemic. Emotional tension was the highest among Polish and Slovak athletes in the fourth wave and, among Romanian athletes, before the pandemic. η2 (the overall effect size) indicates, generally, small or moderate to small differences between the examined waves of the pandemic (considering athletes' perceived stress). Only for intrapsychic stress were moderate to strong differences found (in Romanian and Slovak athletes).
The obtained data show that all stress dimensions significantly decreased during the first and second wave of the pandemic but increased significantly during the fourth wave (except external stress, where no significant differences were found compared to the first two waves). The highest increase was observed for intrapsychic stress. The η² values (the overall effect sizes) were 0.02 (emotional tension), 0.03 (external stress), and 0.06 (intrapsychic stress), emphasizing moderate to small (respectively, medium) differences between the research periods/waves of the pandemic, taking into account (in each investigated wave) the total sample of athletes (regardless of country). Table 3 shows the differences in perceived stress between countries, within each wave of the pandemic (wave split), among the surveyed athletes. The highest level of stress in individual waves of the pandemic occurred in Slovakia. The lowest level of general stress was recorded in athletes from Romania (except for the tests performed before the pandemic). The overall effect size (η²) shows, generally, small or moderate to small differences between the three countries in athletes' perceived stress, in each examined wave of the pandemic.
To examine whether and how the frequency of using coping strategies by athletes changed between the first and fourth waves of the pandemic, an independent t-test was carried out. The results are shown in Table 4. Analyzing the dynamics of coping strategy use between the first and fourth waves of the COVID-19 pandemic, it can be observed that emotion-focused strategies became less frequent among Polish athletes. Regarding individual coping strategies, behavioral disengagement and venting became more frequent, while planning, positive reframing, and humor became less frequent. No significant changes in the frequency of the other individual coping strategies were observed.
Among Romanian athletes, dysfunctional strategies became more frequent. Regarding individual coping strategies, a decrease in the frequency of using active coping and an increase in using behavioral disengagement were observed. None of the other individual coping strategies was used significantly more frequently in the fourth wave.
In the Slovak athlete subsample, the frequency of using problem-focused strategies and dysfunctional strategies increased. Regarding individual coping strategies, active coping, planning, acceptance, and venting became more frequent. None of the other individual coping strategies was used significantly less or more frequently in the fourth wave.
It is worth noting that dysfunctional strategies became noticeably more frequent in each national subsample during the fourth wave (this difference was not statistically significant in the Polish athlete subsample).
Discussion
The emergence of the COVID-19 pandemic changed the structure and functioning of the world as we know it, permanently and suddenly. Such a significant threat has not been experienced by many European countries for a long time. The coronavirus disease has impacted nearly all aspects of human functioning. Numerous strains have increased the intensity of experienced stress and initiated the activation of coping strategies [42][43][44]. Social contacts, working, transport, spending free time, and engaging in physical activity have changed noticeably. For professional athletes, limiting the opportunities for training and canceling or delaying sports events represented a significant challenge [45]. These situations occurred due to the lockdown periods (as preventive measures for reducing COVID-19 spread) and because of the infections (or the fear of infection) of athletes, coaches, sports managers, and organizers of sports competitions. In an attempt to identify most of the infected athletes worldwide before August 2020 (according to gender, age, symptoms, sport level, or location of the contraction of infection), researchers found 521 COVID-19-positive athletes [46]. It seems that most infected athletes practiced soccer and basketball (as the authors asserted, these cases do not represent all infected athletes). Globally, as of 5:44 p.m. CEST on 24 August 2022, there had been 595,219,966 confirmed cases of COVID-19 reported to the World Health Organization (WHO) [47]; however, it is very difficult to identify the number of COVID-19 infections among athletes (more so as there are professional, amateur, college, junior, and senior athletes, and some of them were asymptomatic).
Significantly restricted training opportunities, canceled or delayed events, and reduced income have compounded the universal concerns about one's own health and the health of one's family. Thus, the aim of the current study was to identify the stress level dynamics during the first four waves of the COVID-19 pandemic in Polish, Romanian, and Slovak athletes, and to establish the changes in coping strategies between the first and fourth waves. The dependent variables were the studied stress dimensions, namely emotional tension, external stress, and intrapsychic stress. The independent variables were country (Poland, Slovakia, and Romania) and research period (before the pandemic and the first, second, and fourth waves of the pandemic). An additional dependent variable was the way of coping with stress. The results revealed that in all three countries, the highest intrapsychic stress levels were reported during the fourth wave and the highest external stress levels were reported before the pandemic. Further, the coping strategies used by the athletes in the fourth wave were more dysfunctional than during the first wave.
Perceived stress levels among athletes differed depending on the country. The highest level of stress in individual waves of the pandemic was reported by Slovak athletes, while the lowest level of general stress was registered in athletes from Romania (except for the tests performed before the pandemic). Small or moderate to small differences were observed between the three investigated countries, when talking about athletes' experienced stress (in individual waves). The significant differences found (between countries) could be related to numerous variables: the intensity of the pandemic, the current economic and political situation in the country, and employment stability. The Human Development Report, which indicates the quality of life in a given country [48], is also important in this context.
In all countries, there was a noticeable trend of overall stress levels decreasing from the pre-pandemic period, remaining at a lower level throughout the first and second waves of the pandemic, and then increasing during the fourth wave. Significant differences were observed in each investigated country (studied separately), between the waves of the pandemic, as well as small or moderate to small effect sizes (for emotional tension and external stress). In the case of intrapsychic stress, a moderate to strong effect size (η²) was found in Romanian and Slovak athletes. Furthermore, when the three countries were studied together (the total sample), moderate to small (respectively, medium for intrapsychic stress) differences between the research periods/waves of the pandemic were highlighted. Similar results were observed for martial arts practitioners from Poland and Romania and also in athletes practicing various sports disciplines (non-martial-arts athletes) from Poland, Slovakia, and Romania, with stress levels decreasing during the height of the pandemic (during the lockdown and first wave), compared to the pre-pandemic period [30]. However, there are also studies that underline that about one month after the beginning of the lockdown (first wave), perceived stress increased in Italian athletes from individual and team sports [13] (martial arts athletes were not included in the sample).
Such differences (a significantly lower level of stress during the first three waves of the pandemic) can be explained when considering the psychological and social mechanisms behind the observed trend: it can be assumed that several phenomena co-occurred. First, habituation led to a lower intensity of reaction to a repeated stimulus. Due to a long-term presence of a constant stimulus, the stress reaction becomes reduced and, in time, it may become extinct [49][50][51]. Habituation has an adaptive function, as it allows for economical use of the individual's emotional and cognitive resources. Stress could have also decreased due to the cancellations or delays of upcoming sports events [52]. For many athletes, participation in competition involves intense psychological stress. In various sports disciplines (e.g., volleyball, tennis, track and field, cycling, boxing, soccer), higher perceived stress levels were observed upon resumption of competitions [14]. Further, for example, in swimmers, a significant release of stress hormones was observed, as a result of physical and mental stress associated with sports competition [53]. Cancelling or delaying such events may reduce psychological tension. Another reason for lowered stress levels could be the reduction in intensity of training and work, with the resulting rest and isolation allowing for a regeneration of psychological resources [54,55]. Due to training and competitions being limited, the perceived level of rivalry also decreased, as all athletes found themselves in similar circumstances [56,57]. It is worth underlining that the restrictions and possibilities in each time span (and in each investigated country) were relatively the same: during the lockdown period, coaches and athletes worked exclusively on different remote learning platforms (online) at home. When the conditions relaxed, athletes were able to practice outdoors, on sports fields or in parks, respecting the measures of social distancing.
There are differences between sports branches: some athletes, such as runners, were able to follow their training plans more easily during the COVID-19 pandemic (there was almost no break in training). Sports competitions were also organized in Poland, Romania, and Slovakia (and televised), but without spectators (for months). Only coaches and athletes had access to the competition hall/area, and they were previously tested for COVID-19. It is also important to mention that the vaccination campaign began (in the three countries) at the end of 2020 (in Romania, on 27 December 2020). For example, during the third wave of the pandemic (on 7 April 2021), according to the National Institute of Public Health [58], only 1,288,487 Romanians (about 6.5% of the population) had been vaccinated with both doses.
Sport is a stress-generating environment, as unpleasant remarks coming from supporters and noise from the stands (in many sports) increase athletes' stress levels [59,60]. The absence of spectators (during the first waves of the pandemic) can, therefore, reduce experienced stress in athletes. Here, also, we emphasize differences between sports disciplines, with team sports athletes feeling less negative stress [61] and reporting less anxiety and depression than in individual sports [62], while novice performers registered higher perceived stress than top athletes (potentially reflecting their less-adapted coping resources) [13]. Further, athletes with high athletic identity are less prone to higher levels of psychological distress compared to athletes with low athletic identity [61].
The pandemic has brought into focus the fundamental human issues of health and survival. Among stressors for athletes were the fear of COVID-19 infection (the fear of health deterioration), weight change, exercising at home, monthly income perception, and damaged performance in COVID-19 infection [63] (it is important to mention that athletes' trait anxiety values were below average). The risk of infection when rigorously following the hygiene and social isolation protocols is minimal and other life events are unable to cause intense stress reactions in the context of a global pandemic. When fundamental issues, such as health and survival, are threatened, people appraise everyday problems differently [64,65]. Moreover, the reduction in sports events resulted in a significant reduction in athletes' media exposure, which could have been a major source of pressure [66,67]. The increase in stress during the fourth wave of the pandemic could have been caused by the depletion of personal resources and poorer adaptation to the permanent conditions of the pandemic [68,69], as well as chronic fatigue syndrome [70]. All limitations and restrictions accumulated over time may cause higher levels of stress and greater health concerns. Further, athletes' worsening economic conditions and uncertain prospects for the future may be significant [71].
Emotional and external stress among athletes was lower during the first, second, and fourth wave than during the pre-pandemic period. However, a very high increase in intrapsychic stress was observed during the fourth wave. Increased intrapsychic stress is a consequence of prolonged negative events, with which the individual has not coped effectively (i.e., has used dysfunctional coping strategies) [30]. Past, present, as well as future anticipated events may be sources of stress [72]. In the context of a prolonged pandemic, this results in an accumulation of perceived stress.
Because of the smaller number of participants surveyed during the third wave of the pandemic, this group was excluded from the analyses to avoid distorting the results of the research.
The long duration of the pandemic has also impacted strategies of coping with stress. On the one hand, athletes have learned which coping strategies are effective. On the other hand, permanent functioning during the pandemic may have modified the employed strategies. To identify changes in coping strategies among athletes, we compared them between the first and the fourth waves of the pandemic. In the Polish athlete subsample, we observed a decrease in the frequency of using emotion-focused strategies. It is assumed that emotion-focused strategies are ineffective in the long term, though they may negate the consequences of stress in some situations [73,74]. In the Romanian athlete subsample, the use of dysfunctional strategies increased in frequency. In the long term, using these coping strategies can be associated with a risk of depression, anxiety, and eating disorders [75,76]. Among Slovak athletes, the frequency of using problem-focused strategies and dysfunctional strategies increased simultaneously. Using problem-focused strategies allows for coping with difficult situations in the most effective way [77].
Our study revealed a significant trend in coping strategy use among athletes. Comparing the frequency of using coping strategies, we observed an increase in using dysfunctional strategies in each country. Aggregated results for individual coping strategies show that, in each country, the frequency of using dysfunctional strategies during the fourth wave of the pandemic was higher than in the first wave. Long-term functioning in stressful situations may reduce personal resources. Thus, seeking easier solutions for a difficult situation, athletes more frequently used dysfunctional strategies. Long-term use of such strategies carried with it a risk of depression and worsened health [78]. However, these strategies can be modified with appropriate psychological intervention [79]. Considering elite athletes, as well as physical education students practicing sports most often, researchers highlight the important role of cognitive and behavioral strategies in coping with the stress generated by the COVID-19 pandemic [80]. It was found that "the sports level depended on the strategies of coping with the stress of the COVID-19 pandemic more strongly than gender".
Regarding individual coping strategies, in the Polish athlete subsample, the frequency of using behavioral disengagement and venting increased, while the frequency of using planning, positive reframing, and humor decreased. The first two of these are dysfunctional strategies. Increasing the frequency of using these strategies decreases the probability of effectively coping with stress. Even if in the short term, they may prove useful in reducing perceived stress, we cannot promote them, given their known long-term effects [30]. Positive reframing and humor are emotion-focused strategies. They are not effective in the long term for the subsequent pandemic waves. Decreasing the frequency of using these strategies decreases the probability of effectively coping with stress and minimizing its effect on wellbeing. The long duration of the pandemic was related to a decrease in the frequency of using planning, a problem-focused strategy. The unpredictability of the pandemic, together with a lack of control over many aspects of life, may have caused a decrease in using this strategy in the perspective of the pandemic's increasing duration.
In the Romanian athlete subsample, we noticed an increase in the frequency of using the individual strategy of behavioral disengagement, which is a dysfunctional strategy. It can be assumed that, similar to Polish athletes, the lack of control over the situation could have caused an increase in the frequency of using this strategy. However, a different pattern was observed among the Slovak athletes. They reported an increase in using active coping, planning (problem-focused strategies), and acceptance (emotion-focused strategy), as well as a decrease in venting (dysfunctional strategy).
Permanent use of dysfunctional strategies is ineffective and is related to a risk for depression [81]. The noticeable increase in the frequency of using these coping strategies by athletes during the fourth wave of the pandemic, together with the increase in intrapsychic stress, should alert coaches and sport psychologists to the ways in which professional athletes modify their use of available coping strategies. Close cooperation with a sports psychologist and coach is essential in order to promote the most effective coping strategies for a given person [82]. Along with medical practitioners, members of the multidisciplinary team should work towards minimizing the strain experienced by athletes [83]. In order to reduce athletes' distress, specialists could use so-called internal techniques (breathing and meditation, self-control techniques) [84]; inner monologue (positive self-talk) to increase self-confidence; analytical relaxation and autogenic training; and self-monitoring of emotional reactions [85]. They could also teach athletes positive conflict resolution strategies and guide them to get involved in motor and mental activities which give them great satisfaction [86]. Not least, specialists can use written emotional disclosure (WED) to support athletes during the COVID-19 pandemic and to promote their mental health [87] and the 4Ds for dealing with distress, an ultra-brief single session, which unifies strategies and exercises for problem-solving, emotion regulation, and for increasing resilience (restoring wellbeing) [88].
The present study has some limitations. The authors relied on self-report measures, which implies possible recall bias and/or socially desirable answers (when talking about explicit evaluations), issues that are well known [89] (however, the large number of athletes tested represents a strength of the study). Further, the results might differ if junior athletes or athletes from other countries were investigated, if athletes practicing a single sport discipline were studied, or if athletes were examined separately according to their level of training or property status (these are relevant questions for future research). Moreover, the Cronbach's α reliability coefficients for the strategies of coping with stress ranged from low to very high, which should also be considered when interpreting the results of this study. Finally, even if there is a reciprocal relationship between stress and anxiety (the two dimensions having a mutual influence on each other), other investigation tools are recommended for anxiety, capturing the link between the anxiety of athletes (state anxiety and/or trait anxiety) and the size of the pandemic in a given country. This can be the subject of further research.
Conclusions
The conclusions of the current study, carried out in three countries, showed that the direct consequences of the pandemic are not related to an increase in perceived stress among athletes. Overall, stress levels during the fourth wave of the pandemic were not higher, in all countries, than during the pre-pandemic period. However, an increase in intrapsychic stress was noticeable between the first two waves and the fourth wave of the pandemic. This research underlines the importance of athletes' experienced stress (which can also influence their anxiety level), capturing the dynamics of perceived emotional tension, external stress, and intrapsychic stress in athletes before and throughout different waves of the COVID-19 pandemic.
Using constructive coping strategies allows for reducing perceived stress and leads to lower stress levels. Coaches and sport psychologists should continuously monitor stress levels among athletes, together with their coping efforts, in order to promote effective coping strategies. As the pandemic may have long-term consequences, it is particularly important to monitor athletes' psychological wellbeing also after its end, in a post-COVID-19 world.
Aerial Visible-to-Infrared Image Translation: Dataset, Evaluation, and Baseline
Aerial visible-to-infrared image translation aims to transfer aerial visible images to their corresponding infrared images, which can effectively generate the infrared images of specific targets. Although some image-to-image translation algorithms have been applied to color-to-thermal natural images and achieved impressive results, they cannot be directly applied to aerial visible-to-infrared image translation due to the substantial differences between natural images and aerial images, including shooting angles, multi-scale targets, and complicated backgrounds. In order to verify the performance of existing image-to-image translation algorithms on aerial scenes as well as advance the development of aerial visible-to-infrared image translation, an Aerial Visible-to-Infrared Image Dataset (AVIID) is created, which is the first specialized dataset for aerial visible-to-infrared image translation and consists of over 3,000 paired visible-infrared images. Over the constructed AVIID, a complete evaluation system is presented to evaluate the generated infrared images from 2 aspects: overall appearance and target quality. In addition, a comprehensive survey of existing image-to-image translation approaches that could be applied to aerial visible-to-infrared image translation is given. We then provide a performance analysis of a set of representative methods under our proposed evaluation system on AVIID, which can serve as baseline results for future work. Finally, we summarize some meaningful conclusions, problems of existing methods, and future research directions to advance state-of-the-art algorithms for aerial visible-to-infrared image translation.
Introduction
With the rapid development of infrared technology, the infrared camera equipped on unmanned aerial vehicles (UAVs) is increasingly applied for aerial photography. Aerial infrared images have been widely used in the military and in industrial, agricultural, and environmental settings, such as moving target detection [1][2][3] and tracking [4][5][6], photovoltaic panel error detection [7][8][9], image registration [10][11][12], and visible-infrared image fusion [13][14][15][16] because of their advantages, including high sensitivity to temperature variation, strong capability to penetrate through the fog, and powerful robustness when encountering the weak light condition.
Due to the high cost of infrared cameras or the limitations of shooting conditions, obtaining many aerial infrared images of some specific targets is challenging. In this case, the mainstream method to obtain aerial infrared images is to employ the simulation software platform for target scene infrared simulation [17][18][19][20][21]. These methods first analyze the target attributes to obtain a simulated 3D model scene and then compute the infrared radiation distribution of different materials in the scene according to the infrared radiation theory. Next, the radiation attenuation of the infrared radiation to the detector is calculated by the atmospheric transmission model. The imaging characteristics of the imaging sensor are then simulated and added to the infrared radiation distribution. Finally, the simulated scene is gray-scaled to produce the final infrared image.
Compared with actual photography, the use of infrared simulation software to simulate aerial infrared images of targets can significantly save manpower, material resources, and financial resources. At the same time, simulated infrared images for various periods and different bands can be obtained by adjusting the parameters of the infrared radiation distribution model and the imaging sensor. However, these methods have problems such as the low simulation fidelity of the target temperature model, huge numbers of intermediate parameters, the high coupling degree of each subsystem, and complicated processing procedures, which make them unsuitable for quickly obtaining many aerial infrared images. In this paper, we propose a new task called aerial visible-to-infrared image translation, which aims to generate aerial infrared images from visible images and has 3 main advantages:
• Due to the easy acquisition and lower photography cost of aerial visible images, aerial visible images can be translated into corresponding infrared images in a fast, efficient, and low-cost manner.
• Additional modality information can be provided by the aerial visible images to improve the performance of the aerial infrared images in downstream tasks.
• The translated aerial infrared and corresponding visible images can provide paired data support for cross-modality and domain adaptation tasks.
Though translating aerial visible images into corresponding infrared images has the advantage in terms of efficiency and speed compared to actually taking photography and infrared simulation, 3 significant issues seriously limit the development of aerial visible-to-infrared image translation.
• Lacking an available dataset for aerial visible-to-infrared image translation experiments: So far, most datasets consist of color images and lack paired infrared images. Although there are several color-to-thermal datasets [22,23], they are all natural images, not taken from an aerial perspective, without diverse targets and complicated backgrounds like aerial images. Therefore, to the best of our knowledge, there are currently no available datasets for aerial visible-to-infrared image translation.
• Lacking a survey of methods that could apply to aerial visible-to-infrared image translation: The translation of aerial visible-to-infrared images can be considered as cross-modality learning, which makes it challenging to model the mapping. As far as we know, no specific approaches have been proposed to solve this problem. Therefore, a survey of methods that can be effectively applied to aerial visible-to-infrared image translation remains to be clarified.
• Lacking a complete evaluation system to evaluate the quality of generated images: Existing metrics for evaluating the similarity between images are mainly traditional perceptual indicators, such as MSE, peak signal-to-noise ratio (PSNR), and SSIM. However, they are shallow functions that fail to account for many nuances of human perception. In addition, evaluating the quality of the generated images only from the similarity of the appearance is obviously unreasonable. A more complete evaluation system to evaluate the quality of generated images is necessary.
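For concreteness, the traditional pixel-level indicators mentioned here are simple closed-form functions of the two images. Below is a minimal NumPy sketch of MSE and PSNR (SSIM and learned perceptual metrics require considerably more machinery); the toy image patch is hypothetical.

```python
import numpy as np

def mse(ref, gen):
    """Mean squared error between two images of the same shape."""
    return float(np.mean((ref.astype(np.float64) - gen.astype(np.float64)) ** 2))

def psnr(ref, gen, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means more similar."""
    err = mse(ref, gen)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / err)

# Toy example: a 'real' infrared patch vs. a 'generated' one with one wrong pixel.
real = np.zeros((8, 8), dtype=np.uint8)
fake = real.copy()
fake[0, 0] = 64
```

Because PSNR is just a log-rescaled MSE, it inherits MSE's insensitivity to structural and perceptual differences, which motivates the evaluation system proposed in this paper.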
In order to address the above issues and fully advance the development of aerial visible-to-infrared image translation, we propose a new specific dataset for aerial visible-to-infrared image translation, called AVIID (Aerial Visible-to-Infrared Image Dataset), consisting of over 3,000 paired visible-infrared images. The goal of AVIID is to provide researchers with an available data resource to evaluate and improve state-of-the-art algorithms. Aerial visible-to-infrared image translation aims to learn a mapping between 2 image domains, which can be regarded as a cross-modality image-to-image translation problem. Recently, image-to-image translation algorithms [16,[24][25][26][27][28][29][30][31][32][33][34][35][36][37][38][39][40][41][42] among color image domains with the application of deep convolutional neural networks (CNNs) [43][44][45] and generative adversarial networks (GANs) [46][47][48][49][50] have made significant progress in a wide range of tasks, including style transfer [40,51,52], image inpainting [53], colorization [54], super-resolution [55][56][57][58], dehazing [59,60], and denoising [61,62]. Some researchers have applied image-to-image translation approaches to color-to-thermal image translation tasks [22,63,64] and achieved impressive results. For example, Kniaz and Knyaz [65] achieve multi-spectral person re-identification by using a GAN for color-to-thermal image translation. In this paper, we attempt to apply these image-to-image translation approaches to aerial visible-to-infrared image translation and make a comprehensive survey of these methods. In addition, we propose a complete evaluation system to evaluate the generated infrared images in terms of overall appearance and target quality. The overall appearance aims to determine the similarity between the generated infrared images and real ones from visual perception. The target quality reflects the quality of the targets in the generated infrared images, which is important for some downstream tasks such as object detection and tracking. We further evaluate several representative image-to-image translation methods on AVIID under this proposed complete evaluation system, and the results can be seen as a baseline to advance the development of aerial visible-to-infrared image translation.
In summary, the major contributions of this paper are as follows:
• The first specific dataset for aerial visible-to-infrared image translation, AVIID, is constructed, which provides researchers with an available data resource to evaluate and advance state-of-the-art algorithms.
• A comprehensive survey of up-to-date image-to-image translation algorithms that could be applied to aerial visible-to-infrared image translation is provided to promote the development of this field.
• A complete evaluation system is presented to evaluate the generated infrared images in terms of overall appearance and target quality. Several representative image-to-image translation methods are evaluated on AVIID under our proposed complete evaluation system. These results can be regarded as a baseline for future work.
• Some meaningful conclusions, problems of existing methods, and future research directions are summarized to advance state-of-the-art algorithms for aerial visible-to-infrared image translation.
The rest of this paper is organized as follows. We first provide a comprehensive survey of image-to-image translation methods that can be applied to aerial visible-to-infrared image translation in the "A Survey of Methods for Aerial Visible-to-Infrared Image Translation" section. The details of AVIID are then described in the "A Specific Dataset for Aerial Visible-to-Infrared Image Translation" section. In the "Experiments and Results" section, the description of our proposed complete evaluation system and baseline results of representative methods on AVIID are given. Finally, the conclusion of our work is given in the "Conclusion" section.
A Survey of Methods for Aerial Visible-to-Infrared Image Translation
In this section, we make a comprehensive survey of image-to-image translation methods that could be applied to aerial visible-to-infrared translation. Based on whether the method depends on paired images or not, we classify these methods into supervised and unsupervised categories. Supervised methods aim to learn a pixel-level mapping from the source domain to the target domain with paired data for training, which limits their applications. In contrast, unsupervised methods only need 2 images from 2 different domains as training data to achieve image-to-image translation by adopting additional constraints. According to whether multi-modal outputs are generated from one single input image or not, these unsupervised methods can be further divided into 2 types: one-to-one (single-modal) and one-to-many (multi-modal). In addition, depending on the mapping relationship between the source and target domains, one-to-one unsupervised approaches can be further classified into one-sided and 2-sided methods. One-sided unsupervised image-to-image translation methods can only translate images from the source domain to the target domain. In contrast, 2-sided ones can achieve a bidirectional mapping between the source domain and the target domain. Figure 1 shows an overview of these methods. In what follows, we will introduce each category of these methods in detail.
Supervised image-to-image translation methods
Supervised image-to-image translation methods aim to learn a pixel-level mapping to achieve image translation from one domain to another based on paired data. Paired data means the training data are paired, and every image from the source domain has a corresponding image in the target domain. Among them, Pix2Pix is the first method to achieve task-agnostic image translation, which uses a conditional generative adversarial network (cGAN) [21] to learn a mapping from input images to output images. Based on the framework of Pix2Pix, BicycleGAN adds a variational autoencoder (VAE) to the cGAN to generate multiple outputs from a single input image. Additional details of Pix2Pix and BicycleGAN are as follows.
Pix2Pix [24]: Pix2Pix investigates cGANs, a variant of GAN, as a general solution to image-to-image translation problems.
The key idea of GAN is to simultaneously train the discriminator and the generator: the discriminator is designed to distinguish between the real data and the generated samples, while the generator aims to generate fake samples that are as real as possible in order to convince the discriminator that the fake samples come from the real data. Given the paired image data (x, y), where x is from the source domain X and y is from the target domain Y, the cGAN aims to learn a mapping from the image x with a random latent vector z to the image y, y = G(x, z). The generator G is trained to produce outputs that cannot be distinguished from the "real" images in the target domain by an adversarial discriminator D, which is trained to detect the generator's "fakes" as accurately as possible. The full objective of the cGAN can be expressed as
$$\mathcal{L}_{cGAN}(G,D)=\mathbb{E}_{x,y}\left[\log D(x,y)\right]+\mathbb{E}_{x,z}\left[\log\left(1-D(x,G(x,z))\right)\right]$$
where G attempts to minimize this objective against an adversarial D that tries to maximize it. In addition, Pix2Pix adds an additional L1 distance constraint to the generator to make the translated image visually similar to its corresponding ground truth, which can be formulated as
$$\mathcal{L}_{L1}(G)=\mathbb{E}_{x,y,z}\left[\left\|y-G(x,z)\right\|_{1}\right]$$
Therefore, the final objective of Pix2Pix can be formulated as
$$G^{*}=\arg\min_{G}\max_{D}\mathcal{L}_{cGAN}(G,D)+\lambda\mathcal{L}_{L1}(G)$$
where λ is a hyperparameter.
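As a concrete illustration, the generator side of this objective can be sketched in a few lines of NumPy. This is our own minimal sketch, not code from the Pix2Pix authors: the function name is hypothetical, and the adversarial term uses the non-saturating form −log D(x, G(x, z)).

```python
import numpy as np

def pix2pix_generator_loss(d_fake, y, y_hat, lam=100.0):
    """Sketch of the Pix2Pix generator objective: an adversarial term
    -log D(x, G(x, z)) (non-saturating form, assumed here) plus
    lambda times the L1 distance between ground truth and output."""
    adv = -np.mean(np.log(d_fake + 1e-12))  # generator pushes D's score toward 1
    l1 = np.mean(np.abs(y - y_hat))         # pixel-level similarity constraint
    return adv + lam * l1

# A "perfect" generator (D fully fooled, output equals ground truth) gives ~0 loss.
loss = pix2pix_generator_loss(np.array([1.0]), np.zeros((4, 4)), np.zeros((4, 4)))
```

In practice the L1 weight λ dominates (Pix2Pix uses λ = 100), which is what steers the output toward the paired ground truth.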
BicycleGAN [26]: Though Pix2Pix has achieved promising results for image-to-image translation, it is prone to suffer from mode collapse, resulting in generating very similar images. To address this issue, BicycleGAN aims to enhance the relationship between the output and the latent code, which helps to produce more diverse results. For the paired image data (x, y), BicycleGAN first maps the target domain image y to a specific latent code z by a VAE encoder, z = E(y). The latent code is encoded from the real data in the training process, but a random latent code may not yield realistic images at testing time. To avoid this, an additional KL loss is used to align the distribution of the latent code with the standard normal distribution. Then, BicycleGAN combines the latent code with the input image to translate it from the source domain to the target domain by a cGAN as in Pix2Pix, ŷ = G(E(y), x). The translated image ŷ is not necessarily required to be close to the ground truth, which may cause mode collapse, but it must be realistic. To achieve this, BicycleGAN recovers the latent code by the VAE encoder, ẑ = E(ŷ), and utilizes an L1 loss to keep the consistency between the recovered and the original latent codes, which can be expressed as
$$\mathcal{L}_{latent}=\mathbb{E}\left[\left\|z-\hat{z}\right\|_{1}\right]=\mathbb{E}\left[\left\|E(y)-E\left(G(E(y),x)\right)\right\|_{1}\right]$$
One-to-one
Unsupervised image-to-image translation algorithms aim to learn a joint distribution by using images from the marginal distributions in individual domains. Since there exists an infinite set of possible joint distributions that can arrive at the marginal distributions, it is impossible to guarantee that a particular input and output correspond in a meaningful way without additional assumptions or constraints. As a consequence, various constraints have been proposed to achieve unsupervised image-to-image translation.
DistanceGAN assumes that the distance between 2 images in the source domain should be preserved after mapping them to the target domain. GCGAN develops a geometry-consistency constraint from the special property of images that simple geometric transformations do not change their semantic structure. CUT proposes a contrastive learning-based constraint to maximize the mutual information between the input and the output. These methods can be seen as one-sided unsupervised image-to-image translation because the mapping from the source domain to the target domain is unidirectional. In addition, some methods construct various specific constraints to achieve 2-sided unsupervised image-to-image translation. For example, CycleGAN, DualGAN, and DiscoGAN employ the cycle-consistency constraint, which aims to transfer an image in the source domain to the target domain such that this translated image can also be transferred back to the source domain. UNIT makes a shared-latent space assumption that also implies the cycle-consistency constraint. DCLGAN takes advantage of CycleGAN and CUT, employing the idea of mutual information maximization to enable 2-sided unsupervised image-to-image translation. More details of these methods are as follows.
DistanceGAN [37]: Let x ∈ X denote a random image from the source domain, and let y ∈ Y represent a random target domain image. Unsupervised training data pairs are expressed as (x_i, y_j), i = 1, 2, …, N, where N is the size of the dataset. DistanceGAN presents a distance-preserving mapping, which enforces that the distance between images in the source domain is preserved after mapping them to the target domain and can be formulated as
$$\mathcal{L}_{dist}=\sum_{i\neq j}\left|d\left(x_i,x_j\right)-\left(a\,d\left(G_{XY}(x_i),G_{XY}(x_j)\right)+b\right)\right|$$
where d(·) is a predefined metric function to measure the distance between 2 samples, a and b are the linear coefficient and bias, and G_XY(·) is the generator.
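For a single pair of images, the distance-preservation penalty reduces to a one-line check. The sketch below is ours, under stated assumptions: mean L1 plays the role of d(·), and a, b default to the identity relation.

```python
import numpy as np

def distance_preservation(xi, xj, g_xi, g_xj, a=1.0, b=0.0):
    """Sketch of DistanceGAN's constraint for one pair: the distance between
    two source images should match a linear function of the distance between
    their translations. d(.) is taken as mean L1 here (an assumption)."""
    d = lambda u, v: np.mean(np.abs(u - v))
    return abs(d(xi, xj) - (a * d(g_xi, g_xj) + b))

# An identity "translator" preserves distances exactly, so the penalty is 0.
xi, xj = np.zeros((8, 8)), np.ones((8, 8))
penalty = distance_preservation(xi, xj, xi, xj)
```

A translator that collapses distinct inputs to one output maximizes this penalty, which is why the constraint discourages mode collapse on the output side.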
GCGAN [36]: GCGAN presents a geometry-consistency constraint in which a given geometric transformation of the input images should be preserved after transferring them to the target domain. In detail, given a random image x from the source domain X, a specific geometric transformation f(·), and 2 related translators G_XY and G_{X̃Ỹ}, the geometry-consistency constraint can be expressed as
$$\mathcal{L}_{geo}=\mathbb{E}_{x}\left[\left\|G_{XY}(x)-f^{-1}\left(G_{\tilde{X}\tilde{Y}}(f(x))\right)\right\|_{1}\right]$$
where f^{-1}(·) is the inverse of the transformation f(·).
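The constraint can be made concrete with a 90-degree rotation as the transformation f. This is a minimal sketch of our own (mean L1 assumed as the penalty), not GCGAN's training code:

```python
import numpy as np

def geometry_consistency(g_xy, g_xy_tilde, x, f, f_inv):
    """Sketch of GCGAN's geometry-consistency penalty:
    || G_XY(x) - f^{-1}(G_X~Y~(f(x))) ||_1 (mean L1, an assumption)."""
    return np.mean(np.abs(g_xy(x) - f_inv(g_xy_tilde(f(x)))))

# With f a 90-degree rotation and both translators the identity,
# the constraint is satisfied exactly.
rot = lambda img: np.rot90(img, k=1)
rot_inv = lambda img: np.rot90(img, k=-1)
x = np.arange(16, dtype=float).reshape(4, 4)
penalty = geometry_consistency(lambda v: v, lambda v: v, x, rot, rot_inv)
```

Any translator pair that commutes with f in this sense incurs zero penalty, which is exactly the "geometry is preserved" property the method enforces.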
CUT [40]: CUT proposes a novel constraint to maximize the mutual information between corresponding input and output patches, based on the intuition that each patch in the output should reflect the content of the counterpart patch in the input and be independent of the domain. To achieve this, CUT uses a type of contrastive learning loss function, the InfoNCE loss [66], which aims to learn an embedding that associates a patch of the output v with its corresponding patch of the input v⁺, while separating it from the other N noncorresponding patches of the input v⁻, which can be formulated as
$$\ell\left(v,v^{+},v^{-}\right)=-\log\frac{\exp\left(v\cdot v^{+}/\tau\right)}{\exp\left(v\cdot v^{+}/\tau\right)+\sum_{n=1}^{N}\exp\left(v\cdot v_{n}^{-}/\tau\right)}$$
where τ is a temperature hyperparameter. Intuitively, this loss can be seen as a classifier that attempts to classify v as v⁺.
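The InfoNCE term above is easy to reproduce directly; the sketch below uses plain dot products as similarities over toy 2-dimensional embeddings (our simplification, not CUT's patch-sampling pipeline):

```python
import numpy as np

def info_nce(v, v_pos, v_negs, tau=0.07):
    """Sketch of the InfoNCE patch loss used by CUT: a cross-entropy that
    classifies the query embedding v as its positive v_pos against the
    N negative embeddings in v_negs (one per row)."""
    pos = np.exp(np.dot(v, v_pos) / tau)
    neg = np.sum(np.exp(v_negs @ v / tau))
    return -np.log(pos / (pos + neg))

# A query aligned with its positive and orthogonal to the negative
# yields a near-zero loss; swapping the roles makes the loss large.
v = np.array([1.0, 0.0])
loss = info_nce(v, np.array([1.0, 0.0]), np.array([[0.0, 1.0]]))
```

The low temperature τ sharpens the softmax, so even small similarity gaps between the positive and the negatives translate into a strong gradient signal.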
CycleGAN [51]/DualGAN [29]/DiscoGAN [67]: CycleGAN, DualGAN, and DiscoGAN propose the cycle-consistency constraint to achieve 2-sided unsupervised image-to-image translation. These methods construct 2 translators to learn 2 mappings simultaneously by transferring an image to the target domain and back, maintaining the fidelity of the input and the reconstructed image through the cycle-consistency constraint. Mathematically, for an image x from the source domain X, the translator G_XY translates it to the target domain Y, and then this translated image is transferred back to the source domain by the translator G_YX; the cycle-consistency constraint is used to preserve the semantic structure between the reconstructed image and the input. For the domain Y, it is an inverse process, and the whole objective of the cycle-consistency constraint can be expressed as
$$\mathcal{L}_{cyc}=\mathbb{E}_{x}\left[\left\|G_{YX}\left(G_{XY}(x)\right)-x\right\|_{1}\right]+\mathbb{E}_{y}\left[\left\|G_{XY}\left(G_{YX}(y)\right)-y\right\|_{1}\right]$$
UNIT [25]: UNIT presents a shared-latent space assumption, which assumes that a pair of corresponding images from different domains can be mapped to the same latent representation in a shared-latent space. Consequently, the latent code can be computed from each of the images, and these 2 images can also be recovered from the shared latent code. Based on this assumption, UNIT proposes a 2-sided unsupervised image-to-image translation framework consisting of 6 sub-networks, including 2 domain image encoders E_X and E_Y, 2 domain generators G_X and G_Y, and 2 domain discriminators D_X and D_Y. For any given pair of image data (x, y), the shared latent code can be obtained by the encoders, z = E_X(x) = E_Y(y), and conversely, the images can be recovered from this latent code, x = G_X(E_Y(y)) and y = G_Y(E_X(x)). In this way, images from the source and target domains can be mutually transferred. However, to achieve this, a necessary condition is the cycle-consistency constraint:
$$x=G_X\left(E_Y\left(G_Y\left(E_X(x)\right)\right)\right),\quad y=G_Y\left(E_X\left(G_X\left(E_Y(y)\right)\right)\right)$$
Therefore, from this perspective, the shared-latent space assumption also implies the cycle-consistency constraint.
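The cycle-consistency constraint shared by these methods can be sketched as a pair of L1 reconstruction terms. The code below is our minimal illustration with toy "translators", not an implementation of any of the cited methods:

```python
import numpy as np

def cycle_consistency(g_xy, g_yx, x, y):
    """Sketch of the cycle-consistency loss: an image sent across domains
    and back should reconstruct itself (mean L1, both directions)."""
    forward = np.mean(np.abs(g_yx(g_xy(x)) - x))   # X -> Y -> X
    backward = np.mean(np.abs(g_xy(g_yx(y)) - y))  # Y -> X -> Y
    return forward + backward

# Mutually inverse "translators" (here +3 and -3) satisfy the cycle exactly.
x, y = np.zeros((4, 4)), np.ones((4, 4))
loss = cycle_consistency(lambda v: v + 3.0, lambda v: v - 3.0, x, y)
```

Breaking the inverse relationship between the two translators immediately produces a positive loss, which is the signal that pushes G_XY and G_YX toward a bijective mapping.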
DCLGAN [34]: Although the cycle-consistency constraint can ensure that the translated images have similar semantic information compared to the target domain, it enforces the relationship between the 2 domains to be bijective, which is too restrictive. At the same time, CUT has demonstrated the effectiveness of contrastive learning in one-sided unsupervised image-to-image translation. However, one embedding for 2 separate domains may not capture the domain gap. To solve this, DCLGAN takes advantage of CycleGAN and CUT to propose a novel method based on contrastive learning and a dual learning setting to enable an efficient 2-sided domain mapping with unpaired data.
One-to-many
Though several methods have enabled unpaired image-to-image translation, they fail to generate multi-modal results. An effective way to handle multi-modal image-to-image translation is to perform image translation conditioned on the input image and a specific latent code. To achieve this, DRIT/DRIT++ and MUNIT assume that the image representation can be disentangled into 2 spaces: a domain-invariant content space capturing shared information across domains and a domain-specific style space. Then, to achieve translation, they recombine the content information with a random style feature sampled from the style space of the target domain. To improve the diversity, MSGAN presents a mode-seeking regularization term that maximizes the ratio of the distance between translated images with respect to the distance between latent vectors. DSMAP leverages domain-specific mappings for remapping latent features in the shared content space to domain-specific content spaces, which is conducive to more challenging style transfer tasks that require more attention on local and structural-semantic correspondences. These methods are described in detail as follows.
DRIT [52]/DRIT++ [35]/MUNIT [27]: DRIT/DRIT++ and MUNIT assume that images from 2 domains can be decomposed into a domain-invariant content space and a domain-specific style space. The domain-invariant content space captures the shared information across the 2 domains, while the style space captures domain-specific attributes. To transfer an image from the source domain to the target domain, they recombine its content code with a random style code sampled from the target domain space. Mathematically, for given unpaired image data (x, y) randomly sampled from the source domain X and the target domain Y, DRIT/DRIT++ and MUNIT first use the content encoders E_c^X, E_c^Y and style encoders E_s^X, E_s^Y to disentangle the images into domain-invariant content codes and domain-specific style codes, x_c = E_c^X(x), y_c = E_c^Y(y), x_s = E_s^X(x), and y_s = E_s^Y(y). Then, they perform a cross-domain mapping to obtain translated images x̃, ỹ by feeding the content code recombined with the specific style code to the generator, ỹ = G_XY(E_c^X(x), E_s^Y(y)) and x̃ = G_YX(E_c^Y(y), E_s^X(x)), where G_YX and G_XY are cross-domain generators. After that, they apply the above cross-domain mapping one more time and leverage the cycle-consistency constraint to enforce the consistency between the reconstructed images and the original input images, which can be formulated as
$$\mathcal{L}_{cc}=\mathbb{E}\left[\left\|G_{YX}\left(E_c^{Y}(\tilde{y}),E_s^{X}(\tilde{x})\right)-x\right\|_{1}\right]+\mathbb{E}\left[\left\|G_{XY}\left(E_c^{X}(\tilde{x}),E_s^{Y}(\tilde{y})\right)-y\right\|_{1}\right]$$
MSGAN [68]: Existing cGANs tend to focus on conditional input images but ignore the random latent vectors that significantly contribute to the diversity of outputs, and thus suffer from mode collapse. To address this issue and improve the diversity of the generated images, MSGAN proposes a simple yet effective mode-seeking regularization term, which aims to maximize the ratio of the distance between generated images with respect to the distance between the corresponding latent vectors. Consider an input image x from the domain X, 2 latent vectors z_1, z_2 from the latent space Z, and a cross-domain generator G_XY that translates the input image with each latent vector to the target domain. The mode-seeking regularization term directly maximizes the ratio of the distance between the translated images to the distance between the latent vectors, which can be expressed as
$$\mathcal{L}_{ms}=\max_{G_{XY}}\frac{d\left(G_{XY}(x,z_1),G_{XY}(x,z_2)\right)}{d\left(z_1,z_2\right)}$$
where d(·) denotes the predefined distance metric.
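The mode-seeking ratio itself is a two-line computation; below is our own sketch with mean L1 assumed for d(·) and a small epsilon added for numerical safety:

```python
import numpy as np

def mode_seeking_ratio(img1, img2, z1, z2, eps=1e-12):
    """Sketch of MSGAN's mode-seeking term: the ratio of the distance between
    two translated images to the distance between their latent vectors.
    The generator is trained to maximize this ratio (mean L1 assumed for d)."""
    d = lambda u, v: np.mean(np.abs(u - v))
    return d(img1, img2) / (d(z1, z2) + eps)

# If two distinct latent codes collapse to the same image, the ratio is 0,
# which is exactly the mode-collapse case the regularizer penalizes.
z1, z2 = np.array([0.0, 0.0]), np.array([1.0, 1.0])
ratio = mode_seeking_ratio(np.ones((4, 4)), np.ones((4, 4)), z1, z2)
```

Maximizing this ratio forces far-apart latent codes to produce far-apart images, directly rewarding output diversity.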
DSMAP [39]: Previous multi-modal unsupervised image-to-image translation methods often assume that the image representation can be decomposed into a shared domain-invariant content space and a domain-specific style space. However, this content space only considers the shared information across domains but ignores the relationship between content and style, which may weaken the representation of content. To address this issue, DSMAP leverages 2 additional domain-specific mapping functions to remap the content features in the shared domain-invariant content space into domain-specific content spaces for the different domains, which can be expressed as
$$c_{X\to Y}=\Phi_{X\to Y}\left(E_c^{X}(x)\right),\quad c_{Y\to X}=\Phi_{Y\to X}\left(E_c^{Y}(y)\right)$$
where x, y are unpaired image data randomly sampled from the domain X and the domain Y, Φ_{X→Y}, Φ_{Y→X} are the domain-specific mapping functions, and E_c^X, E_c^Y are the domain-invariant encoders. By these domain-specific mapping functions, the features in the shared content space can be aligned with the target domain to encode the domain-specific content features and thus improve the content representation ability for translation.
A Specific Dataset for Aerial Visible-to-Infrared Image Translation
In this section, we introduce AVIID, a specific dataset for aerial visible-to-infrared image translation, in detail. AVIID consists of paired aerial visible and infrared images that are taken by a dual-light camera equipped on a UAV. Figure 2 shows the dual-light camera and the UAV. Table 1 describes the detailed parameters of the dual-light camera. Depending on the shooting time, scenarios, and conditions of photography, we further divide AVIID into 3 subdatasets named AVIID-1, AVIID-2, and AVIID-3, respectively. Table 2 shows the overall comparison of the 3 subdatasets, and their details are described in the following.
AVIID-1
AVIID-1 contains 993 pairs of paired visible-infrared images with an image size of 434 × 434. The scenes of AVIID-1 are roads, and the targets in the images are common vehicles, including cars, buses, vans, and trucks. These images are taken between 9 a.m. and 12 p.m. with temperatures ranging from 28 ∘C to 32 ∘C. When taking images, the height of the UAV is about 15 m, the distance from the road is about 90 m, and the shooting angle of the dual-light camera is 90∘ horizontally. The scenarios in these images are very similar, mainly including various cars, trees beside the road, and houses in the distance. Therefore, using this subdataset for aerial visible-to-infrared image translation is relatively simple. Figure 3 shows some examples of AVIID-1.
AVIID-2
AVIID-2 contains 1,090 pairs of paired visible-infrared images with an image size of 434 × 434. The shooting conditions and scenes of AVIID-2 are the same as AVIID-1, except that this subdataset is taken from 8 p.m. to 10 p.m., and the temperatures are between 26 ∘C and 28 ∘C. The images of AVIID-2 are taken under low-light conditions, resulting in much noise in the images, and even blurry targets and backgrounds, which is challenging for aerial visible-to-infrared translation compared with AVIID-1. Some examples of AVIID-2 can be seen in Fig. 4.
AVIID-3
AVIID-3 contains 1,280 pairs of paired visible-infrared images with an image size of 512 × 512. These images are taken by the UAV at 3 different heights of about 50 m, 100 m, and 150 m, and 2 different shooting angles of 45∘ and 60∘ vertically. The shooting time is mainly from 2 p.m. to 5 p.m., and the temperatures are between 30 ∘C and 34 ∘C. Compared with AVIID-1 and AVIID-2, this dataset contains more types of vehicles and numerous targets of multiple densities, viewpoints, and scales. In addition, AVIID-3 is collected in various scenarios with more complicated backgrounds, including roads, bridges across rivers, parking lots, and streets of residential communities. Therefore, this dataset is more challenging for aerial visible-to-infrared image translation and can be better used to evaluate the performance of different methods. Some figures of AVIID-3 are displayed in Fig. 5.
Experiments and Results
In this section, we evaluate some representative image-to-image translation methods on AVIID. First, we present our experiment settings, including dataset usage, baseline methods, and training and testing procedure details. Then, our proposed complete evaluation system, which evaluates generated images from 2 aspects, overall appearance and target quality, is introduced in detail. Finally, the baseline results are given for future work.
Settings
We conduct experiments on all 3 subdatasets and set the ratio of the training set to 50% and 80%, respectively, with the remaining data used for testing. We select 10 representative methods as baselines for our experiments: 2 supervised methods, including Pix2Pix and BicycleGAN, and 8 unsupervised methods, including GCGAN, CUT, CycleGAN, UNIT, DCLGAN, MUNIT, DRIT, and MSGAN. At training time, every image is first resized to 286 × 286, then randomly cropped to 256 × 256, and finally horizontally flipped with a probability of 0.5 for data augmentation. To train Pix2Pix, BicycleGAN, GCGAN, CUT, CycleGAN, and DCLGAN, we use the Adam optimizer with a learning rate of 0.0002 and a batch size of 4 for 1,000 epochs on an NVIDIA RTX3090. For DRIT and MSGAN, the whole networks are also optimized by the Adam optimizer with a learning rate of 0.0001 for 1,200 epochs on a GTX1080Ti, and the batch size is also set to 4. With respect to UNIT and MUNIT, we use the Adam optimizer to train them for 200,000 iterations on an NVIDIA RTX3090; the learning rate is 0.0001, the batch size is 4, and the weight decay is set to 0.0001. In the testing procedure, the input image is resized to 256 × 256 without any data augmentation.
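The augmentation pipeline described above (resize to 286, random-crop to 256, flip with p = 0.5) can be sketched in NumPy as follows; this is our own illustrative sketch and assumes the resize step has already been applied:

```python
import numpy as np

def train_augment(img, rng):
    """Sketch of the training-time augmentation described above, assuming the
    input has already been resized to 286 x 286 x 3: random-crop a 256 x 256
    patch, then flip it horizontally with probability 0.5."""
    top = int(rng.integers(0, 286 - 256 + 1))
    left = int(rng.integers(0, 286 - 256 + 1))
    patch = img[top:top + 256, left:left + 256]
    if rng.random() < 0.5:
        patch = patch[:, ::-1]  # horizontal flip
    return patch

rng = np.random.default_rng(0)
out = train_augment(np.zeros((286, 286, 3)), rng)
```

For paired training (Pix2Pix, BicycleGAN) the same crop offsets and flip decision must be applied to both images of a pair so that pixel correspondence is preserved.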
Overall appearance evaluation
In order to evaluate the overall appearance quality of the generated images, we adopt the most widely used traditional perceptual metrics, including MSE, PSNR, and SSIM. The details of these metrics are as follows.
MSE: MSE is used to evaluate the discrepancy between the pixels of the generated image and its ground truth, which can be defined as
$$\mathrm{MSE}=\frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W}\left(y_{i,j}-\hat{y}_{i,j}\right)^{2}$$
where y and ŷ represent the generated image and the corresponding real one, and H and W are the height and width of the image, respectively.
PSNR: PSNR aims to measure the degree of distortion of the generated image with respect to its corresponding ground truth, which can be expressed as
$$\mathrm{PSNR}=10\log_{10}\frac{\max(\hat{y})^{2}}{\mathrm{MSE}}$$
where max(ŷ) means the maximum pixel value of the real image. A higher PSNR indicates a smaller distortion of the generated image.
SSIM: SSIM estimates the structural similarity between the generated image and the real image, which can be formulated as
$$\mathrm{SSIM}=\frac{\left(2\mu_{y}\mu_{\hat{y}}+c_{1}\right)\left(2\sigma_{y\hat{y}}+c_{2}\right)}{\left(\mu_{y}^{2}+\mu_{\hat{y}}^{2}+c_{1}\right)\left(\sigma_{y}^{2}+\sigma_{\hat{y}}^{2}+c_{2}\right)}$$
where c_1 and c_2 are constants, μ_y, μ_ŷ and σ_y^2, σ_ŷ^2 are the means and variances of the generated image and the ground truth, respectively, and σ_yŷ is their covariance. A higher SSIM means the generated image is more similar to its corresponding real image.
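MSE and PSNR follow directly from their definitions; SSIM is omitted from this short sketch because its statistics are conventionally computed over local sliding windows rather than globally. The code below is a minimal NumPy illustration, assuming 8-bit images with a dynamic range of 255:

```python
import numpy as np

def mse(y, y_hat):
    """Mean squared error between generated image y and real image y_hat."""
    y = np.asarray(y, dtype=float)
    y_hat = np.asarray(y_hat, dtype=float)
    return np.mean((y - y_hat) ** 2)

def psnr(y, y_hat, max_val=255.0):
    """PSNR in dB; max_val stands in for the maximum pixel value."""
    m = mse(y, y_hat)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)

# A constant offset of the full dynamic range gives MSE = 255^2, i.e. PSNR = 0 dB.
zero_db = psnr(np.zeros((8, 8)), np.full((8, 8), 255.0))
```

In practice, library implementations (e.g., scikit-image) should be preferred for SSIM, since window size and Gaussian weighting materially affect the reported score.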
Though MSE, PSNR, and SSIM are the most widely used traditional perceptual metrics, they are relatively shallow functions and fail to account for many nuances of human perception. In recent years, using the deep features of a deep CNN as a perceptual metric has been demonstrated to be effective and more consistent with human perceptual judgment. Therefore, to more accurately evaluate the quality of the generated images, we adopt 3 CNN-based perceptual metrics, including FID [69], KID [70], and LPIPS [71]. More details of FID, KID, and LPIPS are as follows.
FID: FID, the Fréchet Inception Distance, measures the Fréchet distance between Gaussian distributions fitted to the Inception features of the generated and real images; a lower FID indicates better performance. KID: KID, the Kernel Inception Distance, is a metric similar to FID, defined as the squared MMD [15] between Inception representations, and has a simple unbiased estimator. Correspondingly, a lower KID means better performance. LPIPS: LPIPS is a CNN-based perceptual metric and has been demonstrated to coincide closely with human judgment. It can be computed as a weighted L2 distance between the deep CNN features of the generated images and those of their ground truth.
In the testing process, we randomly sample 150 test images and translate them to get the corresponding infrared images for Pix2Pix, BicycleGAN, CycleGAN, GCGAN, UNIT, CUT, and DCLGAN. As for the one-to-many methods, we generate 10 examples per input and randomly select one as the final result. The generated images and their counterpart real ones are used to calculate the metrics mentioned above for each method. We repeat the experiments 5 times and report the average score and standard deviation of each metric.
Target quality evaluation
For aerial infrared images, generating targets that are as real as possible is essential for many tasks, such as object detection and tracking. However, existing perceptual metrics mainly consider the overall appearance of the generated images but ignore the evaluation of the targets in the generated images. To address this issue, we propose a new metric named RmAP, which aims to measure the similarity of the targets between the generated images and the real ones and can be obtained by computing the absolute value of the difference between the mAP on the real and the generated images under the same object detection framework as
$$\mathrm{RmAP}=\left|\mathrm{mAP}_{real}-\mathrm{mAP}_{generated}\right|$$
where mAP is a widely used metric for evaluating the performance of object detection algorithms [72][73][74].
At testing time, we first use 80% of the real aerial infrared images to train 4 object detection models, including Faster RCNN [75], YOLOv3 [76], YOLOv5 [77], and YOLOx [78]. Then, we randomly select 150 generated images with their ground truth for each method and compute the absolute value of their mAP difference on every object detection model with 3 kinds of IOU settings. Similar to the overall appearance evaluation, we also repeat the experiments 5 times and report the average score and standard deviation of RmAP.
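Given the per-set mAP scores produced by a fixed detector, RmAP reduces to a one-line computation. The numeric values below are illustrative only, not results from our tables:

```python
def rmap(map_real, map_generated):
    """RmAP as defined above: the absolute difference between the mAP a fixed
    detector attains on real infrared images and on generated ones."""
    return abs(map_real - map_generated)

# e.g., a detector scoring 0.82 mAP on real images and 0.74 on generated ones.
score = rmap(0.82, 0.74)
```

Note that RmAP is symmetric and close to 0 only when the detector behaves the same on real and generated images, which is the intended notion of target fidelity.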
AVIID-1
Tables 3 and 4 show the means and standard deviations of the overall appearance evaluation metrics under the 50% and 80% training ratios on AVIID-1, respectively. The results show that Pix2Pix performs better than BicycleGAN on both traditional and CNN-based perceptual metrics. DCLGAN and CUT perform similarly, outperforming the other unsupervised methods on all appearance evaluation metrics, while CUT performs slightly worse. These results reveal that contrastive learning constraints can achieve a patch-level alignment by maximizing the mutual information between the corresponding input and output patches, thereby improving the overall appearance quality of generated images. Tables 5 to 8 illustrate the means and standard deviations of the target quality evaluation metric under the 4 object detection models with 3 IOU settings on AVIID-1. The RmAP results indicate that the supervised methods give significantly superior performance compared with the unsupervised ones in terms of target quality, which is contrary to the conclusions drawn from the overall appearance evaluation. This suggests that the pixel-level mapping learned from the paired data is beneficial for generating fine-grained targets, while also indicating that the RmAP metric complements the overall appearance evaluation metrics and thus more effectively evaluates the performance of algorithms. For the unsupervised methods, contrastive learning-based methods do not achieve as excellent a performance in target quality as in overall appearance. Similarly, GCGAN also gives better results on the YOLOv3 model under the 80% training ratio for all IOU settings. The possible reason for this phenomenon may be that the patch-level alignment can be seen as a coarse-grained mapping between the input and output images compared with the pixel-level mapping, which could lead to blurriness and distortion of targets in the generated images. This phenomenon becomes more serious in aerial images, mainly because there often exist many small targets with geometric discrepancies (such as cars and buses in our dataset).
Figures 6 and 7 display some generated images for each method under the 50% and 80% training ratios on AVIID-1, respectively. By comparing these generated examples, we can find that the vehicles generated by DCLGAN and CUT have geometric distortion and blurred edges compared with Pix2Pix, especially CUT, which further confirms our assumption.
AVIID-2
Tables 9 and 10 show the means and standard deviations of the overall appearance evaluation metrics under the 50% and 80% training ratios on AVIID-2, respectively. From the results, we can draw conclusions similar to those of AVIID-1: Pix2Pix performs superiorly to BicycleGAN, and DCLGAN achieves the best performance followed by CUT among the unsupervised methods. It is worth noting that BicycleGAN achieves a much lower performance than Pix2Pix, which is different from AVIID-1. The reason may be that the visible images in AVIID-2 are seriously affected by weak light and noise, resulting in large discrepancies between them and their corresponding infrared images, especially in the backgrounds. As a result, the generator may pay too much attention to the latent vector encoded from the infrared images in the translating process, which leads to the distortion of details in the generated images. In addition, the values of the overall appearance evaluation metrics obtained by each method are significantly lower than those on AVIID-1, indicating that AVIID-2 is more challenging. Tables 11 to 14 illustrate the means and standard deviations of the target quality evaluation metric under the 4 object detection models with 3 IOU settings on AVIID-2. From the RmAP results, we can find that Pix2Pix achieves a better performance than all other methods by a large margin in terms of target quality, which is similar to AVIID-1. As for the unsupervised approaches, DCLGAN achieves superior results on AVIID-2. For example, it gives the best performance on the Faster RCNN, YOLOv3, and YOLOv5 object detection models under the 80% training ratio and a lower RmAP on the Faster RCNN and YOLOv5 under the 50% training ratio when the IOU is set to 0.75.
Figures 8 and 9 display some generated images for each method under 50% and 80% training ratios on AVIID-2, respectively. From these figures, we can see that some generated images have blurred backgrounds and geometric distortion of the targets, which is more severe for the supervised methods. This phenomenon may indicate that pixel-level mapping becomes too strict when the visible images are severely disturbed by weak light and noise, thus degrading the quality of the generated images. In this case, patch-level alignment is less strict than pixel-level mapping; thus, contrastive learning-based methods can better preserve the clarity of backgrounds and the geometry of targets in the generated images.
AVIID-3
Tables 15 and 16 show the means and standard deviations of the overall appearance evaluation metrics under 50% and 80% training ratios on AVIID-3, respectively. From the results, we can find that Pix2Pix still performs better than BicycleGAN, as on AVIID-1 and AVIID-2. However, among the unsupervised methods, GCGAN significantly outperforms DCLGAN, which performed best on AVIID-1 and AVIID-2, under all overall appearance quality metrics. This suggests that a simple geometry-consistency constraint can effectively maintain the geometric shape of the targets (particularly the tiny and dense cars in AVIID) during the translation process, which helps reduce blur and detail distortions in the generated images for varied scenarios with more complicated backgrounds, whereas contrastive learning and the cycle-consistency constraint are too strict. Tables 17 to 20 illustrate the means and standard deviations of the target quality evaluation metric under 4 object detection models with 3 IOU settings on AVIID-3. From the RmAP results, we can see that GCGAN achieves overwhelming superiority in target quality compared with the other unsupervised methods, which further reflects the effectiveness of the geometry-consistency constraint in generating high-quality targets.
Figures 10 and 11 display some generated images for each method under 50% and 80% training ratios on AVIID-3, respectively. From the figures, we can find that GCGAN maintains the geometric shape of targets and reduces distortions and blur, especially in the case of dense cars, which further supports our conclusion.
Conclusion
From the above experimental results and discussion, we can sum up some meaningful conclusions as follows.
• The pixel-level mapping learned from the paired data is beneficial for generating fine-grained targets. Therefore, supervised methods give significantly superior performance in target quality evaluation compared with unsupervised approaches.
• The contrastive learning constraint can be seen as a patch-level mapping that maximizes mutual information between corresponding input and output patches. This patch-level alignment enhances the correspondence of the input and output patches, which helps improve the quality of the generated images, especially under weak light and noisy conditions.
• The geometry-consistency constraint is a simple and effective way to maintain the geometric shape of the targets (particularly tiny and dense targets) during the translating process, which can meaningfully reduce the blur and detail distortions of the generated images in the case of various scenarios with complicated backgrounds.
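The geometry-consistency idea above can be made concrete: a translation network G and a predefined geometric transform f (for example, a 90° rotation) should commute, G(f(x)) ≈ f(G(x)). The following is a minimal NumPy sketch in the spirit of GCGAN, not the original implementation; the function names are ours, and a stand-in callable replaces the deep generator:

```python
import numpy as np

def rot90(img):
    """Predefined geometric transform f: a 90-degree rotation of an HWC image."""
    return np.rot90(img, k=1, axes=(0, 1))

def geometry_consistency_loss(generator, x, transform=rot90):
    """L1 penalty encouraging the generator to commute with the transform:
    translating a transformed input should match transforming the translated
    output, i.e., G(f(x)) ~= f(G(x))."""
    lhs = generator(transform(x))       # translate the rotated input
    rhs = transform(generator(x))       # rotate the translated input
    return float(np.abs(lhs - rhs).mean())
```

A generator that applies the same operation at every pixel commutes with the rotation and incurs zero penalty, while one that treats image locations asymmetrically is penalized; this is what discourages geometric distortion of the targets.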
In addition, several problems with existing methods can be summarized from the experimental results and discussion, as follows.
• Current approaches only consider migrating global styles or attributes onto entire images but ignore the considerable discrepancy between targets and backgrounds in infrared attributes, resulting in unrealistic targets in the generated images.
• Existing methods can only transfer styles or attributes between aerial visible and infrared images without taking into account the different properties of each modality. Consequently, the authenticity of the generated images is poor.
• For aerial images with multi-scale dense targets, complex backgrounds, and diverse scenes, current methods struggle to capture the spatial differences between images, resulting in distortion and blurring of generated targets and backgrounds, significantly reducing the quality of generated images.
The above conclusions can provide meaningful guidance for investigating more efficient methods on more challenging datasets to facilitate the process of aerial visible-to-infrared image translation.
These findings are summarized to advance state-of-the-art algorithms for aerial visible-to-infrared image translation. In addition, several future research directions in this field are analyzed and summarized as follows.
• Current image-to-image translation methods do not consider the imaging mechanism relating visible and infrared images. How to construct reasonable imaging-mechanism constraints to improve the realism of generated infrared images is a future research direction.
• The AVIID dataset proposed in this article consists of aerial remote sensing images taken by an infrared camera mounted on a UAV. Visible-to-infrared image translation on satellite platforms also deserves future research.
• Existing image-to-image translation methods are mainly based on deep CNNs. However, due to limited computational resources, the model parameters cannot grow without bound, so the size of the generated image is limited. Therefore, finding an effective way to transfer these approaches to large-scale areas is necessary.
• The quality of images generated by image-to-image translation methods is highly correlated with the similarity between training and test data. Therefore, improving the transferability and generalizability of these methods is another future research direction.
• The radiation value of thermal images depends strongly on atmospheric conditions, and when infrared images are taken at very high altitudes above the ground, atmospheric compensation is a problem worth solving. Moreover, AVIID and the PyTorch code of these methods can be freely downloaded to advance the process of aerial visible-to-infrared image translation.
Fig. 1. Overview of image-to-image translation methods that could be applied to aerial visible-to-infrared image translation. Each color represents a category.
LPIPS can be formulated as

$$\mathrm{LPIPS}(Y,\hat{Y})=\frac{1}{N}\sum_{n=1}^{N}\sum_{l}\frac{1}{H_{l}W_{l}}\sum_{h,w}\left\|w_{l}\odot\left(y_{l}^{hw}-\hat{y}_{l}^{hw}\right)\right\|_{2}^{2},$$

where Y and Ŷ represent the generated images and the real ones, y_l and ŷ_l are normalized deep features extracted from layer l of the deep CNN (with spatial size H_l × W_l), w_l denotes the weighting parameters, and N is the number of images. We use AlexNet pretrained on ImageNet as the deep feature extractor, and a lower LPIPS score indicates better quality of the generated images. FID: FID is a widely used metric that estimates the distributions of real and generated images through deep features extracted by the last pooling layer of an Inception-V3 model trained on ImageNet and computes the divergence between them, which can be formulated as

$$\mathrm{FID}=\left\|m-\hat{m}\right\|_{2}^{2}+\mathrm{Tr}\!\left(C+\hat{C}-2\left(C\hat{C}\right)^{1/2}\right),$$

where m and m̂ indicate the means of the deep features of the real and generated images, C and Ĉ are the corresponding covariance matrices, and Tr(·) is the trace operation. Intuitively, if the generated images are similar to the real ones, they should have lower FID values.
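The FID computation reduces to a few matrix operations once the deep features are extracted. Below is a minimal NumPy sketch (the function name is ours, and the trace of the matrix square root is computed from the eigenvalues of C·Ĉ rather than with an explicit matrix square root; in practice the inputs would be Inception-V3 pool features):

```python
import numpy as np

def fid(feats_real, feats_gen):
    """Frechet Inception Distance between two feature sets of shape
    (n_samples, n_features): ||m - m_hat||^2 + Tr(C + C_hat - 2 (C C_hat)^(1/2))."""
    m1, m2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_gen, rowvar=False)
    # Tr((C1 C2)^(1/2)) equals the sum of square roots of the eigenvalues of C1 @ C2.
    eigvals = np.linalg.eigvals(c1 @ c2)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return float(np.sum((m1 - m2) ** 2) + np.trace(c1 + c2) - 2.0 * tr_sqrt)
```

Identical feature sets give an FID of (numerically) zero, and the score grows as the feature distributions drift apart.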
Table 1 .
Detailed parameters of the dual-light camera
Table 3 .
Overall appearance evaluation under 50% training ratio on AVIID-1. The best results are highlighted in bold. Downloaded from https://spj.science.org on January 04, 2024.
Table 4 .
Overall appearance evaluation under 80% training ratio on AVIID-1. The best results are highlighted in bold.
Table 5 .
RmAP under the Faster RCNN object detection model on AVIID-1. The best results are highlighted in bold.
Table 6 .
RmAP under the YOLOv3 object detection model on AVIID-1. The best results are highlighted in bold.
Table 7 .
RmAP under the YOLOv5 object detection model on AVIID-1. The best results are highlighted in bold.
Table 8 .
RmAP under the YOLOx object detection model on AVIID-1. The best results are highlighted in bold.
Table 9 .
Overall appearance evaluation under 50% training ratio on AVIID-2. The best results are highlighted in bold.
Table 10 .
Overall appearance evaluation under 80% training ratio on AVIID-2. The best results are highlighted in bold.
Table 11 .
RmAP under the Faster RCNN object detection model on AVIID-2. The best results are highlighted in bold.
quality and even perform worse than other approaches. For instance, DRIT has achieved much lower RmAP values than DCLGAN on the Faster RCNN and YOLOv5 object detection algorithms under the 50% training ratio with 3 kinds of IOU settings.
Table 12 .
RmAP under the YOLOv3 object detection model on AVIID-2. The best results are highlighted in bold.
Table 13 .
RmAP under the YOLOv5 object detection model on AVIID-2. The best results are highlighted in bold.
Table 14 .
RmAP under the YOLOx object detection model on AVIID-2. The best results are highlighted in bold.
Fig. 8. Some generated images for each method under 50% training ratio on AVIID-2.
Table 15 .
Overall appearance evaluation under 50% training ratio on AVIID-3. The best results are highlighted in bold.
Table 16 .
Overall appearance evaluation under 80% training ratio on AVIID-3. The best results are highlighted in bold.
Table 17 .
RmAP under the Faster RCNN object detection model on AVIID-3. The best results are highlighted in bold.
Table 20 .
RmAP under the YOLOx object detection model on AVIID-3. The best results are highlighted in bold.
Assessment of Regional Spatiotemporal Variations in Drought from the Perspective of Soil Moisture in Guangxi, China
Understanding the changes in regional droughts is important for promoting overall sustainable development. However, the spatiotemporal dynamics of soil droughts in Guangxi under the background of global warming and regional vegetation restoration have not been studied extensively, and the potential causes are scarcely understood. Here, using TerraClimate soil moisture data, we constructed a monthly standardized soil moisture index (SSMI), analyzed the seasonal and annual spatiotemporal distribution of droughts from the perspective of soil moisture, and studied past soil drought events in Guangxi. Migration methods of the drought centroid, trend analysis, and principal component decomposition were used. In the interannual dynamics, the overall SSMI increased, indicating that the soil drought situation was gradually alleviated in Guangxi. Further, the frequency of extreme and severe droughts decreased with time, mainly in autumn and winter. During early drought stages, the migration path was short, and it extended as the droughts progressed. Ocean temperature and soil moisture were strongly correlated, indicating that abnormal ocean surface temperature may drive soil moisture. This study provides scientific guidance for the early warning, prevention, and mitigation of losses associated with soil droughts in Guangxi and serves as a valuable reference for understanding the impacts of large-scale climate anomalies on soil moisture.
Introduction
Drought is a major natural disaster that severely affects ecosystems and humans [1][2][3]. It is generally manifested as soil water shortage; droughts last for long periods, cover wide areas, occur frequently, and affect large populations [4]. China is largely an agricultural country facing frequent droughts, which cause huge economic losses [5][6][7]. Therefore, strengthening drought monitoring, especially at large scales with high spatiotemporal continuity, is necessary; it can facilitate real-time dynamic capture of drought occurrence and development and provide a reference for decision making to undertake timely and effective mitigation measures.
Previously, studies have been conducted on methods to monitor and evaluate droughts objectively, accurately, and quantitatively [1]. Generally, several drought assessment indicators are constructed using observation factors such as precipitation and temperature.

The study area lies in the Guangdong and Guangxi hills, south of the North Bay. The terrain of this region is flat in the middle and south, which are in turn surrounded by mountains and plateaus, and the average altitude of the area is 802 m. The entire terrain inclines from northwest to southeast. As a typical subtropical humid monsoon area, Guangxi receives abundant annual precipitation (1500~2000 mm) with uneven spatiotemporal distribution, and the average annual temperature is relatively high, between 16~23 °C. Furthermore, karst hills and depressions are widely distributed [9]. Due to the special geological environment of the karst areas in Guangxi, atmospheric precipitation can easily leak into deep underground layers and become deeply buried groundwater, forming a pattern of water and soil separation and resulting in surface drought due to soil water shortage. At present, rocky desertification in the karst areas of Guangxi has become the most serious eco-environmental problem restricting the sustainable development of Southwest China, and soil humidity is the key factor. Therefore, the study of soil moisture in Guangxi has become an important measure for the ecological restoration and reconstruction of the region.
Soil Moisture Data
Monthly TerraClimate soil moisture data from January 1990 to November 2018 were used in this study. The spatial resolution of the data is 1/24° (~4 km). TerraClimate includes the requisite variables for calculating energy-based reference potential evapotranspiration and a water balance model [21]. TerraClimate integrates satellite and climatic data and offers high accuracy, a wide detectable range, and high spatiotemporal resolution [21]. In this study, the soil moisture considered includes all water below the surface except groundwater, rather than only plant-root or surface soil water. The soil moisture data were acquired from TerraClimate: Monthly Climate and Climatic Water Balance for Global Terrestrial Surfaces, http://www.climatologylab.org/terraclimate (accessed on 11 August 2019).
Standardized Soil Drought Index
SSMI is a standardized anomaly of remotely sensed soil moisture data from 1990 to 2018. We used soil moisture data in TerraClimate to calculate the SSMI to characterize agricultural drought.
$$\mathrm{SSMI}_{i,j}=\frac{SM_{i,j}-\overline{SM}_{j}}{\sigma_{j}}$$

Here, i is the observation year from 1990 to 2018, j is the observation month from January to December, SM_{i,j} is the soil moisture in month j of year i, and \overline{SM}_{j} and \sigma_{j} are the multi-year average and standard deviation of soil moisture in month j, respectively. A detailed description of this method can be found in previous studies [24,25]. SSMI is dimensionless and is used to detect drought. When SSMI is greater than 0, conditions can be considered wetter than in the same period of other years; otherwise, they are drier. In this study, drought was classified into four levels: slight (SSMI range: −0.5 to 0), moderate (−1 to −0.5), severe (−1.5 to −1), and extreme (−2 to −1.5). If the SSMI value in a certain month from 1990 to 2018 is lower than −1.5, it represents an extreme drought event.
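The standardization and the four-level classification above can be sketched in a few lines of Python (the function names are ours; the input is assumed to be a year-major monthly soil moisture series):

```python
import numpy as np

def ssmi(sm, n_years, n_months=12):
    """Standardized soil moisture index: (SM_ij - monthly mean) / monthly std.

    sm: 1-D series of length n_years * n_months, ordered year-major
    (Jan of year 1, Feb of year 1, ..., Dec of the last year).
    """
    sm = np.asarray(sm, dtype=float).reshape(n_years, n_months)
    mean_j = sm.mean(axis=0)           # multi-year mean for each calendar month
    std_j = sm.std(axis=0)             # multi-year std for each calendar month
    return ((sm - mean_j) / std_j).ravel()

def drought_level(s):
    """Map an SSMI value to the paper's drought classes."""
    if s >= 0:
        return "none"
    if s > -0.5:
        return "slight"
    if s > -1.0:
        return "moderate"
    if s > -1.5:
        return "severe"
    return "extreme"
```

By construction, the standardized series has zero mean and unit standard deviation for every calendar month, so values from different months are directly comparable.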
Drought Frequency
Drought frequency is defined as the number of droughts exceeding a certain risk threshold per unit time. In this study, the drought frequency was defined as the ratio of the number of drought months of a given grade to the total number of months in the study period (468 months). For example, if the number of months with SSMI lower than −1.5 for a grid cell from 1990 to 2018 was 10, the frequency of extreme drought was 10/468, or approximately 2.14%. The spatial frequency of droughts of different grades was calculated. Subsequently, the spatial frequencies of the different drought levels in spring (March-May), summer (June-August), autumn (September-November), and winter (December-February) were calculated to discuss the seasonal dynamics of soil drought frequency.
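The worked example above (10 extreme-drought months out of 468) corresponds directly to a threshold count; a small sketch with a hypothetical helper name:

```python
import numpy as np

def drought_frequency(ssmi_series, threshold=-1.5):
    """Fraction of months whose SSMI falls below a drought threshold,
    e.g. 10 months below -1.5 out of 468 months gives 10/468."""
    ssmi_series = np.asarray(ssmi_series)
    return float(np.count_nonzero(ssmi_series < threshold) / ssmi_series.size)
```

Applying this per grid cell, with the threshold set to each class boundary in turn, yields the spatial frequency maps of the different drought grades.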
Migration Path of Droughts
The centroid, used to study the migration of matter and energy, is an important method for studying geographical distributions [26]. In this study, the centroid model was used to study the spatiotemporal migration characteristics of soil dryness; the distance of centroid movement reflects the spatial difference in the degree of SSMI change. Further, we used the migration of drought centers to describe the spatiotemporal evolution of soil drought. We first used the Mean Center tool in the statistical analysis toolbox of ArcGIS 10.0 to obtain the spatial centroid of the SSMI drought index and plotted the centroid migration of two extreme soil drought events to better describe the spatiotemporal evolution of soil drought. The drought centers were then connected to record the track, path length, direction, and velocity characteristics of the droughts.
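Outside ArcGIS, the mean-center step can be reproduced as an intensity-weighted centroid. The sketch below is our own stand-in for the Mean Center tool, with −SSMI (clipped at zero) used as an illustrative drought-intensity weight so that wet cells do not pull the drought centroid:

```python
import numpy as np

def mean_center(x, y, weight=None):
    """Weighted mean center of grid cells (analogue of ArcGIS's Mean Center)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = np.ones_like(x) if weight is None else np.asarray(weight, float)
    return float((w * x).sum() / w.sum()), float((w * y).sum() / w.sum())

def migration_path(monthly_ssmi, x, y):
    """Drought-centroid track over the months of one drought event;
    each element of monthly_ssmi holds the SSMI of every grid cell."""
    path = []
    for s in monthly_ssmi:
        intensity = np.clip(-np.asarray(s, float), 0.0, None)  # dry cells only
        path.append(mean_center(x, y, intensity))
    return path
```

Connecting consecutive centers gives the track whose length, direction, and speed are reported for the two extreme drought events.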
Empirical Orthogonal Function Decomposition
Empirical orthogonal function (EOF) decomposition, also known as eigenvector analysis, is a method for analyzing the structural features of matrix data and extracting its main features [27]. Each eigenvector corresponds to a spatial pattern, also known as a spatial feature vector or spatial mode, which reflects the spatial distribution characteristics of the factor field to a certain extent. The principal component (PC), also known as the time coefficient, corresponds to the temporal variation and reflects how the weight of the corresponding spatial mode changes with time. To investigate the causes of soil dryness in Guangxi, we further analyzed the correlation between the main variation mode of the SSMI (EOF-1), its corresponding principal component (PC-1), and the sea surface temperature (SST) from the perspective of teleconnection.

Figure 2 shows that, since 1990, drought and flood disasters occurred alternately in Guangxi, with slight or serious droughts and floods occurring almost every year; additionally, the temporal distribution of different types of droughts and floods is evident from the figure. Serious soil drought occurred every five to six years on average in Guangxi, with multiple droughts observed in 1993, 1998, and 2004. In general, Guangxi experienced many soil droughts during 1990-2018, with the drought duration and intensity being generally heavy. After 2000, the soil droughts in arid areas decreased (Figure 2).
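EOF-1 and PC-1 as described in the methods can be obtained from the (time × space) anomaly matrix with a singular value decomposition; a compact sketch (our own helper, equivalent in spirit to standard EOF analysis):

```python
import numpy as np

def eof(field, n_modes=1):
    """Leading EOF modes of a (time, space) field via SVD.

    Returns (eofs, pcs, explained): spatial modes (n_modes, space),
    principal-component time series (time, n_modes), and the fraction
    of total variance explained by each mode."""
    field = np.asarray(field, float)
    anom = field - field.mean(axis=0)       # anomaly: remove each cell's time mean
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    explained = (s ** 2) / (s ** 2).sum()
    return vt[:n_modes], u[:, :n_modes] * s[:n_modes], explained[:n_modes]
```

PC-1 (the first column of the returned time series) can then be correlated with SST fields to assess the teleconnection.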
Damaged area is defined as the ratio between the number of pixels with a certain level of drought and the total number of pixels in this area, and then the ratio is multiplied by the total area of Guangxi to obtain the regional drought damage area.
Identification of Variation Characteristics of Soil Drought
Further, the SSMI showed evident seasonal characteristics in Guangxi (Figure 3), with the magnitude of variation being highest in autumn. Serious soil droughts were observed in the autumns of 1992 and 1998, but none have occurred since 2010. However, the SSMI trends in winter and spring were similar. Overall, the seasonal soil moisture dynamics in Guangxi showed changes similar to the interannual dynamics. After 2012, the observed loss of soil moisture in each season was alleviated.
Statistics of Drought Frequency
The results for the frequency of slight to extreme drought occurrence (Figure 4) indicated that the frequency of slight droughts in Guangxi was 16.37-34.77%, with the highest frequency in central Guangxi, followed by the southern region. The frequency of slight droughts in most other areas was less than 30%. The frequency of moderate droughts was 10.63-21.84% and was less than 18% in most areas. The frequency of severe droughts was less than 5% in most areas, with the lowest frequency being 2.29%. The frequency of extreme droughts was extremely low (0-6%), with values less than 2% in most areas. The order of the average frequencies of the drought levels (Figure 4) in Guangxi was slight drought (26.84%) > moderate drought (16.15%) > severe drought (6.11%) > extreme drought (2.64%). Overall, no significant geographical difference was observed in the soil droughts.

Figure 5 shows that slight droughts mostly occurred in autumn and winter, lasting for more than 20 months. During spring, the probability of slight droughts in the southwest was higher than that in the southeast. During summer, soil droughts occurred in fewer months. The spatial variation of moderate droughts was similar to that of slight droughts, with soil droughts occurring in autumn and winter. Further, the number of months with severe and extreme droughts was relatively small, and the number of months with sudden droughts in each season was mostly less than five. The underlying surface in Guangxi is relatively uniform, and the water and heat redistribution of this region does not produce evident regional drought differences, resulting in no spatial heterogeneity. However, severe and extreme droughts occurred in the fewest months. These trends are important factors that affect crop production and carbon accumulation.
Spatial Evolution Characteristics of Two Extremely Severe Soil Droughts
Two extreme soil drought events, which occurred in 1998 and 2003, were selected from the period 1990-2018 using the drought migration method (Figure 6). The migration direction of the drought cores indicated that the two droughts were mainly concentrated in central Guangxi. The 1998 and 2003 soil droughts followed a similar northeast-to-southwest trajectory. The migration paths of the two soil droughts were shorter at the initial stage of formation and extended later with the aggravation and persistence of the droughts.
Correlation between Soil Moisture Anomaly and Ocean Surface Temperature
The soil moisture anomaly is regulated by precipitation, and precipitation differences are mainly caused by anomalies in ocean temperature [28,29]. Therefore, to understand the importance for soil moisture in Guangxi of the atmospheric circulation driven by SST anomalies, we examined the teleconnection between soil moisture and ocean temperature (Figure 7). The spatial pattern of the dominant mode (EOF-1) obtained from the EOF analysis was similar to the soil moisture trend during 1990-2018, accounting for 66.9% of the total square covariance in Guangxi. Overall, PC-1 showed that soil moisture in Guangxi increased over time. Correlation analysis showed that PC-1 and SST were significantly positively correlated (p < 0.05), suggesting that SST might be an important teleconnection factor affecting soil moisture in Guangxi.
Discussion
As soil moisture plays an important role in drought research, an assimilation data product is a useful alternative in the absence of long-term, consistent soil moisture observations at the national scale [15,30,31]. In this study, the TerraClimate soil moisture product was used to construct the SSMI. The soil drought index derived from this data set was well suited to monitoring regional droughts, and it accurately describes the spatiotemporal characteristics of regional soil water gain and loss. Using this index, we observed that droughts in Guangxi are mainly slight and moderate, and severe and extreme droughts are relatively rare, occurring roughly once every five years. Spatially, the occurrence frequency is low in the middle and high in the east and west. The majority of the soil droughts in Guangxi evolved from moderate droughts, and the probability of sudden, short, and strong droughts was low. This provides additional time for early warning and prevention from the onset of a drought to the onset of abnormal drought, which further helps reduce the negative implications of droughts.
According to the season of occurrence, soil droughts can be divided into spring, summer, autumn, and winter droughts [15]. The drought pattern in Guangxi differed under the influence of the monsoon circulation and tropical cyclones: the frequency of autumn droughts was the highest, followed by winter droughts, while spring and summer droughts were infrequent. Spring droughts refer to droughts between March and May [32].
During spring, crops bloom, grow, and develop in Guangxi; moreover, it is the sowing and emergence season for spring plants. Spring precipitation in Guangxi is relatively low, and below-normal precipitation can cause serious droughts that not only affect summer vegetation productivity but also create poor spring sowing conditions and affect the growth and harvest of autumn crops and carbon accumulation. Summer conditions affect vegetation productivity and ecosystem functioning [33] and are most vulnerable to the monsoon. The frequency of summer droughts in Guangxi was extremely low, possibly because Guangxi lies in the monsoon region. In summer, the strong East Asian monsoon brings a large water mass from the ocean, and the rain-forming clouds over Guangxi lead to abundant regional precipitation, which improves the soil water content. During autumn, autumn-harvest plants mature and overwintering plants sprout and are sown. Autumn droughts occur between September and November. They may affect not only the autumn vegetation productivity of the current year but also the summer vegetation productivity of the next year [34]. Autumn droughts occur almost once every two years in Guangxi, with slight droughts being more common. The frequency of these droughts was higher in the central-eastern regions than in the western regions. Moreover, the frequencies of moderate, severe, and extreme droughts in this season were also significantly higher than those in other seasons. In addition, autumn is generally the water-storage season, characterized by long dry spells and little rain. The resulting reductions in runoff cause insufficient water reserves for water conservancy projects, creating difficulties in water use during winter and spring. Winter droughts occur from December to February of the next year.
In Guangxi, winter droughts occur once in two years, with slight droughts having a higher frequency in the central and southwest regions than in the northeast regions. Overall, the frequency of soil droughts in autumn and winter in Guangxi was relatively high, and most were slight or moderate droughts. These findings suggest that the impact of autumn and winter soil droughts should be considered when assessing the impacts of droughts on regional crop production and ecosystems.
From the perspective of temporal characteristics, the two major soil droughts in Guangxi (1998 and 2003) lasted for more than six months. Every natural phenomenon has its own process of formation, occurrence, development, and extinction [15]. For example, floods tend to form quickly, within a few days or even hours [35], and hurricanes form even faster, within hours or less [36]. In contrast, the occurrence and development of soil droughts is much slower, spanning several months or seasons [15]. Long-term soil water deficit affects regional crop production and domestic and ecological water demand [13,30]. In addition, regarding the spatial scale, the two major soil droughts in Guangxi were extensive. Most areas of Guangxi have a subtropical monsoon humid climate, which reduces the probability of soil droughts; however, once soil droughts occur, soil moisture in the entire region is reduced. Some studies suggest that, although Guangxi is a humid region, measures to cope with soil water deficits against the background of more frequent extreme climate events in the future will help reduce losses in agricultural production and other associated economic losses [37].
Regarding variability, as soil droughts are temporary phenomena, they directly reflect persistent anomalies in atmospheric circulation and major weather systems. The time and intensity of monsoon onset and retreat and the duration of monsoon interruption are directly related to soil drought [38]. Atmospheric circulation anomalies refer to abnormal changes in the development, mutual configuration and interaction, and intensity and location of atmospheric circulation systems, all of which directly cause large-scale droughts and floods [39,40]. Monsoon circulation anomalies imply that the timing, position, advance and retreat speed, and intensity of the monsoon change considerably compared with those of normal years [41,42], which is often the reason for the frequent occurrence of soil droughts in the monsoon region. Abnormal atmospheric or monsoon circulation results in less precipitation in an area than under normal conditions; when the degree and duration of the precipitation deficit reach a certain threshold, meteorological droughts occur [43]. As precipitation is the main source of water supply, meteorological droughts may induce soil droughts [15]. During the early stage of a meteorological drought, the soil moisture content does not decrease immediately owing to the regulation and storage capacity of the soil [44]. However, reduced precipitation is generally accompanied by a temperature increase, which further enhances evapotranspiration and excessive water consumption in the vadose zone. With other conditions constant, as the meteorological drought intensifies and spreads, precipitation and runoff decrease while water in the vadose zone continues to be consumed without replenishment, and the soil water condition further deteriorates [45,46].
Our findings indicated a strong positive correlation between soil moisture in Guangxi and the ocean temperature in the surrounding sea area, in agreement with our hypothesis that ocean surface temperature anomalies create atmospheric or monsoon circulation anomalies, which in turn affect rainfall and soil moisture anomalies in Guangxi.
This study addresses the lack of previous research on drought in Guangxi from the perspective of soil moisture, but the results still carry some uncertainties and limitations. First, this study uses only the TerraClimate soil moisture product, which has high spatial resolution but whose applicability in karst areas has not been fully evaluated; nevertheless, among model- and reanalysis-based soil moisture products, TerraClimate is comparatively reliable because it is corrected with remote sensing and models, and it is applied to Guangxi here for the first time. In addition, many factors affect soil moisture dynamics; this study considers only the most fundamental driving factor, ocean surface temperature, which may not fully explain the long-term evolution of soil drought in Guangxi.
Conclusions
In this study, the SSMI model was constructed using the TerraClimate soil moisture data, and the applicability of SSMI for soil drought monitoring in Guangxi was evaluated. The following conclusions were drawn: (1) The annual, autumn and winter soil droughts in Guangxi were moderate from 1990 to 2018, and the probability of moderate or higher-grade droughts after 2005 was much lower than that before 2005. (2) Soil droughts in Guangxi were mainly light and moderate, and the possibility of severe and extreme droughts was relatively low. (3) Two severe soil droughts, in 1998 and 2003, affected a large area and persisted for a long duration. (4) The principal component variables of ocean surface temperature and soil moisture showed a strong positive correlation, implying that ocean surface temperature anomalies may be the root driving force of soil moisture variation in Guangxi. These findings provide scientific guidance for early warning, prevention, and mitigation of the social, ecological, and economic losses associated with soil droughts in Guangxi. Moreover, the results serve as a valuable reference for understanding the impacts of large-scale climate anomalies on soil moisture.
DEAL: Difficulty-aware Active Learning for Semantic Segmentation
Active learning aims to address the paucity of labeled data by finding the most informative samples. However, when applied to semantic segmentation, existing methods ignore the segmentation difficulty of different semantic areas, which leads to poor performance on hard semantic areas such as tiny or slender objects. To deal with this problem, we propose a semantic Difficulty-awarE Active Learning (DEAL) network composed of two branches: a common segmentation branch and a semantic difficulty branch. For the latter branch, supervised by the segmentation error between the segmentation result and GT, a pixel-wise probability attention module is introduced to learn semantic difficulty scores for different semantic areas. Finally, two acquisition functions are devised to select the most valuable samples with semantic difficulty. Competitive results on semantic segmentation benchmarks demonstrate that DEAL achieves state-of-the-art active learning performance and improves the performance of hard semantic areas in particular.
Introduction
Semantic segmentation is a fundamental task for various applications such as autonomous driving [1,2], biomedical image analysis [3,4,5], remote sensing [6] and robot manipulation [7]. Recently, data-driven methods have achieved great success with large-scale datasets [8,9]. However, the tremendous annotation cost has become an obstacle to applying these methods widely in practical scenarios. Active Learning (AL) can be the right solution by finding the most informative samples: annotating those selected samples provides sufficient supervision while dramatically reducing the number of labeled samples required.
Previous methods can be mainly categorized into two families: uncertainty-based [10,11,12,13] and representation-based [14,15,16]. However, many works [10,12,14,16] are only evaluated on image classification benchmarks, and there has been considerably less work specifically designed for semantic segmentation. Traditional uncertainty-based methods like Entropy [17] and Query-By-Committee (QBC) [18] have demonstrated their effectiveness in semantic segmentation [19,20]. However, all of them are based solely on the uncertainty reflected on each pixel, without considering the semantic difficulty and the actual labeling scenarios.
In this paper, we propose a semantic Difficulty-awarE Active Learning (DEAL) method that takes the semantic difficulty into consideration. Due to class imbalance and shape disparity, a noticeable difference in semantic difficulty exists among the different semantic areas in an image. To capture this difference, we adopt a two-branch network composed of a semantic segmentation branch and a semantic difficulty branch. For the former, we adopt a common segmentation network. For the latter, we leverage the wrongly predicted result as the supervision, termed the error mask: a binary image where the right and wrong positions have values of 0 and 1, respectively. As illustrated in Fig. 1(e), we color these wrong positions for better visualization. Then, a pixel-wise probability attention module is introduced to aggregate similar pixels into areas and learn the proportion of misclassified pixels as the difficulty score for each area. Finally, we obtain the semantic difficulty map in Fig. 1(b).
Then two acquisition functions are devised based on the map. One is the Difficulty-aware uncertainty Score (DS), combining uncertainty and difficulty. The other is the Difficulty-aware semantic Entropy (DE), based solely on the difficulty. Experiments show that the learned difficulty scores have a strong connection with the standard evaluation metric IoU, and that DEAL can effectively improve the overall AL performance, and the IoU of the hard semantic classes in particular.
In summary, our major contributions are as follows: 1) Proposing a new AL framework incorporating the semantic difficulty to select the most informative samples for semantic segmentation. 2) Utilizing error mask to learn the semantic difficulty. 3) Competitive results on CamVid [21] and Cityscapes [8].
Related Work
AL for semantic segmentation The core of AL is measuring the informativeness of the unlabeled samples. Modern AL methods can be mainly divided into two groups: uncertainty-based [10,11,12,13] and representation-based [14,15,16]. The latter views the AL process as an approximation of the entire data distribution and queries samples to increase data diversity, such as Core-set [14] and VAAL [15], which can be directly used in semantic segmentation. There are also some methods specially designed for semantic segmentation, which can likewise be divided into two groups: image-level [4,11,19,22] and region-level [23,24,20].
Image-level methods use the complete image as the sampling unit. [4] propose suggestive annotation (SA): they train a group of models on various labeled sets obtained with bootstrap sampling and select samples with the highest variance. [11] employ MC dropout to measure uncertainty for melanoma segmentation. [19] adopt the QBC strategy and propose a cost-sensitive active learning method for intracranial hemorrhage detection. [22] build a batch-mode multi-clue method, incorporating edge information with the QBC strategy and graph-based representativeness. All of them are based on a group of models and are time-consuming when querying a large unlabeled data pool.
Region-level methods only sample the informative regions from images. [23] combine the MC dropout uncertainty with an effort estimation regressed from the annotation click patterns, which is hard to access for many datasets. [24] propose ViewAL and use the inconsistencies in model predictions across viewpoints to measure the uncertainty of super-pixels, which is specially designed for RGB-D data. [20] model a deep Q-network-based query network as a reinforcement learning agent, trying to learn sampling strategies based on prior AL experience. In this work, we incorporate the semantic difficulty to measure the informativeness and select samples at the image level. Region-level methods will be studied in future work.
Self-attention mechanism for semantic segmentation The self-attention mechanism is first proposed by [25] in the machine translation task. Now, it has been widely used in many tasks [25,26,27,28] owing to its intuition, versatility and interpretability [29]. The ability to capture the long-range dependencies inspires many semantic segmentation works designing their attention modules. [30] use a point-wise spatial attention module to aggregate context information in a self-adaptive manner. [31] introduce an object context pooling scheme to better aggregate similar pixels belonging to the same object category. [32] replace the non-local operation [33] into two consecutive criss-cross operations and gather long-range contextual information in the horizontal and vertical directions. [34] design two types of attention modules to exploit the dependencies between pixels and channel maps. Our method also uses the pixel-wise positional attention mechanism in [34] to aggregate similar pixels.
Method
Before introducing our method, we first give the definition of the AL problem. Let (x a , y a ) be an annotated sample from the initial annotated dataset D a and x u be an unlabeled sample from a much larger unlabeled data pool D u . AL aims to iteratively query a subset D s containing the most informative m samples from D u . In what follows, we first give an overview of our difficulty-aware active learning framework, then detail the probability attention module and loss functions, and finally define two acquisition functions.
Difficulty-aware Active Learning
To learn the semantic difficulty, we exploit the error mask generated from the segmentation result. Our intuition is that these wrong predictions are what our model "feels" difficult to segment, which may relate to the semantic difficulty. Thus, we build a two-branch network generating the semantic segmentation result and the semantic difficulty map in a multi-task manner, best viewed in Fig. 2. The first branch is a common segmentation network, which is used to generate the error mask. The second branch is devised to learn the semantic difficulty map under the guidance of the error mask. This two-branch architecture is inspired by [13], in which a loss prediction module is attached to the task model to predict a reliable loss for x u , and samples with the highest losses are selected. In our task, by contrast, we dig deeper into the scene and analyze the semantic difficulty of each area.
As illustrated in Fig. 2, the first branch can be any semantic segmentation network. Assume S* is the output of the softmax layer and S^p is the prediction result after the argmax operation. With the segmentation result S^p and GT S^g, the error mask M^e can be computed by:

M^e_k = 0 if S^p_k = S^g_k, and M^e_k = 1 otherwise,

where S^p_k and S^g_k denote the k-th pixel value of the segmentation result and GT, and M^e_k is the k-th pixel value of the error mask.
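The error-mask construction described above reduces to a pixel-wise comparison; a minimal NumPy sketch (our own illustration, not the authors' code):

```python
import numpy as np

def error_mask(pred, gt):
    """Binary error mask: 1 where the predicted label disagrees with the GT label."""
    return (pred != gt).astype(np.uint8)

# Toy 2x2 example: only the top-right pixel is misclassified.
pred = np.array([[0, 1], [2, 2]])
gt = np.array([[0, 2], [2, 2]])
mask = error_mask(pred, gt)  # → [[0, 1], [0, 0]]
```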
The second branch is composed of two parts: a probability attention module and a simple 1 × 1 convolution layer used for binary classification. The softmax output of the first branch S* is directly used as the input of the second branch; it consists of C-channel probability maps, where C is the number of classes. We denote it as P ∈ R^{C×H×W}, and P_k is the probability vector for the k-th pixel. Using probability maps is naive but comes with two advantages. First, pixels with similar difficulty tend to have similar P_k. Second, pixels of the same semantic tend to have similar P_k. Combined with a pixel-wise attention module, we can easily aggregate these similar pixels and learn similar difficulty scores for them. In our learning scheme, the performance of the second branch depends strongly on the output of the first branch. However, there is little difference if we branch the two tasks earlier and learn independent features; we validate this in Sec. 5.2. The semantic difficulty learning process can be imagined as two steps. First, we learn a binary segmentation network with the supervision of the error mask M^e, where each pixel learns a semantic difficulty score. Second, similar pixels are aggregated into an area so that this score is spread among them. Finally, we obtain the semantic difficulty map M^d.
Probability Attention Module
In this section, we detail the probability attention module (PAM) in our task. Inspired by [34], we use this module to aggregate pixels with similar softmax probability. Given the probability maps P ∈ R^{C×H×W}, we first reshape them to P ∈ R^{C×K}, where K = H × W. Then the probability attention matrix A ∈ R^{K×K} can be computed from P^T P with a softmax operation as below:

A_{ji} = exp(P_i^T P_j) / Σ_{i=1}^{K} exp(P_i^T P_j),    Q_j = γ Σ_{i=1}^{K} A_{ji} P_i + P_j,

where A_{ji} is the i-th pixel's impact on the j-th pixel, P_j is the original probability vector of the j-th pixel, Q_j is the one after attention, and γ is a learnable weight factor. Finally, we get the probability maps Q ∈ R^{C×H×W} after attention.

Fig. 3: (a) Error inside the object. (b) Error on the object boundary.
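The aggregation step can be sketched with NumPy as below; `probability_attention`, the fixed initial `gamma`, and the toy data are our own assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def probability_attention(P, gamma=0.5):
    """Aggregate pixels with similar class-probability vectors.

    P: (C, H, W) softmax probability maps.
    A[j, i] is pixel i's impact on pixel j (each row of A sums to 1);
    the attended vector is Q_j = gamma * sum_i A[j, i] * P_i + P_j.
    """
    C, H, W = P.shape
    Pf = P.reshape(C, H * W)            # (C, K) with K = H * W
    A = softmax(Pf.T @ Pf, axis=1)      # (K, K) pairwise-similarity attention
    Q = gamma * (Pf @ A.T) + Pf         # residual add keeps the original signal
    return Q.reshape(C, H, W)

rng = np.random.default_rng(0)
P = softmax(rng.random((3, 4, 4)), axis=0)   # fake 3-class probability maps
Q = probability_attention(P)
```

In the full model γ would be a learnable parameter updated by backpropagation; here it is frozen only to keep the sketch self-contained.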
Let's take the segmentation result of the two bicyclists in Fig. 2 to explain the role of PAM, as it reflects two typical errors in semantic segmentation: (1) error inside the object (the smaller one, b 1 ); (2) error on the object boundary (the larger one, b 2 ), as shown in Fig. 3. Assuming our attention module aggregates pixels from the same object together, the correctly predicted part of the object learns 0 while the wrong part learns 1. Since b 1 has a larger fraction of wrong area, it tends to learn larger difficulty scores than b 2 . Similar to objects, pixels from the same semantic class, such as road, sky and buildings, can also learn similar difficulty scores. The ablation study in Sec. 5.1 also demonstrates that PAM learns smoother difficulty scores for various semantic areas.
Some traditional methods also employ the softmax probabilities to measure uncertainty, such as least confidence (LC) [35], margin sampling (MS) [36] and Entropy [17]. The most significant difference between our method and these methods is that we consider difficulty at the semantic level with an attention module, rather than measuring the uncertainty of each pixel alone. QBC [18] can use a group of models, but it still stays at the pixel level. To clearly see the difference, we compare our semantic difficulty map with the uncertainty maps of these methods in Fig. 4. The first row shows the maps of these methods, which are loyal to the uncertainty of each pixel. For example, some pixels belonging to sky can have the same uncertainty as traffic light and traffic sign. Supposing an image has many pixels with high uncertainty belonging to the easier classes, it will be selected by these methods. Our semantic difficulty map (first in the second row), by contrast, can serve as a difficulty attention and distinguish the more valuable pixels. As illustrated in the second row, combined with our difficulty map, the uncertainty of easier semantic areas like sky is suppressed while that of harder semantic areas like traffic sign is preserved.
Loss Functions
Loss of Semantic Segmentation To make an equitable comparison with other methods, we use the standard cross-entropy loss for the semantic segmentation branch, which is defined as:

L_seg = (1/K) Σ_{k=1}^{K} ℓ(S*_k, S^g_k) + R,

where S*_k and S^g_k are the segmentation output and ground truth for pixel k, ℓ(·) is the cross-entropy loss, K is the total pixel number, and R is the L2-norm regularization term.
Loss of Semantic Difficulty For the semantic difficulty branch, we use an inverted weighted binary cross-entropy loss, as there is a considerable imbalance between the right and wrong areas of the error mask:

L_dif = −(1/K) Σ_{k=1}^{K} [ λ_1 · 1(M^e_k = 1) · log M^d_k + λ_2 · 1(M^e_k = 0) · log(1 − M^d_k) ],

where M^d_k and M^e_k are the difficulty prediction and error mask ground truth for pixel k, 1(·) is the indicator function, and λ_1 and λ_2 are dynamic weight factors.
Final Loss Our final training objective is a combination of Eq. 3 and Eq. 4, computed as:

L = L_seg + α · L_dif,

where α is a weight factor and set to 1 in the experiments.
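A minimal NumPy sketch of the three losses follows, under our own assumptions about the weighting: the paper calls λ_1 and λ_2 "dynamic weight factors" without specifying them, so inverse class frequencies are used here as one plausible choice, and the regularization term R is omitted.

```python
import numpy as np

def seg_loss(S_star, S_g):
    """Mean pixel-wise cross-entropy. S_star: (C, K) softmax probs, S_g: (K,) labels.
    The L2 regularization term R is omitted here for brevity."""
    K = S_g.shape[0]
    return -np.log(S_star[S_g, np.arange(K)] + 1e-12).mean()

def difficulty_loss(M_d, M_e):
    """Weighted binary cross-entropy on the difficulty prediction vs. error mask.
    lam1 / lam2 invert the class frequencies so the rarer (wrong) pixels count
    more -- an assumed scheme, not necessarily the authors' exact one."""
    eps = 1e-12
    pos = M_e.mean()                       # fraction of wrong pixels
    lam1, lam2 = 1.0 - pos, pos            # inverted weights
    return -(lam1 * M_e * np.log(M_d + eps)
             + lam2 * (1.0 - M_e) * np.log(1.0 - M_d + eps)).mean()

def total_loss(S_star, S_g, M_d, M_e, alpha=1.0):
    """Combined objective L = L_seg + alpha * L_dif."""
    return seg_loss(S_star, S_g) + alpha * difficulty_loss(M_d, M_e)
```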
Acquisition Functions
Samples from D u are usually ranked with a scalar score in AL. However, semantic segmentation is a dense-classification task, and many methods output a score for each pixel of the image, including our semantic difficulty map. Thus, it is quite important to design a proper acquisition function. Below are the two functions we have designed. Difficulty-aware uncertainty Score (DS) Assuming M^c is the uncertainty map generated with a traditional method like Entropy, we define the equation below to make each pixel aware of its semantic difficulty.
S_DS = (1/K) Σ_{k=1}^{K} M^c_k · M^d_k,

where M^c_k and M^d_k are the uncertainty score and difficulty score of the k-th pixel, K is the total pixel number, and S_DS is the average difficulty-aware uncertainty score for selecting samples with the highest uncertainty.
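DS is just the difficulty-weighted mean of a per-pixel uncertainty map; a small illustration (toy arrays are ours):

```python
import numpy as np

def ds_score(M_c, M_d):
    """Average difficulty-aware uncertainty over all pixels (the DS function)."""
    return float((M_c * M_d).mean())

# An image whose uncertain pixels lie in easy areas should get a lower score
# than one whose uncertainty concentrates in difficult areas.
uncertainty = np.array([[1.0, 1.0], [0.0, 0.0]])
difficulty_easy = np.array([[0.1, 0.1], [0.9, 0.9]])
difficulty_hard = np.array([[0.9, 0.9], [0.1, 0.1]])
```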
Difficulty-aware semantic Entropy (DE) This acquisition function is inspired by the laddered semantic difficulty reflected on M^d. Usually, pixels from the same semantic area have almost the same semantic difficulty scores, best viewed in Fig. 5(c). In this example, we quantify the difficulty of pixels in Fig. 5(a) into 6 levels in Fig. 5(d-i), with difficulty scores gradually increasing from level 1 to level 6. Generally, if we quantify the difficulty in an image into L levels, the difficulty entropy acquisition function can be defined as below to query samples with more balanced semantic difficulty, which can be viewed as a representation-based method at the image scale:

S_DE = − Σ_{l=1}^{L} (K_l / K) · log(K_l / K),

where K_l is the number of pixels falling in the l-th difficulty level, K is the total pixel number, L is the number of quantified difficulty levels, and S_DE is the difficulty-aware semantic entropy for selecting samples with more balanced semantic difficulty. Our full algorithm of DEAL is shown in Algorithm 1.
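Quantizing the difficulty map into L levels and scoring its level histogram by entropy can be sketched as follows (L = 6 as in the Fig. 5 example; the uniform bucketing is our own assumption):

```python
import numpy as np

def de_score(M_d, L=6):
    """Entropy of the quantized difficulty-level histogram (higher = more balanced)."""
    levels = np.clip((M_d * L).astype(int), 0, L - 1)   # bucket scores into L levels
    p = np.bincount(levels.ravel(), minlength=L) / levels.size
    p = p[p > 0]                                        # drop empty levels (0*log0 = 0)
    return float(-(p * np.log(p)).sum())
```

A map whose pixels all share one difficulty level scores 0; a map spreading evenly over all levels scores log(L), the maximum.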
Algorithm 1: Difficulty-aware Active Learning Algorithm
Input: D a , D u , budget m, AL query times N , initialized network parameter Θ
Input: iterations T , weight factor α, quantified difficulty levels L (optional)
Output: D a , D u , optimized Θ
1: for n = 1, 2, ..., N do
2:   Train the two-branch difficulty learning network on D a
3:   for t = 1, 2, ..., T do
4:     Sample (x a , y a ) from D a
5:     Compute the segmentation output S* and result S p
6:     Obtain M e according to Eq. 1
7:     Compute the difficulty prediction M d
8:     Compute L seg , L dif , L according to Eq. 3, Eq. 4, Eq. 5
9:     Update Θ using gradient descent
10:  end for
11:  Rank x u based on Eq. 6 or Eq. 7
12:  Select D s with top m samples
13:  Annotate D s by oracles
14:  D a ← D a + D s
15:  D u ← D u − D s
16: end for
17: return D a , D u , optimized Θ
Experiments and Results
In this section, we first describe the datasets we use to evaluate our method and the implementation details, then the baseline methods, finally compare our results with these baselines.
Experimental Setup
Datasets We evaluate DEAL on two street scene semantic segmentation datasets: CamVid [21] and Cityscapes [8]. For Cityscapes, we randomly select 300 samples from the training set as the validation set, and the original validation set serves as the test set, following [15]. The detailed configurations are listed in Table 1. For each dataset, we first randomly sample 10% of the data from the training set as the initial annotated dataset D a , then iteratively query 5% new data D s from the remaining training set, which serves as the unlabeled data pool D u . Considering that samples in the street scenes have high similarities, we first randomly choose a subset from D u , then query m samples from the subset, following [37].

Implementation Details We adopt the Deeplabv3+ [38] architecture with a Mobilenetv2 [39] backbone. For each dataset, we use mini-batch SGD [40] with momentum 0.9 and weight decay 5e−4 in training. The batch size is 4 and 8 for CamVid and Cityscapes, respectively. For all methods and the upper bound method with the full training data, we train 100 epochs with an unweighted cross-entropy loss function. Similar to [38], we apply the "poly" learning rate strategy: the initial learning rate is 0.01 and is multiplied by (1 − iter/max_iter)^0.9. To accelerate the calculation of the probability attention module, the input of the difficulty branch is resized to 80×60 and 86×86 for CamVid and Cityscapes.
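The "poly" schedule referenced above reduces to a one-liner:

```python
def poly_lr(base_lr, it, max_it, power=0.9):
    """'Poly' learning-rate schedule: decays from base_lr to 0 over max_it iterations."""
    return base_lr * (1.0 - it / max_it) ** power
```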
Evaluated Methods
We compare DEAL with the following methods. Random is a simple baseline method. Entropy and QBC are two uncertainty-based methods. Core-set and VAAL are two representation-based methods. DEAL (DS) and DEAL (DE) are our methods with different acquisition functions.
- Random: each sample in D u is queried with uniform probability.
- Entropy (Uncertainty): we query samples with max mean entropy of all pixels. [13] and [20] have verified this method is quite competitive in image classification and segmentation tasks.
- QBC (Uncertainty): previous methods designed for semantic segmentation, like [4,11,22,23], all use a group of models to measure uncertainty. We use the efficient MC dropout to represent these methods and report the best performance out of both the max-entropy and variation-ratio acquisition functions.
- Core-set (Representation): we query samples that can best cover the entire data distribution. We use the global average pooling operation on the encoder output features of Deeplabv3+ and get a feature vector for each sample. Then the k-Center-Greedy strategy is used to query the most informative samples, and the distance metric is the l2 distance according to [14].
- VAAL (Representation): as a new state-of-the-art task-agnostic method, the sample query process of VAAL is totally separated from the task learner. We use this method to query samples that are most likely from D u and then report the performance with our task model.
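The k-Center-Greedy strategy used by the Core-set baseline can be sketched as below (a simplified reading of [14]; feature extraction is assumed done elsewhere, and the function name is ours):

```python
import numpy as np

def k_center_greedy(X_pool, X_labeled, m):
    """Greedily pick m pool indices, each maximizing the minimum l2 distance
    to the already-covered set (labeled samples plus earlier picks)."""
    dists = np.linalg.norm(
        X_pool[:, None, :] - X_labeled[None, :, :], axis=2).min(axis=1)
    picked = []
    for _ in range(m):
        j = int(dists.argmax())            # farthest-from-coverage point
        picked.append(j)
        d_new = np.linalg.norm(X_pool - X_pool[j], axis=1)
        dists = np.minimum(dists, d_new)   # the new pick now covers the pool too
    return picked
```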
Experimental Results
The mean Intersection over Union (mIoU) at each AL stage (10%, 15%, 20%, 25%, 30%, 35%, and 40% of the full training set) is adopted as the evaluation metric. Every method is run 5 times and the average mIoUs are reported.
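For reference, mIoU averages the per-class intersection-over-union; a minimal sketch (our own, skipping classes absent from both prediction and ground truth):

```python
import numpy as np

def miou(pred, gt, num_classes):
    """Mean Intersection over Union across classes present in pred or gt."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```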
(a) AL in CamVid (b) AL in Cityscapes Fig. 6: DEAL performance on CamVid [21] and Cityscapes [8]. Every method is evaluated by the average mIoU of 5 runs. The dashed line represents the upper performance we can reach compared with the full training data.
Results on CamVid Fig. 6(a) shows results on a small dataset CamVid. Both DEAL (DS) and DEAL (DE) outperform baseline methods at each AL stage.
We can obtain a performance of 61.64% mIoU with 40% of the training data, about 95% of the upper performance with the full training data. Entropy can achieve good results at the last stage. However, it is quite unstable and depends heavily on the performance of the current model, so at some stages it performs poorly and is exceeded by Random. On the contrary, DEAL (DS) behaves better with the difficulty attention. QBC has a more stable growth curve as it depends less on a single model. Representation-based methods like VAAL and Core-set behave much better at earlier stages such as 15% and 20%. However, Core-set lags behind later while VAAL still works well. These results also suggest that data diversity is more important when the entire dataset is small.
Results on Cityscapes Fig. 6(b) shows results on a larger dataset, Cityscapes. The budget is 150 and all methods have more stable growth curves. When the budget is sufficient, Entropy can achieve better performance than the other baseline methods. Consistently, with semantic difficulty, both DEAL (DS) and DEAL (DE) outperform the other methods. Table 2 shows the per-class IoU for each method at the last AL stage (40% training data). Compared with Entropy, our method is more competitive on the difficult classes, such as pole, traffic sign, rider and motorcycle. For representation-based methods, the gap between Core-set and VAAL is more obvious, suggesting that Core-set is less effective when the input has a higher dimension, and VAAL is easily affected by the performance of the learned variational autoencoder, which introduces more uncertainty into the active learning system. If we continued querying new data, our method would reach the upper performance of the full training data with about 60% of the data.

Ablation Study
Effect of PAM
To further understand the effect of PAM, we first visualize the attention heatmaps of the wrong predictions in Fig. 7. For each row, three points are selected from the error mask and marked as {1, 2, 3} in Fig. 7(a,b,c). In the first row, point 1 is from road and misclassified as bicyclist; we can observe that its related classes are bicyclist, road and sidewalk in Fig. 7(d). Point 2 is from buildings and misclassified as bicyclist, too. Point 3 is from sign symbol and misclassified as tree; we can also observe its related semantic areas in Fig. 7(f). Then we conduct an ablation study by removing PAM and directly learning the semantic difficulty map without the attention among pixels. The qualitative results are shown in Fig. 8(a). Basically, without the long-range dependencies, pixels of the same semantic can learn quite different scores because the learned score of each pixel is more sensitive to its original uncertainty. Combined with PAM, we can learn a smoother difficulty map, which is friendlier to the annotators since the aggregated semantic areas are close to the labeling units in the real scenario. Also, we compare this ablation model with our original model on Cityscapes in Fig. 8(b). DEAL with PAM achieves a better performance at each AL stage. DEAL without PAM fails to find samples with more balanced semantic difficulty, which makes it attain a lower entropy of class distributions.
Branch Position
In this section, we discuss the branch position of our framework. In the method above, the semantic difficulty branch is simply added after the segmentation branch. One might worry that if the segmentation branch performs poorly, the difficulty branch will perform poorly too, and that the two tasks should be separated earlier to learn independent features. Thus, we modify our model architecture and branch out the two tasks earlier, at the border of the encoder and decoder of the Deeplabv3+ [38] architecture, as shown in Fig. 9(a). We also compare the AL performance on Cityscapes with both architectures in Fig. 9(b). The performance of the modified version is slightly worse than the original version but still competitive with the other methods. However, this modified version requires more computation, while our original version is simple yet effective and can be easily plugged into any segmentation network.
Conclusion and Future Work
In this work, we have introduced a novel Difficulty-awarE Active Learning (DEAL) method for semantic segmentation, which incorporates the semantic difficulty to select the most informative samples. For any segmentation network, the error mask is first generated from the predicted segmentation result and GT. Then, with the guidance of the error mask, the probability attention module is introduced to aggregate similar pixels and predict the semantic difficulty maps. Finally, two acquisition functions are devised: one combines the uncertainty of the segmentation result with the semantic difficulty; the other is based solely on the difficulty. Experiments on CamVid and Cityscapes demonstrate that the proposed DEAL achieves SOTA performance and can effectively improve the performance of hard semantic areas. In future work, we will explore more possibilities with the semantic difficulty map and apply it to region-level active learning for semantic segmentation.
Complete chloroplast genome sequence of medicinal plant Potentilla lineata Treviranus (Rosaceae) from Yunnan, China
Abstract Potentilla lineata Treviranus is a common medicinal plant distributed in the southwest of China. In this study, we sequenced the complete chloroplast genome sequence of P. lineata and investigated its phylogenetic relationship within Potentilla and related genera. The total length of the chloroplast genome is 156,985 bp. The genome exhibits a typical quadripartite structure containing a pair of IRs (inverted repeats) of 25,974 bp separated by a small single copy (SSC) region of 18,859 bp and a large single copy (LSC) region of 86,178 bp. The chloroplast genome contained 112 genes, including 78 protein-coding genes, 30 tRNA genes, and four rRNA genes. The phylogenetic analysis indicates that Potentilla L. is a monophyletic taxon that is sister to Potaninia Maxim. Potentilla lineata is closely related to P. centigrana Maxim. in the present study. This study provides a reference for the phylogeny and species evolution of the genus Potentilla and related genera.
Potentilla lineata Treviranus is classified in a complex genus of about 500 species, namely Potentilla in the Rosaceae (Xu and Podlech 2010). The species is mainly distributed in Yunnan, Guizhou and Tibet provinces of China. As a common medicinal plant, the root of P. lineata is used to treat enteritis, diarrhea, and traumatic bleeding, and as an anti-inflammatory (Manju et al. 2006). The plant is widely used by the Bai and Dai people for its medicinal value (Duan 2017; Jiang 2017). Studies on this species have mainly focused on pharmacological effects and active ingredients (Chen et al. 2013; To grul et al. 2015). However, the chloroplast genome of P. lineata has not been analyzed. Here we sequenced the complete chloroplast genome of P. lineata and reconstructed a phylogenetic tree with other taxa classified in the Rosaceae to determine its evolutionary history.
Fresh and cleaned leaf materials of P. lineata were sampled from Dali, Yunnan, China (N 25°52′38.91″, E 100°00′27.25″). A voucher specimen (No. ZDQ021) was collected and deposited at the Herbarium of Medicinal Plants and Crude Drugs of the College of Pharmacy, Dali University (Professor Dequan Zhang, zhangdeq2008@126.com). The total genomic DNA was extracted using an improved CTAB method (Yang et al. 2014) and sequenced on an Illumina HiSeq 2500 (Novogene, Tianjin, China) platform using a paired-end (2 × 300 bp) library. The raw data were filtered using Trimmomatic v.0.32 with the default settings (Bolger et al. 2014). The resulting paired-end reads were assembled into a circular contig using GetOrganelle.py (Jin et al. 2020). The complete chloroplast genome was annotated with Geneious 8.0.2 software (Kearse et al. 2012) and the annotated results were obtained after manual correction.
The annotated chloroplast genome was submitted to GenBank with the accession number MT677853. The complete chloroplast genome was 156,985 bp. It consists of a typical quadripartite structure with a pair of IRs (inverted repeats) of 25,974 bp that are separated by a small single copy (SSC) region of 18,859 bp and a large single copy (LSC) region of 86,178 bp. The overall GC content is 36.7%, while 34.4%, 30.5%, and 42.7% were calculated for the LSC, SSC, and IR regions, respectively. The chloroplast genome contains 112 genes, including 78 protein-coding genes, 30 tRNA genes, and four rRNA genes. Of these, 17 genes occur in duplicate in the inverted repeat regions; nine genes (rpl2, rpl16, rps12, rps16, rpoC1, ndhB, ndhA, petD, petB) contain one intron, while two genes (ycf3 and clpP) contain two introns.
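The quadripartite region lengths reported above can be cross-checked against the total genome size, since LSC + SSC + 2 × IR must sum to the full circle. A quick arithmetic check with the reported figures:

```python
# Region lengths reported for the P. lineata chloroplast genome (bp).
lsc = 86_178   # large single copy
ssc = 18_859   # small single copy
ir = 25_974    # one inverted repeat (present in two copies)

total = lsc + ssc + 2 * ir
# The four regions sum exactly to the reported genome length of 156,985 bp.
```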
For the phylogenetic analysis, 21 complete chloroplast genome sequences of Rosoideae were downloaded from the NCBI database. The sequences were aligned with MAFFT v.7.149 using the default settings (Katoh and Standley 2013). The nucleotide substitution model was determined by jModelTest v.2.1.7 (Darriba et al. 2012). A Bayesian inference (BI) analysis was performed with MrBayes v.3.2.6 (Ronquist et al. 2012), with Crataegus kansuensis Wils. (No. NC_039374) designated as the outgroup (Figure 1). The phylogenetic analysis indicated that Potentilla L. is a monophyletic genus, which is consistent with the previous phylogenetic analyses by Faghir et al. (2014), Li et al. (2020) and Li et al. (2021). In this analysis, Potentilla is sister to Potaninia Maxim. Potentilla lineata was fully resolved in a clade with P. centigrana Maxim., P. suavis and P. hebiichigo. This complete chloroplast genome of P. lineata provides a reference for further phylogenetic studies and the evolution of species in Potentilla and related genera, as well as for conservation and utilization of the resources.
Disclosure statement
The authors declare no conflicts of interest and are responsible for the content.
Funding
This study was co-supported by National Natural Science Foundation
Data availability statement
The genome sequence data that support the findings of this study are openly available in GenBank of NCBI at (https://www.ncbi.nlm.nih.gov/) under the accession no. MT677853. The associated BioProject, SRA, and BioSample numbers are PRJNA737049, SRR14825541, and SAMN19698019 respectively.
Reducing Over-Utilization of Cardiac Telemetry with Pop-Ups in an Electronic Medical Record System
Non-invasive cardiac monitoring has well-established indications and protocols. Telemetry is often overused, leading to a shortage of tele-beds and increased hospital expenses. In some cases, patients are kept on telemetry longer than indicated because providers are unaware of its ongoing use. We investigated the effect of reminder pop-ups, incorporated into an electronic medical record (EMR) system, on minimizing the use of telemetry. Three regional hospitals implemented an electronic pop-up reminder for discontinuing the use of telemetry when no longer indicated. A retrospective analysis of data for patients on telemetry outside of the intensive care unit (ICU) was conducted, and comparisons were drawn between the pre- and post-implementation periods. A composite analysis of the number of days on telemetry was calculated using the Kruskal-Wallis test. With the implementation of the pop-up reminder, the median number of days on telemetry was significantly lower in 2016 than in 2015 (2.25 vs 3.61 days, p < 0.0001). Overutilization of telemetry is widely recognized, despite not being warranted in non-ICU hospitalizations. The implementation of a pop-up reminder built into the electronic medical record system reduced the overuse of telemetry by 37% between the two time periods studied.
Introduction
Cardiovascular disease causes significant morbidity and mortality in the United States [1]. Prompt recognition and treatment of arrhythmias among admitted patients is one way of reducing this burden. The American Heart Association (AHA) has published guidelines on the use of cardiac telemetry among non-intensive care unit (ICU) patients; however, it is often difficult to translate the guidelines into practice [2]. Furthermore, the risks of overusing telemetry can outweigh the benefits, causing harm to patients and increasing healthcare costs [2,3]. Ongoing healthcare reform warrants efficient and cost-effective healthcare practices. The overuse of telemetry often leads to a shortage of tele-beds and increased expenses [4]. The American Board of Internal Medicine (ABIM) does not recommend the use of telemetry monitoring outside the ICU without a continuation protocol [5]. As of 2009, a day of telemetric monitoring cost at least $1,400 per patient. In some cases, patients are kept on telemetry longer than indicated because providers are unaware of its ongoing use [6,7]. Several strategies have been studied to reduce the overuse of cardiac telemetry [6,8]. While there is a lack of consensus on the most effective method for reducing the overutilization of cardiac telemetry [9], we developed a continuous quality improvement strategy and investigated the effect of reminder pop-ups, in an electronic medical record (EMR) system, on the duration and utilization of cardiac telemetry.
Materials And Methods
Three regional hospitals in Wichita, Kansas (using the same EMR system) implemented an electronic pop-up for discontinuing the use of telemetry when it was no longer indicated. The pop-up appeared after a patient had been on telemetry for 48 hours and alerted the clinician to either continue telemetry or discontinue it if no longer required. The study design was submitted to the Institutional Review Board (IRB) in Wichita, which determined that the study did not constitute human subjects research; therefore, IRB approval was not required. The board issued a waiver letter and all ethical guidelines were followed.
Results
The median number of days on telemetry in the pre-implementation period was 3.61. This was reduced to 2.25 days following the implementation of electronic record pop-ups. This reduction was statistically significant (2.25 vs 3.61 days, p < 0.0001; Table 1, Figure 1).
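The 37% reduction quoted in the Discussion follows directly from the two medians; a quick check of that arithmetic:

```python
pre_median = 3.61   # median days on telemetry before the pop-up reminder
post_median = 2.25  # median days on telemetry after implementation

relative_reduction = (pre_median - post_median) / pre_median
# relative_reduction is about 0.377, i.e. the ~37 % reduction reported.
```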
Discussion
The implementation of a pop-up reminder built into the EMR system reduced telemetry overuse at our institution by 37% between the two time periods studied. Cardiac telemetry use has been expanding exponentially for the past 30 years. Initially, it was recommended for cardiac and sometimes non-cardiac patients in the ICU [10]. With time, its use has expanded to patients in non-ICU settings [8]. Overutilization of telemetry is widely recognized, despite not being warranted in non-ICU hospitalizations.
Guidelines for in-hospital cardiac monitoring have been published by the American College of Cardiology [11]. However, cardiac telemetry continues to be overused due to physicians' non-adherence to the guidelines or unawareness of how many days telemetry has been continued [6,7,10]. The number of days on telemetry is a clear component of increased hospital expenses. The ABIM does not recommend the use of telemetry outside of the ICU setting without a continuation protocol [5]. Clinician education is critical for understanding the risks and benefits of the use of telemetry in non-ICU patients. Furthermore, patient satisfaction has been shown to increase with a decreased number of alerts and cardiac alarms [2].
Conclusions
The implementation of electronic pop-up reminders reduced the duration of telemetry in non-ICU settings. Because this was a time-bound analysis, we were unable to show whether this change persists beyond the project, or to ascertain its effect on individual physicians' practices regarding discontinuation of telemetry. The financial savings from early telemetry discontinuation are inferred, and further research would be needed to confirm them.
Conflicts of interest:
The authors have declared that no conflicts of interest exist.
Some aspects about the spatial dependence index for variability of soil attributes
The main purpose of this article was to evaluate the behavior and relationship of the range and the components of the SDI (Spatial Dependence Index), in general and as a function of field factors such as soil type, type of attribute and soil layer. This evaluation was based on real data collected from national journals. It was noticed that the range parameter, in general and for the different field factors, presented positively asymmetric behavior. The components of the SDI showed approximately symmetrical behavior. The SDI captures the behavior of the range more intensely (the spatial variability behavior in the horizontal direction of the semivariogram) and, less intensely, the behavior of the contribution and sill parameters (the spatial dependence behavior in the vertical direction of the semivariogram). Thus, the SDI describes the behavior of spatial dependence over the full set of aspects of the semivariogram.
The soil attributes, determinants of agricultural productivity and of its impacts on the environment, vary in space and/or time (CAVALLINI et al., 2010). Usually, the evaluation of this variability is done using the experimental semivariogram and, consequently, the estimation of the parameters of the semivariogram model, which, in most cases, are the nugget effect, the contribution, the sill and the range (SEIDEL & OLIVEIRA, 2013, 2014).
The range describes the spatial variability in the horizontal direction of the semivariogram, and this parameter can be measured in meters, independently of the attribute under study. It indicates the distance up to which the sampled points are correlated (VIEIRA et al., 1983; RODRIGUES et al., 2012; AQUINO et al., 2014). The nugget effect, the contribution and the sill allow the evaluation of the spatial variability in the vertical direction of the semivariogram; they depend, however, on the unit of measure of the attribute under study, making a general evaluation directly based on their numerical values impossible. BIONDI et al. (1994) and CAMBARDELLA et al. (1994) proposed relating these vertical parameters of the semivariogram to generate dimensionless measures of spatial dependence.
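As a concrete illustration, the spherical model in its standard textbook form combines all four parameters; the sketch below (our own function names, not from the paper) also includes the nugget-to-sill ratio C0/(C0+C1) that CAMBARDELLA et al. (1994) use to classify the degree of spatial dependence:

```python
def spherical_semivariogram(h, nugget, contribution, rng):
    """Standard spherical model:
    gamma(h) = C0 + C1 * (1.5*(h/a) - 0.5*(h/a)**3) for 0 < h <= a,
    and gamma(h) = C0 + C1 (the sill) for h > a."""
    if h <= 0:
        return 0.0  # by convention gamma(0) = 0; the nugget is the limit h -> 0+
    if h >= rng:
        return nugget + contribution  # beyond the range: the sill
    ratio = h / rng
    return nugget + contribution * (1.5 * ratio - 0.5 * ratio ** 3)

def nugget_to_sill_ratio(nugget, contribution):
    """CAMBARDELLA et al. (1994) classification: <= 0.25 strong,
    0.25-0.75 moderate, > 0.75 weak spatial dependence."""
    return nugget / (nugget + contribution)

# With C0 = 1, C1 = 3 and a = 10 m, the semivariance reaches the sill (4)
# at the range, and the nugget-to-sill ratio 0.25 marks strong dependence.
sill_value = spherical_semivariogram(10.0, 1.0, 3.0, 10.0)
ratio = nugget_to_sill_ratio(1.0, 3.0)
```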
Recently, SEIDEL & OLIVEIRA (2014, 2016) proposed a dimensionless spatial dependence index (SDI) to evaluate the spatial variability contemplating all the semivariogram parameters under spherical, exponential and Gaussian model adjustments. SEIDEL & OLIVEIRA (2014) carried out theoretical and simulation studies that showed good performance of the SDI in the measurement of spatial variability.
However, a deeper assessment of the components of the SDI still needs to be done. Thus, the objective of this article is to evaluate the behavior and the relationship of the range and the components of the SDI, in general and as a function of some field factors (different soil types, types of attribute and soil depths). According to CHERUBIN et al. (2014), several studies show that spatial variability depends on field factors such as the soil type or the type of attribute under study. ZANÃO JÚNIOR et al. (2010) comment that soil depth also influences spatial variability. However, more field factors, such as relief type or land use and management, should be considered in future studies.
The data were obtained from 25 articles, published from 2006 to 2015 and made available on the SciELO Brazil portal, reporting applications of geostatistics to soil attributes, as used and cited in SEIDEL & OLIVEIRA (2016). From the papers, the following information was collected for each attribute: semivariogram model adjusted, estimated range (a), estimated nugget effect (C0), estimated contribution (C1), estimated sill (C = C0 + C1), maximum sample distance (MD), soil type, type of attribute (chemical, physical or mineralogical) and soil layer (depth). In this way, it was possible to obtain the SDI and its two components. The mathematical expression of the SDI and its respective classification of the spatial dependence are detailed in SEIDEL & OLIVEIRA (2016).
A total of 587 attributes were surveyed; for 275 of them (corresponding to 46.85% of the total) the spherical model was adjusted, for 123 (20.95%) the exponential model was used, for 102 (17.38%) the Gaussian model was adjusted, and for 87 (14.82%) the pure nugget effect model was used. MONTANARI et al. (2008) reported that several surveys of the spatial variability of soils predominantly used spherical and exponential models. As an estimation, it is possible to highlight the spherical model as the one of greatest use. This model predominates in studies in Soil Sciences (GREGO & VIEIRA, 2005; GONTIJO et al., 2012). In addition, considering the exponential, Gaussian and spherical model adjustments (500 attributes), regardless of the type of attribute studied (whether chemical, physical or mineralogical), the spherical model predominates in semivariogram adjustments (54.37% for chemical attributes, 53.82% for physical attributes, 64.44% for mineralogical attributes).
From the sampling of the range values and the calculation of the components of the SDI, statistical analyses of this information were carried out through descriptive measures. The Spearman correlations between the range, the SDI components and the SDI were calculated and tested (p < 0.05). Data analysis procedures were performed in the R software (R CORE TEAM, 2016).
The sample distribution of the range shows positive asymmetry, with an asymmetry coefficient equal to 2.12 and a median range equal to 39 m (Table 1). The same behavior of positive asymmetry is evident for the range under the different semivariogram models (spherical, exponential and Gaussian), soil types, chemical and physical attributes, and soil layers (Table 1).
Considering the median of the range as a comparative measure, the ultisol (median of 48.69 m) has a higher value than the oxisol (median of 18.90 m) (Table 1). This greater spatial continuity may be due to the shape of the landscape in which these soil classes occur (CAMARGO et al., 2010, 2013; SILVA JUNIOR et al., 2012; RESENDE et al., 2014). MONTANARI et al. (2008) noticed higher values of the range for chemical soil attributes in areas with linear pedoform (ultisol) compared to areas with convex pedoform (oxisol).
Another important observation from Table 1 is that the range has higher median values for deeper soil layers. This can be explained by the fact that deeper layers show less spatial discontinuity of the soil attributes, since they are less susceptible to the effects of surface management and thus maintain their original characteristics of homogeneity (LEÃO et al., 2007).
Chemical and physical attributes have higher median values for the range (medians of 40.00 m and 38.00 m, respectively) in comparison with mineralogical attributes (median of 30.50 m) (Table 1). The value of the range influences the quality of the estimation (COSTA et al., 2014) and has application in sampling planning (ZANÃO JÚNIOR et al., 2010). AQUINO et al. (2014) and OLIVEIRA et al. (2015a, b) demonstrated the applicability of the range in the definition of sampling densities for future studies. As regards one of the SDI components, a slightly symmetrical behavior was observed in general, with a median of 0.51 and an asymmetry coefficient of 0.19 (Table 1). Also, this component showed a higher median value for the spherical semivariogram model; the ultisol had a higher median value when compared to the oxisol; both soil depths had very close median values for this component; and physical attributes had a higher median value when compared to chemical and mineralogical attributes (Table 1).
The other SDI component showed approximately symmetrical behavior in general, with an asymmetry coefficient equal to 0.09 and a median of 0.67, and likewise for all semivariogram models, except for the Gaussian model, which presented negative asymmetry, with an asymmetry coefficient equal to −0.34 (Table 1).
Considering the different soil types, it can be seen from Table 1 that this component has higher median values for all the types of attributes. Its values ranged from 0.20 to 1.00, showing that this component does not tend to present low values in practice. Based on this, the measure of BIONDI et al. (1994), which is used together with the classification of CAMBARDELLA et al. (1994), tends to consider more attributes as having greater spatial dependence (that is, with a degree of spatial dependence tending to be stronger) than would occur in reality. In addition, it is emphasized that, under the Gaussian model, there is a greater tendency toward strong classifications of spatial dependence when compared with the other two semivariogram models.

Based on Table 2, it can be seen that, in general and for the different field factors, the correlations between the range and the SDI are moderately or strongly positive, and all significant (p < 0.05). The SDI has a positively asymmetric sampling distribution (SEIDEL & OLIVEIRA, 2016), similar to that of the range, which explains the results obtained for these correlations. The correlations between one of the components and the SDI are all strongly positive and significant (p < 0.05); however, the correlations involving the other component are only weak or moderate (and most of the time positive). Furthermore, it can be seen from Table 2 that only weak or moderate negative correlations occurred between the range and this component, which means that where the range has greater values, lower values of this component may occur. The results obtained from the correlations indicate that the SDI is able to capture, in an intense way, the behavior of the range and of the components when evaluating spatial dependence, also evidencing the behavior in the horizontal direction of the semivariogram. This is an important feature of the SDI, which differentiates it from other indexes in the literature since, according to FERRAZ et al. (2012), the range has a considerable role in determining the limit of spatial dependence. In addition, the SDI also succeeds in capturing, in a less intense way, the vertical behavior of the semivariogram for different soil depths. This result differs from those obtained by GREGO & VIEIRA (2005) and ZANÃO JÚNIOR et al. (2010), who observed differences in spatial dependence for different soil depths, and indicates that describing spatial variability based only on the vertical parameters of the semivariogram is not the most adequate method. In general, the SDI describes spatial dependence by capturing both the aspect described by the range (the horizontal parameter of the semivariogram) and the aspect described by the nugget effect, contribution and sill (the vertical parameters of the semivariogram).
Modeling of micrometeoric streams under the action of the high-power laser pulse on multicomponent polycrystal rocks
The paper presents the analysis of the interaction of high-power laser pulses with multicomponent polycrystal rocks. Experiments were completed on the laser facility "Saturn" with intensities of 10^10-10^13 W/cm^2. Structural analysis of the materials from the spall crater and from the plasma flame shows significant differences. The article demonstrates experimental results on the moment of spall formation depending on the thickness of the irradiated target. The scale of damage to aluminium foil 6 µm thick at the rear side of the target is illustrated when it is hit by andesite fragments from a spall crater.
Introduction
Today, due to LTS program research, laser radiation interaction with metals and dielectrics is well studied over a wide range of intensities. However, the mechanism of laser radiation interaction with multicomponent polycrystalline rocks at I ~ 10^10-10^13 W/cm^2 has been studied insufficiently and is poorly described in the literature.
It is of great interest for a variety of technological applications, in particular for laser simulation of micrometeorite impact. A number of articles in the scientific literature address laser simulation of micrometeorite impact by acceleration of metallic foils [1]. Although it is known that most micrometeorites are particles of stony dust [2], laser simulation of micrometeorite impact on such materials is poorly described in the literature.
The problem of laser simulation of the corresponding micrometeorite impacts therefore deserves thorough study. Andesite can be used as the main material: its composition is close to that of meteorites and Martian dust [3].
Taking these facts into consideration, the aim of this research was a detailed study of the interaction of high-power laser radiation with multicomponent polycrystalline rocks and the development of a laser-plasma source of µm-sized solid-density particles with speeds close to those in micrometeorite flows.
Experimental results
The experiments were carried out at the high-power facility "Saturn", described in detail in [4]. The laser pulse was formed at the output of the final amplifier with energy E_L ~ 20-50 J, FWHM duration τ_FWHM = 30 ns and divergence θ ~ 1.5·10^-4 rad. The focal spot diameter D_L varied from 100 µm to 300 µm. The study of the crater formation process requires accurate parameters of the plasma flame and the magnitude of the ablation pressure generating a shock wave in the unevaporated part of the target.
The research of the crater formation process requires accurate parameters of the plasma flame and the magnitude of the ablation pressure, generating a shock wave in the unevaporated part of the target. 1 To whom any correspondence should be addressed. A simplified analytical model of a stationary spherical expansion of plasma, proposed by P. Mora [5], was used to calculate the flame parameters. It is supposed that the main mechanism of laser radiation is the inverse bremsstrahlung.
It should be pointed out that the plasma flame temperature can be used as a parameter for model verification. The method of absorbers (filters) was used to determine the temperature of the resulting dense plasma. As test objects for the verification method, Al targets were used.
The data obtained for aluminium and andesite targets [6] agree within the experimental error with the results calculated on the basis of Mora's plasma expansion model [5]. It is shown that Mora's model can be used to obtain reliable data on the plasma ablation pressure and temperature. Theoretical calculations according to Mora's model give an ablation pressure of about 4 Mbar for a laser radiation intensity I = 10^13 W/cm^2.
Crater parameters in the studied range of intensity can be described by means of an analytical model suggested by S. Yu. Guskov [7]. Crater formation during the laser interaction with the target is the result of phase transformations of the compressed and heated substance behind the shock wave front.
The calculation results [4] have demonstrated that the estimated crater diameter and depth for aluminium and andesite targets agree with the experimental data. The model allows estimation of the shock wave speed. According to the experiments with 2 cm thick andesite targets, the shock wave speed is about 5 km/s and it decreases before reaching the free surface [4].
To study the shock wave exit onto the rear side, a series of shots at thin andesite targets [8] was made. In all of our experiments with thin targets, a spall crater was formed on the rear side of the target. In some cases, a through channel was formed inside the spall crater; as a rule, it was formed on targets with a thickness of less than 350 µm.
To study the material ejected from the plasma flame and the spall crater, the deposition of target material onto special silicon plates was studied [4]. The structure of the deposits was analyzed with an electron microscope, whereas the chemical composition was studied using an X-ray spectrometer.
There is a large number of liquid fragments in the material ejected from the crater into the front space (figure 1). At the same time, there are only solid fragments (figure 2) in the material ejected from the spall. This fact proves that the temperature of the material from the spall crater is lower than the melting temperature. Sizes of the fragments vary from 0.5 µm to 50 µm [8]. To study the moment of spall formation in thin targets, the scheme presented in figure 3 was developed. A thin stripe of 6 µm thick Al foil is placed into the base holes and is adjusted by contacts C1 and C2. The target is fixed on the base close to the stripe.
When the foil is torn, the oscilloscope records a potential shift. The switching on of diode D1 is synchronized with the arrival of the laser pulse at the front surface of the target. Figure 4 shows signals registered at the tearing of the Al foil during irradiation of 400 µm thick andesite targets. The signal of the foil tearing on one of the targets is shown with negative magnitude for better visualization. One should note the satisfactory repeatability of the results obtained at the same target thickness and the same laser intensity. Figure 5 shows the dependence of the spall formation moment on the thickness of the andesite targets. The spall is formed when the shock wave reaches the free surface of the target and a rarefaction wave is generated. Prior to the moment of spall formation, the shock wave travels a distance of about the target thickness. The speed of the shock wave for targets with thicknesses from 400 to 700 µm is W ≈ 7 km/s. This interpretation is consistent with the calculation data [4]. Taking into consideration the doubling of the material speed when the shock wave reaches the free surface, and using the strong shock wave approximation, the expansion velocity of fragments from the spall crater is estimated as 4W/(γ+1) ~ 10 km/s. If the thickness of the target is less than 400 µm, a through channel can be formed in the spall crater. The glow at the rear side of the target was registered during the experiments with through-channel formation [8].
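The ~10 km/s fragment velocity quoted above follows from the free-surface doubling of the strong-shock particle velocity, u = 4W/(γ+1). A quick check (the adiabatic index γ = 5/3 is our assumption; the paper does not state the value used):

```python
def spall_expansion_velocity(shock_speed, gamma):
    """Strong-shock particle velocity 2W/(gamma+1), doubled at the free
    surface, giving u = 4W/(gamma+1)."""
    return 4.0 * shock_speed / (gamma + 1.0)

v = spall_expansion_velocity(7.0, 5.0 / 3.0)  # km/s
# With W = 7 km/s and gamma = 5/3 this gives 10.5 km/s, consistent
# with the ~10 km/s stated in the text.
```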
To show the scale of surface damage when andesite fragments from the spall crater hit it, a piece of 6 µm thick aluminium foil was fixed 2 mm and 6 mm from the rear side of the target. In figure 6 one can see the traces of damage to the foil induced by the fragments. The sizes of the craters in the foil vary from 1-20 µm (left in figure 6) to 2 mm (right in figure 6).
Conclusion
Thus, it has been demonstrated that under the action of a laser pulse of moderate intensity on multicomponent polycrystal rocks, µm-sized fragments are formed at the rear sides of the targets. These fragments are equivalent in composition, temperature, size, and speed to the micrometeorite flows in space.
The "white globe appearance" (WGA): a novel marker for a correct diagnosis of early gastric cancer by magnifying endoscopy with narrow-band imaging (M-NBI)
Background and study aims: Although magnifying endoscopy with narrow-band imaging (M-NBI) is useful for the diagnosis of gastric mucosal lesions, differentiating between early cancer (EC) and low grade adenoma (LGA) remains a challenge. During M-NBI examination, we have noted the presence of a small, white lesion with a globular shape underneath cancerous gastric epithelium, and have termed this endoscopic finding the “white globe appearance” (WGA). The aim of this study was to determine whether or not the WGA could be an endoscopic marker for distinguishing EC from LGA. Methods: We retrospectively analyzed both the M-NBI scans and resected specimens of a total of 111 gastric lesions from 95 consecutive patients. Our main outcome was a difference in the prevalence of the WGA in EC and LGA. Results: The prevalence of the WGA in EC and LGA was 21.5 % (20 /93) and 0 % (0 /18), respectively (P = 0.039). The sensitivity, specificity, positive predictive value, and negative predictive value for differentiating between EC and LGA, according to the presence of the WGA, were 21.5, 100, 100, and 19.8 %, respectively. Conclusion: A positive WGA in a suspicious lesion on M-NBI would be an adjunct to the M-NBI diagnosis of possible EC because the specificity and positive predictive value of the WGA for differentiating between EC and LGA were extremely high. The WGA could be a novel endoscopic marker for differentiating between EC and LGA.
Introduction
Because magnifying endoscopy with narrow-band imaging (M-NBI) can clearly visualize both the gastric subepithelial microvascular architecture and the microsurface structure [1], it is useful for the diagnosis of gastric mucosal lesions [2-4]. However, differentiating between cancer and adenoma remains a challenge [3,5]. Follow-up without endoscopic treatment for low grade adenoma (LGA) is permitted because the risk of progression from LGA to gastric cancer is relatively low [6]. During M-NBI examination, we have noted the presence of a small, white lesion with a globular shape (< 1 mm) underneath cancerous gastric epithelium. It is invisible under nonmagnifying endoscopy. Additionally, this finding is more clearly visualized with NBI than with white-light imaging and is rarely detected in noncancerous lesions. We have termed this endoscopic finding the "white globe appearance" (WGA). By careful histological investigation, some of the WGA visualized with M-NBI was found to correspond to intraglandular necrotic debris (IND) within markedly dilated neoplastic glands, suggesting it as a possible histological marker specific for cancer [7]. Accordingly, this study was undertaken to determine the accuracy of the WGA as an endoscopic marker for gastric cancer.
Study design and patients
This observational study was conducted at a single tertiary referral center in Japan, as part of the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) program [8].
In accordance with the Declaration of Helsinki, the institutional review board of Ishikawa Prefectural Central Hospital approved this study, and written informed consent was obtained from all subjects. We retrospectively reviewed both the M-NBI images and resected specimens of a total of 122 gastric lesions from 106 consecutive patients, who had undergone preoperative M-NBI examination and lesion resection by endoscopic submucosal dissection (ESD) between July 2013 and January 2014 at our hospital. ESD was principally indicated for gastric cancer under the following conditions: differentiated intramucosal adenocarcinoma without ulceration, regardless of size; differentiated intramucosal adenocarcinoma with ulceration and ≤ 3 cm in size; and undifferentiated intramucosal adenocarcinoma without ulceration and ≤ 2 cm in size. These conditions were determined by preoperative biopsy and/or endoscopy. Therefore, 20 gastric adenomas were followed up without endoscopic treatment. Baseline characteristics and endoscopic and histopathological data were reviewed by means of medical records. We excluded 3 lesions for which histopathological diagnoses based on the resected specimens were benign and 8 lesions for which M-NBI findings were at a low magnification or out of focus, leaving 111 lesions from 95 patients suitable for final analysis (Fig. 1).
(Doyama Hisashi et al. White globe appearance as a novel endoscopic marker for gastric cancer … Endoscopy International Open 2015; 03: E120-E124)
Endoscopy system and setting
We used an upper gastrointestinal magnifying endoscope (GIF-H260Z, Olympus Medical Systems, Tokyo, Japan), a video processor (EVIS LUCERA Olympus CV-260SL, Olympus Medical Systems), and a light source (EVIS LUCERA Olympus CLV-260SL, Olympus Medical Systems). The structure enhancement of the endoscopic video processor was set to B-mode level 8 for M-NBI. The color mode was fixed at level 1. To obtain stable endoscopic images at maximal magnification, a black, soft hood (MAJ-1990, Olympus Medical Systems) was mounted at the tip of the endoscope prior to examination.
Endoscopic definitions and investigation of WGA
The WGA was defined as a small, white lesion with a globular shape (< 1 mm) present underneath the gastric epithelium and identified during M-NBI examination (Fig. 2). Criteria for a positive WGA were less intense peripheral brightness than in the center (reflecting its globular shape) and the presence of overlying microvessels, because the WGA lies underneath the gastric epithelium and the subepithelial microvessels. The presence or absence of the WGA in early cancer (EC), LGA, or non-neoplastic background mucosa (BM) was retrospectively assessed using M-NBI images taken at maximal magnification rate by an experienced endoscopist (H. D.) who was not aware of the histology. If the WGA inside a neoplasm was recognized with a demarcation line between the neoplastic lesion and the surrounding mucosa, this distribution was termed "marginal."
Histopathological investigation
Endoscopically resected specimens were extended on boards with pins and fixed in 10 % formalin for 24 hours. After fixation, all resected specimens were cut into 2- to 3-mm-thick longitudinal slices. These were embedded in paraffin and stained with hematoxylin-eosin. Postoperative histopathological diagnosis was performed by two pathologists and the results were double-checked for all cases. Histopathological diagnoses were made with reference to the revised Vienna classification [9]. For the purposes of this study, we defined the revised Vienna category 3 as LGA and the revised Vienna categories 4 and 5 as EC, reclassifying all lesions into LGA and EC groups. The EC group was subclassified into differentiated (intestinal) and undifferentiated (diffuse) types.
Histological definitions and investigation of IND
IND was defined as eosinophilic material with necrotic epithelial fragments within the lumen of a dilated gland (Fig. 3 a-c). There was segmental necrosis of the glandular lining, characterized by cytoplasmic vacuolization and dark nuclei [7]. The presence or absence of IND was retrospectively assessed using resected specimens by an experienced pathologist (S. T.) who was unaware of the endoscopic findings.
Outcome measurements
Outcomes were: (1) a difference in the prevalence of the WGA in EC and LGA; (2) the prevalence of the WGA in BM; (3) a correlation between the presence of the WGA and IND; and (4) clinicopathological characteristics of EC associated with the WGA.
Statistical analysis
Continuous variables were compared using Student's t-test. Categorical variables were compared using the χ2 test, or Fisher's exact test when the expected values were less than 5. P < 0.05 was considered statistically significant. All analyses were performed using statistical software (JMP 11, SAS Institute Inc., Cary, NC, USA). (Baseline characteristics are summarized in Table 1.)
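The categorical comparisons above (χ2, or Fisher's exact test when expected counts fall below 5) can be illustrated with a self-contained two-sided Fisher's exact test. This is a generic textbook implementation sketched for illustration, not the JMP procedure the authors actually used:

```python
from fractions import Fraction
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed table.
    """
    r1, r2 = a + b, c + d          # row totals
    c1 = a + c                     # first column total
    n = r1 + r2
    denom = comb(n, c1)

    def prob(x):
        # exact probability of a table with x in the top-left cell
        return Fraction(comb(r1, x) * comb(r2, c1 - x), denom)

    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(c1, r1)   # feasible top-left values
    return float(sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs))
```

Using exact `Fraction` arithmetic avoids the floating-point comparison pitfalls that plague naive implementations of the "sum all tables no more likely than observed" rule.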
Prevalence of the WGA in EC, LGA, and BM
The prevalence of the WGA in EC and LGA was 21.5 % (20/93) and 0 % (0/18), respectively (P = 0.039) (Table 2). The sensitivity, specificity, positive predictive value, and negative predictive value for differentiating between EC and LGA according to the presence of the WGA were 21.5 % (95 % confidence interval [CI] 13.7-31.2 %), 100 % (95 %CI 84.7-100 %), 100 % (95 %CI 86.1-100 %), and 19.8 % (95 %CI 12.2-29.4 %), respectively. In addition, the prevalence of the WGA in BM was 0 % (0/111).
[Fig. 3 a, c caption: The IND within dilated neoplastic glands was present near a lateral margin of the cancer. However, the IND was not located just underneath the cancerous gastric epithelium but in the deeper part of the lamina propria in the cancerous tissue. This IND was 0.18 mm in size.]
Clinicopathological characteristics of EC with WGA
There was no correlation between the presence of the WGA and tumor size, macroscopic type, ulcerative findings, histological type, or depth (P > 0.05). The prevalence of the WGA in the lower third of the stomach was significant (P = 0.0046) (Table 4). Especially in 0-IIa lesions, it was difficult to distinguish EC from LGA among non-ulcerative, differentiated, mucosal cancers; here the prevalence of the WGA was 17.9 % (5/28), similar to that of other ECs. The average number of WGA in the 20 lesions with WGA was 2.3 (range 1-5). Also, 46 (95.8 %) of the 48 WGA were judged as having a marginal distribution. There was no correlation between the number of WGA and tumor size, macroscopic type, ulcerative findings, histological type, or depth (P > 0.05).
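The accuracy figures reported here follow directly from the 2x2 counts (20 WGA-positive of 93 EC; 0 WGA-positive of 18 LGA). A quick sketch using the standard definitions reproduces them:

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Standard 2x2 diagnostic metrics, returned as fractions in [0, 1]."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives / all diseased
        "specificity": tn / (tn + fp),   # true negatives / all non-diseased
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Counts from the study: WGA-positive EC = 20, WGA-negative EC = 73,
# WGA-positive LGA = 0, WGA-negative LGA = 18.
m = diagnostic_metrics(tp=20, fn=73, fp=0, tn=18)
# sensitivity 20/93 ~ 21.5 %, specificity 18/18 = 100 %,
# PPV 20/20 = 100 %, NPV 18/91 ~ 19.8 % -- matching the reported values.
```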
Discussion
The WGA was evident in EC lesions but not in LGA. The specificity and positive predictive value of the WGA for differentiating between EC and LGA were extremely high, although the sensitivity and negative predictive value were low. We have often encountered cancerous lesions with low-confidence predictions, as demonstrated by a prospective multicenter M-NBI study [3]. Certainly, the presence of the WGA adds to the specificity of M-NBI diagnoses with low-confidence predictions. A positive WGA in a suspicious lesion on M-NBI would be an adjunct to an M-NBI diagnosis of possible EC. Watanabe et al. reported that no IND was detected in cases of LGA, and that IND was detected in only 1 of 52 cases in Vienna category 1 or 2 [7]. We also detected no WGA in LGA or BM. The WGA might therefore show promise in differentiating between cancer and gastritis. We had predicted that the WGA would correlate exactly with the finding of IND just underneath the gastric epithelium; however, this was not entirely the case, for two likely reasons. First, not all IND can be identified when examining from the surface of the mucosa with M-NBI (Fig. 3 c). Second, since the horizontal extent of neoplasms needs to be determined during the preoperative M-NBI examinations, we mainly photographed their margins. Accordingly, we may have missed any WGA located in a nonmarginal distribution with respect to the neoplasm. Although differentiated and submucosal cancers showed the highest incidence of IND in the report by Watanabe et al. [7], there was no correlation between the presence of the WGA and histological type or depth in our study. We suggest that the reason is that the cancers included in this study were limited to intramucosal lesions, or submucosal lesions showing only microinvasion into the submucosa, because we recruited patients who were candidates for ESD. We also found that the WGA in EC tended to demonstrate a predominantly marginal distribution.
By electron microscopy, there is a spectrum of apoptotic-necrotic phenomena in ordinary (nonmucinous) adenocarcinomas of the gastrointestinal tract, ranging from crypt lumen apoptosis (mainly because of apoptosis of adenocarcinoma cells) to "dirty necrosis" and IND (mainly because of necrosis-like cell death) up to tissue necrosis [11]. The apoptotic-necrotic phenomena may predominate near the margin of EC. The newly developed optical technology of M-NBI might allow endoscopic visualization of the spectrum of apoptotic-necrotic phenomena in EC presenting with the WGA. Our study had several limitations. First, it was retrospective. Second, we found a number of WGA-negative lesions that were positive for IND. As we focused on the margins of neoplasms during preoperative M-NBI, we may have underestimated the prevalence of the WGA. Its prevalence in BM may also have been underestimated. Third, the number of submucosal and undifferentiated cancers was small because the subjects of this study were limited to patients who had been candidates for ESD. Fourth, the prevalence of the WGA in focal gastritis is unclear. It is as yet unclear whether the presence of the WGA can contribute to the improvement of real-time diagnostic performance in clinical practice. We have begun a prospective study to avoid these limitations (UMIN 000013650).
In conclusion, M-NBI made it possible to visualize the WGA in the stomach. Like IND, which is a possible histological marker specific for EC, the WGA could be a novel endoscopic marker for differentiating between EC and LGA. | 2017-08-28T03:30:49.383Z | 2015-01-14T00:00:00.000 | {
"year": 2015,
"sha1": "fc8cdeb0f602b8e84c1f723e4fb8b9e9c6e66ddc",
"oa_license": "CCBYNCND",
"oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.1055/s-0034-1391026.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f2badfb6076b388dcd49a9d71c271d7e579372d7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
152282439 | pes2o/s2orc | v3-fos-license | Nose Heat: Exploring Stress-induced Nasal Thermal Variability through Mobile Thermal Imaging
Automatically monitoring and quantifying stress-induced thermal dynamic information in real-world settings is an extremely important but challenging problem. In this paper, we explore whether we can use mobile thermal imaging to measure the rich physiological cues of mental stress that can be deduced from a person's nose temperature. To answer this question we build i) a framework for monitoring nasal thermal variable patterns continuously and ii) a novel set of thermal variability metrics to capture a richness of the dynamic information. We evaluated our approach in a series of studies including laboratory-based psychosocial stress-induction tasks and real-world factory settings. We demonstrate our approach has the potential for assessing stress responses beyond controlled laboratory settings.
INTRODUCTION
As humans are homeothermic, our internal temperature is strongly linked to numerous physiological and psychological mechanisms. Given this, human thermal patterns have been widely explored as a way to improve the understanding of our bodies and minds. Amongst temperature monitoring channels, thermal imaging has been shown to be highly effective. In medical applications, for example, it has been used to detect pathological symptoms and disorders in a contactless manner [1]- [3]. Recent advances in commercial thermal imaging technologies have made it possible to use this approach in human computer interaction (HCI) beyond the highly constrained situation of a medical environment [4], [5]. In particular, it has been used to investigate physiological thermal signatures for assessing a person's psychological states, in particular, affective states [6]- [12].
Among the different physiological activities, vasoconstriction and vasodilation patterns underneath a person's skin can be captured through thermal imaging. Such patterns cause increases or decreases in blood flow, contributing to skin temperature changes. Such physiological activities are influenced not only by ambient temperatures (e.g. local cooling or warming) [13], but also by mental stress [14]. Hence, studies have attempted to capture stress or mental workload-induced thermal directional changes, particularly from facial regions of interest (ROIs), while controlling environmental temperature [7]- [9], [15], [16]. Amongst other facial areas, the nose tip has been shown to be the only consistently reported region where we can monitor significant decreases in its temperature under stress conditions, indicating that the nasal thermal drop could be a stress indicator.
Despite this promising finding and a range of low-cost, small thermal cameras already available on the market, these observations have not attracted much attention in real-world applications. This is mainly because the majority of thermal imaging-based studies have drawn upon visual inspections (e.g. manually selecting a pair of thermal images where a person's nose is situated in the same position to compare nose temperature) and also imposed motion constraints (e.g. using a chinrest), which is highly cumbersome [8], [9], [11]. To address these restrictions, [16] used a traditional ROI tracking algorithm; however, participants were still required to keep their head still. This inflexibility is one of the main challenges for real-world applications and keeps thermal imaging from being used in unconstrained, mobile settings.
The use of advanced ROI tracking methods built for mobile thermal imaging in unconstrained situations, such as Thermal Gradient Flow [5], can help to address this challenge, as was preliminarily explored for a very short period of time in [6]. However, the nose tip is an area that is difficult to track on thermal images. For instance, Fig. 1 shows examples of thermal images of a person's face collected in an office, with the nose tip area as an ROI. The shape of the local facial skin region is often blurred. This does not provide a sufficient number of key facial features, which are generally required for ROI tracking [5]. This is due to the homeothermic metabolism and relatively low thermal conductivity of a person's skin (here, the nose tip), which results in a very narrow range of temperature distributions across the nose tip. Also, the thermal patterns on the nose tip are not consistent over time: they are highly variable in comparison with other facial temperatures [9]. This leads to our first research question: how can we continuously monitor thermal variable patterns of the nose tip in unconstrained settings?
A second challenge is how to enrich the quality of information derived from affect-induced skin temperature changes. In the literature, the temperature difference or slope between two time points on a facial ROI (positive or negative direction and the amplitude of the temperature change, e.g. an average of -0.56 °C from the nasal ROI after exposure to stressors in [7]) has been used as the dominant metric. However, a person's skin temperature can be influenced by many factors. These include different physiological phenomena (e.g. nose temperature can be affected not only by vasoconstriction/dilation but also by breathing [6]), other types of affective states (e.g. nasal temperature drops under fear conditions [10]), context (e.g. social contact increases nasal temperature [17]), and environmental temperature [6]. All of these have the potential to induce temperature variability rather than just a consistent drop in temperature (i.e. a simple directionality change). Hence, using temperature difference as a single metric in unconstrained settings could be too sensitive to factors beyond stress-induced physiological reactions. Indeed, incongruent results have been reported by studies where this metric was mainly used for monitoring affect-induced temperature changes on facial areas (beyond the nose); for example, Engert et al. [8] reported a significant drop in chin temperature whereas Veltman and Vos [9] did not. As for the nose, consistent findings support the fact that mental stressors can cause decreases in its temperature [7]-[9], [15], [16]; however, this thermal directional information by itself is hardly useful in quantifying the amount of stress and has not been tested in unconstrained settings. This suggests a need to explore and build a richer set of metrics to compensate for the limited capability of a single metric, which is likely to lose important physiological information about one's mental stress.
This leads to a follow-up research question: can we build a rich set of metrics to quantify variations in skin temperature?
In the next section, we propose computational methods to address the challenges mentioned above. We first introduce a strategy to track nose temperature continuously. Following this, we propose a set of metrics to capture richer information from the temperature. These new metrics aim to help capture thermal variability rather than only the thermal directionality on which the existing metric is based [7]- [10], [15], [16].
II. PROPOSED METHOD
Fig. 2 illustrates the pipeline of the proposed method, which contains two parts: i) continuous monitoring of nasal variable temperature patterns, and ii) extraction of thermal variability metrics.
A. Continuous Monitoring of Nasal Thermal Variability
Using a thermal camera, an image of a user's face (including the nose) is captured, and the nose must be extracted. However, given the blurred and inconsistent shape of the nose tip (skin) on a thermal image in Fig. 1, the conversion of a thermal image to a thermal-gradient map (proposed in [5]) is also unlikely to yield strong feature points from the area, as shown in Fig. 3-right (there is no facial feature inside the small ROI generally used for tracking the nose tip). Given this, we propose instead to select a larger ROI including the nose tip, where a more distinct graphical shape of the nose can be obtained for tracking, as visualized in Fig. 3-right. The selection of a larger ROI has the advantage that it can be tracked by advanced thermal ROI tracking algorithms. In this paper, we used Thermal Gradient Flow [5] (given its high performance) to continuously track the area and estimate the nose tip temperature. Accordingly, our first study investigates whether the use of a larger ROI (compared to a small ROI) affects the temperature measurements of the nose tip.
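Per frame, the tracked ROI is then reduced to a single temperature value by spatially averaging its pixels. A minimal sketch of that reduction, where the frame layout (2-D grids of temperatures) and the per-frame box format are assumptions made for illustration:

```python
def roi_mean_series(frames, boxes):
    """Spatial average of temperatures inside a tracked ROI, per frame.

    frames: list of 2-D temperature grids (list of rows of floats);
    boxes:  per-frame (top, left, height, width) tuples from the tracker.
    Returns a one-dimensional time series of mean ROI temperatures.
    """
    series = []
    for frame, (top, left, h, w) in zip(frames, boxes):
        pixels = [frame[r][c]
                  for r in range(top, top + h)
                  for c in range(left, left + w)]
        series.append(sum(pixels) / len(pixels))
    return series
```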
Following [7], [8], [16], we compute the spatial average of temperatures in the ROI from every single frame to obtain a one-dimensional time series of thermal data (Fig. 2). Even if an ROI tracking method achieves high tracking performance, temporary errors (i.e. tracking failure for a couple of frames) are likely to occur due to the blurred images that low-resolution, low-cost thermal imaging is prone to producing (e.g. lens-induced errors, calibration errors produced by a thermal camera [5]). Hence, the next step is to remove such outliers related to temporary tracking errors, as shown in Fig. 2-middle. This can be done by excluding values beyond a range computed from Tukey's hinges (g=1.5) [18], which have been widely used in outlier rejection. The range can be computed as [Q1 - g(Q3 - Q1), Q3 + g(Q3 - Q1)], where Q1 and Q3 are the first and third quartiles of the temperature distribution and g is Tukey's constant. To compute the range and remove the outliers, we use a sliding window. In the following studies, we set the length of this window to one third of the total length of each thermal data stream, with a minimum length of 30 s.
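The sliding-window Tukey rejection can be sketched as follows. Marking outliers as None (rather than a specific interpolation scheme) and centering the window on each sample are assumptions; the paper only fixes g=1.5 and the window length:

```python
from statistics import quantiles

def tukey_bounds(window, g=1.5):
    """Lower/upper bounds from Tukey's hinges for one window of samples."""
    q1, _, q3 = quantiles(window, n=4, method="inclusive")
    iqr = q3 - q1
    return q1 - g * iqr, q3 + g * iqr

def reject_outliers(series, win_len, g=1.5):
    """Mark samples outside the Tukey range of their sliding window.

    Returns the series with outliers replaced by None, to be dropped
    or interpolated downstream.
    """
    half = win_len // 2
    cleaned = []
    for i, x in enumerate(series):
        lo = max(0, i - half)                 # window roughly centered on i
        window = series[lo:lo + win_len]
        lower, upper = tukey_bounds(window, g)
        cleaned.append(x if lower <= x <= upper else None)
    return cleaned
```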
B. Thermal Variability Metrics
Studies have shown that nose tip temperature can be affected not only by vasoconstriction-related cardiovascular responses but also by respiratory activities [6], [9]. Based on this, we take two types of thermal variable signals from the tracked signal, as illustrated in Fig. 2-right: the nonfiltered signal (affected by both activities) and a low-pass filtered signal capturing the relatively slow vasoconstriction/dilation responses (cut-off frequency of 0.08 Hz, lower than the lower boundary of the expected respiratory range in healthy people [6]).
[Fig. 2 caption: The pipeline of the proposed method for continuous monitoring of dynamic temperature on the nose tip and thermal variability metrics: step a) selecting a larger ROI and tracking; step b) computing the average of temperatures within the ROI; step c) removing outliers with a sliding window; step d) extracting thermal variability metrics from both low-pass filtered (<0.08 Hz) and nonfiltered signals.]
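The 0.08 Hz low-pass step can be sketched with a single-pole IIR filter. The filter design and the sampling rate are assumptions here (the paper only states the cut-off frequency; the 9 fps in the usage note reflects the roughly 9 Hz frame-rate cap of low-cost FLIR One cameras):

```python
import math

def lowpass(series, fs, fc=0.08):
    """Single-pole low-pass filter with cut-off fc (Hz) at sample rate fs.

    A sketch only: the paper specifies the 0.08 Hz cut-off but not the
    filter design, so a first-order RC-style filter is assumed.
    """
    rc = 1.0 / (2.0 * math.pi * fc)   # time constant for the cut-off
    dt = 1.0 / fs
    alpha = dt / (rc + dt)            # smoothing factor in (0, 1)
    out = [series[0]]
    for x in series[1:]:
        out.append(out[-1] + alpha * (x - out[-1]))
    return out
```

For example, `lowpass(nasal_series, fs=9.0)` would suppress respiration-band oscillations while preserving the slow vasoconstriction/dilation trend.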
From both signals, we extract a set of metrics that can help capture stress-induced nasal thermal variability, including directional information. Here, we present four basic forms of metrics representing both thermal variability and directionality. The first is derived from the existing, widely used metric, temperature difference [7], [10], [11], [15], [16]: TD, the temperature difference between the data at the start and at the end. The second is a slope capturing the global thermal directional trend, calculated by linear polynomial fitting (used in [8], [9]). Inspired by HRV (Heart Rate Variability) metrics [19], two further basic metrics (STV and SDSTV in Table I) are used to capture physiological thermal variability. We apply the four metrics to both nonfiltered and low-pass filtered signals. As in [6], [20], per-person normalization (min-max feature scaling) is also considered in this work, so as to use signals in which interpersonal physiological differences are minimized. This leads to a set of sixteen metrics, as summarized in Table I, which together capture different perspectives of thermal variability beyond the thermal directionality that is still the main focus in the literature. The proposed set of metrics is investigated in our second study, where psychosocial stress-induction tasks are conducted. Finally, these metrics and the suitability of the approach are analyzed in a small third study on a manufacturing shopfloor in a real-life work setting.
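A sketch of the four basic metrics plus the per-person min-max normalization. The exact STV/SDSTV formulas are given in the paper's Table I and are not reproduced here, so successive-difference statistics are assumed for them; the dictionary keys are illustrative names, not the paper's notation:

```python
from statistics import mean, stdev

def thermal_metrics(series):
    """Four basic metrics over one session's nasal-temperature signal.

    TD and the linear-fit slope follow the text; "STV" and "SDSTV" are
    HRV-inspired variability metrics whose exact definitions are not
    given here, so successive-difference statistics are assumed.
    """
    n = len(series)
    td = series[-1] - series[0]                  # TD: end minus start
    t = list(range(n))
    tbar, ybar = mean(t), mean(series)
    slope = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, series))
             / sum((ti - tbar) ** 2 for ti in t))  # least-squares slope
    diffs = [b - a for a, b in zip(series, series[1:])]
    stv = mean(diffs)       # assumed: mean successive change
    sdstv = stdev(diffs)    # assumed: SD of successive changes
    return {"TD": td, "slope": slope, "STV": stv, "SDSTV": sdstv}

def minmax_normalize(series):
    """Per-person min-max scaling used before the *n-suffixed metrics."""
    lo, hi = min(series), max(series)
    return [(x - lo) / (hi - lo) for x in series]
```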
III. DATA COLLECTION STUDIES
The main aim of the data collection studies is two-fold: i) to systematically evaluate the use of a large ROI (versus a small one) to measure nose temperatures on thermal videos collected in carefully controlled situations, and ii) to verify the capability of the proposed metrics in quantifying mental stress in less constrained settings using sedentary stress-induction tasks. Studies were approved by the local University Ethics committee. Prior to data acquisition for each study, participants were given the information sheet and informed consent form. Fig. 4 shows the experimental setup for the data collection (the image was taken by a thermal camera). The aim was to systematically compare both nonfiltered and filtered thermal variable signals produced from the selection of a large ROI with reference signals from a small ROI containing the nose tip only. We also aimed to investigate the effect of breathing on measurements of nasal temperature through both ROIs. Hence, participants' mobility had to be carefully controlled to minimize the effects of motion-induced noise on nasal temperature measurements, even though the automated tracker was still used in the proposed method (the pipeline in Fig. 2). Following [9], we used a chin rest on which each participant rested in order to keep the position of his/her nose as still as possible.
A. Study I: ROI Coverage and Nasal Temperature (N=10)
10 healthy adults (aged 22-50 years, 4 females) participated in this study, which took place in a quiet lab room with no distractions (and no room temperature control). Thermal image sequences were recorded using a low-cost thermal camera (FLIR One 2G) integrated into a smartphone, which was placed in front of each participant (circa 50 cm).
B. Study II: Stress Induction Task using Mathematical Serial Subtraction (N=12)
In this study, we used the mathematical serial subtraction task (denoted as Math) [21] to induce mental stress. We invited 12 healthy adults (6 females, aged 18-60 years) from the subject pool service of University College London. Ambient temperature and participants' movements were not controlled.
This task was divided into a resting period, an experimental (Math Hard) session, and a control (Math Easy) session, as illustrated in Fig. 5. Before starting the experimental and control sessions, participants were asked to rest for 5 minutes. The experimental condition required the participants to repeatedly subtract a two-digit number from a four-digit number for 5 minutes. As introduced in [6], three external stressors were used to ensure the induction of psychosocial stress: social evaluative threats, time pressure, and unpleasant sound feedback for wrong answers. These stressors were introduced because they are often used in the literature to ensure stress induction, but also because the literature only compares a control versus a stressing condition without exploring the changes in temperature between two different levels of stress. In the control session, participants were asked to count down mentally, with the aim of inducing significantly lower stress levels. Both control and experimental sessions were counterbalanced. Between the sessions, participants were asked to take a break to fully recover from previous session effects. Every participant self-reported his/her perceived stress levels on a visual analogue scale (VAS, 10 cm) after each session. The thermal camera from Study I was used to collect participants' facial thermal videos.
A. Comparison between Small and Large ROIs
From Study I with 10 participants, we collected four sets (nonfiltered and filtered signals from both small and large ROIs) of thermal variable signals of 100 s. By resampling, we extracted 4000 samples (4 sets × 10 participants × 100 temperature samples from the resampled 100-s signals). The size of each chosen ROI was: i) height (M=8.6 pixels, SD=2.17) and width (M=8.5 pixels, SD=2.37) for the Small ROI (only on the nose tip); ii) height (M=16.2 pixels, SD=4.39) and width (M=23.7 pixels, SD=7.82) for the Large ROI (including the nose tip and its surrounding area). Fig. 6 shows an example of the four sets taken from one participant's thermal images (P2), which show decreasing nasal temperature (affected by a colder room temperature). Here, we correlated the nonfiltered thermal samples from the small and large ROIs and then tested correlations between the filtered samples from both ROIs. Overall, the data from the large ROI maintained a high correlation with the data from the small ROI (r=0.999, p<0.001 for both cases).
As the wide range of temperatures across all participants could inflate the correlation coefficients, we also looked at individual data and correlated each pair (small and large ROIs) for every participant. From this, we obtained 10 Pearson correlation coefficients from all 10 participants (we correlated each pair of 100 samples for each participant). As shown in Fig. 7
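The per-participant correlation check can be reproduced with a plain Pearson coefficient; the dict-keyed signal layout in the helper is an assumption made for illustration:

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def per_participant_r(small, large):
    """One r per participant: small-ROI vs. large-ROI resampled signals.

    small/large: dicts mapping participant id -> list of 100 samples
    (an assumed layout, not the authors' data format).
    """
    return {pid: pearson_r(small[pid], large[pid]) for pid in small}
```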
B. Effect of Stressors on Thermal Variability Metrics
For Study II, where we mainly aimed to investigate our proposed metrics during stress-induction tasks, we first analyzed self-reported perceived stress scores to test whether the harder arithmetic task with external stressors induced significantly higher stress levels than the control session and resting period. The scores are summarized as boxplots in Fig. 8. On the other hand, there was no significant effect of the session type on participants' VAS scores over the resting and easy math sessions (p=0.388), indicating that other components involved in the tasks (e.g. using a mouse to answer questions) did not significantly affect stress levels.
For testing the effect of session type on the proposed set of metrics in Table I, we gathered 576 metric values (12 participants × 3 sessions × 16 metrics) from the facial thermal videos of 12 participants. The ROI tracking for every video was successful (with no tracking failure). We tested the effect of session type on the proposed metrics using a one-way repeated measures ANOVA. Table II summarizes the statistical results and shows a significant effect of the session type on SDSTV from nonfiltered signals (F(2,22)=7.053, p=0.004, ηp²=0.391), STVn from nonfiltered, normalized signals (F(2,22)=3.619, p=0.044, ηp²=0.248), and TDLn from low-pass filtered, normalized signals (F(2,22)=5.575, p=0.011, ηp²=0.336), whilst TDn from nonfiltered, normalized signals and STVLn from low-pass filtered, normalized signals approached significance (TDn: F(2,22)=3.240, p=0.058, ηp²=0.228; STVLn: F(2,22)=3.362, p=0.053, ηp²=0.234). It should be noticed that the widely used existing metrics, TD and STV from nonfiltered signals, showed no significant difference across sessions (TD: p=0.255; STV: p=0.173), although they were generally negative for both Math Easy and Math Hard (i.e. temperature decreases) and positive for Rest (i.e. temperature increases). The results also confirmed that normalization helped minimize interpersonal physiological variances, in turn contributing to statistical significance (e.g. TDn and TD in Fig. 9).
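With two degrees of freedom for sessions and 22 for error, the design is 12 subjects × 3 sessions. The one-way repeated-measures F statistic reduces to a ratio of mean squares once subject variance is partialled out; a minimal stdlib sketch (not the authors' statistical package):

```python
from statistics import mean

def rm_anova_f(data):
    """One-way repeated-measures ANOVA F statistic.

    data[s][c] = metric value of subject s under condition c.
    Returns (F, df_condition, df_error).
    """
    n, k = len(data), len(data[0])
    grand = mean(x for row in data for x in row)
    subj_means = [mean(row) for row in data]
    cond_means = [mean(row[c] for row in data) for c in range(k)]
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)   # between-subject
    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)   # between-condition
    ss_err = ss_total - ss_subj - ss_cond                     # residual
    df_cond, df_err = k - 1, (k - 1) * (n - 1)
    f = (ss_cond / df_cond) / (ss_err / df_err)
    return f, df_cond, df_err
```

For 12 subjects and 3 sessions this yields df = (2, 22), matching the F(2,22) values reported above.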
The post-hoc paired t-test with Bonferroni correction on each metric showed that SDSTV for Math Hard with external stressors was significantly higher than for both Rest (p=0.046) and Math Easy (p=0.031). SDSTV patterns had a certain level of similarity with self-reported perceived mental stress scores (see Fig. 8 and Fig. 9, SDSTV), indicating the importance of capturing thermal variability in assessing mental stress level. TDLn showed a significant difference between Rest and Math Easy (p=0.04) but no significant differences between other pairs, indicating that nasal temperature could decline during cognitive activity despite lower levels of perceived mental stress. Interestingly, the decrease in nasal temperature during Math Hard was generally smaller than during Math Easy (Fig. 9, TDLn), indicating that metrics for thermal directionality may be less informative in quantifying the amount of mental stress. Other than these two, there were no significant differences for the remaining pairs, as shown in Fig. 9.
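The post-hoc comparison above can be sketched as follows. This illustrative snippet computes the paired t statistic and the Bonferroni-corrected significance threshold; converting t to a p-value additionally requires a Student-t CDF (e.g. from SciPy). Function names are ours.

```python
import math

def paired_t(a, b):
    """Paired t statistic for two repeated measurements of the same subjects."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n)

def bonferroni_alpha(alpha, n_tests):
    """Per-comparison significance threshold under Bonferroni correction."""
    return alpha / n_tests
```

With three pairwise session comparisons, each test is evaluated against alpha = 0.05 / 3 ≈ 0.0167 (equivalently, the reported p-values are multiplied by 3 before comparing with 0.05).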
V. CASE STUDY: STRESS MONITORING OF HUMAN WORKERS IN FACTORY WORKPLACE
A possible application of the mobile thermal imaging-based stress monitoring strategy is to use it as an assistive technology to improve the capability and wellbeing of the human workforce by tailoring work schedules or activities towards a worker's psychological needs [22]. These needs are likely to be reflected by the amount of mental load and stress that a worker experiences. For example, when the system detects that the factory worker feels very stressed while assembling, it can provide more detailed computer-aided assembly instructions (e.g. on a virtual reality headset).
To assess the capability and feasibility of the proposed method beyond laboratory situations, we visited a factory workplace (ROYO, Spain - furniture manufacturer) and had three skilled workers use our monitoring system in their routine furniture assembly tasks. To place a thermal camera near the face during the physical activity, we built a headset-shaped interface shown in Fig. 10, which can also be used together with wearable devices, e.g. the Microsoft mixed reality (MR) headset HoloLens in our case. With this setup, we collected 12 thermal videos from the workers during their normal tasks. Given written consent from the manufacturer and workers, we presented external stressors (social evaluative threats by performance observers and time pressure, as proposed in [6]) during their assembly activities, producing a further 12 thermal videos of the workers with higher stress levels self-reported on a 10cm-VAS (Fig. 11). We applied our proposed method to the collected thermal recordings to compute the three metrics that showed significant differences across tasks in the previous section (SDSTV, STVn, TDLn). As shown in Fig. 11, SDSTV values for the assembly task with external stressors (mean: 0.13, SD: 0.03) were generally higher than for data collected during the normal assembly task (mean: 0.11, SD: 0.02), which is similar to the results in Study II (see Fig. 9 left). For STVn and TDLn, the values were generally negative for both sessions (nasal temperature decreased). Interestingly, as in Study II, decreases in nasal temperature were weaker for the session with external stressors than for the other, although statistical analysis could not be carried out on this small set of participants.
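The exact metric definitions are given in Table I of the paper; as a rough, hypothetical sketch of how such metrics can be computed from a one-dimensional nasal-temperature trace, assume simplified definitions: TD as the difference between the mean temperatures of the final and initial windows, STV as the mean absolute successive difference, SDSTV as the standard deviation of successive differences, and z-normalization for the "n" variants. These definitions and function names are our assumptions for illustration only.

```python
import numpy as np

def thermal_metrics(signal, fs, win_s=1.0):
    """Assumed, simplified metric definitions over a 1-D nasal temperature
    trace sampled at fs Hz. Returns (TD, STV, SDSTV)."""
    w = int(win_s * fs)
    td = signal[-w:].mean() - signal[:w].mean()  # thermal directionality
    diffs = np.diff(signal)
    stv = np.abs(diffs).mean()                   # short-term variability
    sdstv = np.std(diffs, ddof=1)                # SD of successive differences
    return td, stv, sdstv

def znormalize(signal):
    """Per-person normalization to reduce interpersonal physiological variance."""
    return (signal - signal.mean()) / signal.std(ddof=1)
```

A declining nasal temperature yields a negative TD, matching the sign convention used in the results above.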
VI. DISCUSSION AND CONCLUSION
This paper has aimed to contribute to the body of work in thermal imaging-based affective computing [6]-[10], [23]-[26] by building methods that can continuously and reliably capture richer information about stress-induced nasal thermal variability through mobile thermal imaging.
Despite the importance of using a large ROI (the nose tip with its surrounding areas) as a surrogate for the nose tip alone in automatic motion tracking for real-world applications, the effect of different ROI coverages on nasal temperature measurements has not been explored in the literature [7]-[9], [15], [16]. In Study I, we found strong correlations between data derived from the large and small (nose-tip only) ROIs. This is highly encouraging in that it could be possible to deploy this approach in real-world contexts through more robust tracking techniques. We have also confirmed that respiration influences nasal temperature regardless of the ROI size; hence, respiratory and vasoconstriction/dilation activities need to be taken into account separately when monitoring nasal temperature.
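As a minimal sketch of the ROI-based measurement discussed above, the per-frame spatial average over a rectangular ROI can be computed as follows (the function name and the fixed rectangular-ROI assumption are ours; the paper tracks the ROI across frames rather than keeping it fixed).

```python
import numpy as np

def roi_mean_series(frames, roi):
    """Average temperature inside a rectangular ROI for each thermal frame.
    frames: (T, H, W) array of temperatures; roi: (top, left, height, width).
    Returns a length-T series of spatially averaged temperatures."""
    t, l, h, w = roi
    return frames[:, t:t + h, l:l + w].mean(axis=(1, 2))
```

Spatial averaging like this is exactly the summarization step whose limitation (loss of local thermal variance) is noted in the conclusion below.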
Capturing multiple aspects of physiological variability has been shown to be important in assessing mental stress levels; for example, multiple metrics have been proposed to capture heart rate variability related to mental states [19]. However, this has not yet been explored for physiological thermal variability. Therefore, we have proposed a novel set of metrics with the aim of capturing complex aspects of the physiological phenomenon beyond the dominantly used thermal directionality. Our studies investigated this in experimental settings comparing different levels of stress (control and experimental) against the baseline. The results showed that variability-based metrics were more helpful in quantifying stress levels than the typically used directionality metric. Indeed, whilst nasal temperatures generally declined when a person carried out either sedentary cognitive task, thermal directional cues from the two existing metrics (TD, STV in Fig. 9) were not sufficiently sensitive to mental stress levels. This was also the case with normalization (to consider physiological interpersonal variability) and with normalization plus low-pass filtering, although we observed increases in the effect sizes (TD: ηp²=.117; TDn: ηp²=.228; TDLn: ηp²=.336). On the other hand, one of the metrics capturing the variability of both vasoconstriction/dilation and respiration, SDSTV, was highly informative of mental stress, with high levels of similarity with perceived mental stress scores (Fig. 8 and 9). Furthermore, we also observed that nasal temperatures during the more stressful sessions (Math Hard + social pressure; Assembly + social pressure) decreased less than during the less stressful ones (Math Easy; Assembly only). This could be explained by the interaction of the different types of stress (mental overload vs. social pressure).
Social pressure (in our case, imposing social evaluative threats) may cause embarrassment, which has been shown to lead to an increase (rather than a decrease) in temperature [17]. Although such thermal responses to embarrassment have mainly been studied in interpersonal touch, these results still call for developing thermal metrics that capture patterns rather than direction, taking into account not only the physiological phenomena but also the interaction of multiple factors, if they are to be used in everyday life. All in all, the proposed set of metrics could compensate for this, highlighting the importance of capturing richer information about thermal variability so as to avoid collapsing its complex phenomena into a single metric.
Despite the above findings, there are still limitations opening up future research opportunities. First, we have used the spatial averaging method to summarize two-dimensional thermal information on the nasal area. This could lead to losing important vasoconstriction/dilation-induced local thermal variances [5]. Second, although our proposed metrics help quantify mental stress from multiple perspectives, even carefully hand-engineered metrics can hardly capture all complex aspects of signals, particularly in in-the-wild situations [6]. By addressing these limitations, we expect that the Nose Heat approach can support a wider range of real-world applications as an affective assistive technology.

Fig. 10. Stress monitoring of a worker in a manufacturing shopfloor, as part of the EU H2020 HUMAN project. A low-cost thermal camera was attached to an MR headset (Microsoft HoloLens).

Fig. 11. Boxplots (95% confidence interval) of three metrics (SDSTV, STVn, TDLn) from three factory workers across two types of tasks (Assembly, Assembly with external stressors; 24 instances in total).
"year": 2019,
"sha1": "3777a9f5042db6596b6b6c9c519ed2c941f757cb",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1905.05144",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "3777a9f5042db6596b6b6c9c519ed2c941f757cb",
"s2fieldsofstudy": [
"Engineering",
"Computer Science",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Execution Time of Optimal Controls in Hard Real Time, a Minimal Execution Time Solution for Nonlinear SDRE
Many engineering fields, such as automotive, aerospace, and the emerging challenges towards Industry 4.0, have to deal with Real-Time (RT) or Hard Real-Time (HRT) systems, where temporal constraints must be fulfilled to avoid critical behaviours or unacceptable system failures. For this reason, the estimation of a code's Worst-Case Execution Time (WCET) has received much attention, because in RT systems a fundamental requirement is to guarantee at least a temporal upper bound on the code execution, avoiding any drawbacks. However, until now there is no approved method to compute extremely tight WCET. Nowadays, indeed, HRT requirements are solved via hardware, using multi-core embedded boards that allow the computation of the deterministic Execution Time (ET). The availability of these embedded architectures has encouraged designers to look towards more computationally demanding optimal control techniques for RT scenarios, and to compare and analyze performances while also evaluating a tight WCET. However, this area still lacks deep investigation. This paper has the intent of analysing results regarding the choice between three of the most established optimal controls (LQR, MPC, SDRE), providing the first link between WCET analysis and control algorithm performance. Moreover, this work shows how it is also possible to obtain a minimal ET solution for the nonlinear SDRE controller. The results might be useful for future implementations and for coping with emerging Industry 4.0 challenges. Furthermore, this approach can be useful in the control systems engineering field, especially in the design stage for RT or HRT systems, where temporal bounds have to be fulfilled jointly with all the other application specifications.
I. INTRODUCTION
The embedded hardware systems used for control applications are becoming more and more performing to meet the growing needs, in terms of computational performance, required by the increasing complexity of the control tasks for which they are appointed in real-world applications. Many engineering fields, such as automotive and aerospace, and in recent years also the emerging challenges towards Industry 4.0, already use Real-Time (RT) or Hard Real-Time (HRT) systems to deal with event-triggered and time-triggered tasks [1]. This is because the increasing number of sensors and ever more sophisticated control algorithms require that the system cannot miss deadlines, especially when safety concerns are involved. The management and implementation of these algorithms are distributed among different tasks running on the embedded system, while RT or HRT temporal constraints must be fulfilled as well, to avoid critical behaviours or unacceptable system failures. Therefore, these HRT systems must be designed according to the resource adequacy policy, providing sufficient computing resources to handle both the specified worst-case load and the fault scenario. However, in order to achieve even more demanding control objectives, linear controllers have to be overcome; in most control applications, the PID is still the most used controller. For example, in an industrial environment, linear control algorithms are preferred to facilitate the PLC in saving resources, since it has to manage different architectures and respect different RT constraints. In the automotive field, by contrast, RT or HRT systems are used to guarantee the security and performance of vehicles, and nonlinear controllers are often used due to the need to handle complex dynamic models [2]. (The associate editor coordinating the review of this manuscript and approving it for publication was Choon Ki Ahn.)
For example, advanced models of two-wheeled vehicles, such as those proposed in [2] for an all-wheel-drive vehicle, or in [3] for a single-tracked vehicle, require at least seven coupled nonlinear equations of motion. Nevertheless, to achieve even more sophisticated Industry 4.0 objectives and to control new forthcoming systems in a non-trivial fashion, the widely used linear PID controller has to be overcome. To satisfy the temporal constraints and to guarantee correctly working systems, research on timing analysis of RT systems started many years ago and has focused on Response Time Analysis (RTA) [4], [5], which refers to the time that a message requires to be sent and received. Indeed, many works have been devoted to developing methods based on deterministic RTA for an estimation of the code ET. Even though many factors have been considered, it is still difficult to analyze a real system using those methods. One of the main reasons is the unpredictable randomness of the Central Processing Unit (CPU) scheduling, which cannot be accurately modelled. Because of these deficiencies, some degree of pessimism is usually added to the model as a price for indeterminism, and the Worst-Case Execution Time (WCET) is evaluated. The WCET has received much attention because, in RT or HRT critical systems, it is a fundamental requirement to at least guarantee a temporal upper bound for the code execution and avoid any drawbacks. However, until now, there is no approved method to compute extremely tight WCET, as shown in [6]. Nowadays, HRT requirements can also be solved via hardware, using multi-core embedded boards that allow the computation of the deterministic Execution Time (ET). In these boards, the running task is executed on a single thread, there is no CPU scheduling, and each routine has a dedicated core.
There are different companies specialized in HRT embedded boards; amongst them, the benefits of XMOS technology were pointed out a few years ago in a comparison with other boards [7]. This technology allows the Best Case Scenario (BCS) and the Worst Case Scenario (WCS) to be calculated with deterministic certainty using the native tools of the workbench. The availability of this embedded architecture has encouraged designers to look towards more computationally demanding optimal control techniques for RT scenarios and to compare and analyse performances while also evaluating a tight WCET. Indeed, this area still lacks in-depth investigation, while plenty of papers in the literature address the implementation of different control laws on single-core embedded architectures [8]. Therefore, the aim of this paper is primarily to compare optimal control techniques through performance and ET analysis. This was made possible by exploiting the characteristics of the HRT board, and a method has been proposed to evaluate, in a deterministic way, the execution time required by the controllers' implementations. Furthermore, once both the performance and the ET of the three most used optimal controls had been evaluated, in the light of this novel analysis it was possible to improve the execution time of the State-Dependent Riccati Equation (SDRE) nonlinear controller. The improvements are related to implementation issues, and a method is proposed to achieve a minimal ET solution, useful for control regulation problems, which decreases the computational effort required by the SDRE nonlinear control algorithm. Among the various control techniques, optimal controls ranging from linear to advanced linear up to nonlinear have been considered. In embedded systems, energy saving is crucial due to limited energy availability, and optimal controls allow control tasks to be accomplished while also taking energy minimization into account.
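On XMOS hardware the BCS/WCS bounds are obtained deterministically from the vendor toolchain; on a general-purpose machine one can only sample empirical best/worst execution times. The following hypothetical harness illustrates the difference in spirit: it estimates, rather than guarantees, the ET of a code section.

```python
import time

def observed_et(func, args=(), runs=1000):
    """Empirical best/worst observed execution time (in ns) over repeated runs.
    On a general-purpose OS this only *estimates* BCS/WCS; a deterministic
    bound requires dedicated hardware/tooling such as the XMOS workbench
    discussed above."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter_ns()
        func(*args)
        samples.append(time.perf_counter_ns() - t0)
    return min(samples), max(samples)
```

The gap between the two returned values gives a feel for the execution-time jitter that deterministic multi-core boards eliminate by construction.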
Moreover, optimal controls can handle systems with strong dynamic couplings, even with significant nonlinearities, where the system needs to be manoeuvred in a non-trivial fashion to achieve good performance. They can also be used for open-loop trajectory optimization, e.g. in robotics, where they are extremely useful for robot motion planning [9]. This paper focuses mainly on discrete-time implementations [10], these being of greater interest for practical applications. The discrete-time versions of the Linear Quadratic Regulator (LQR) [11], the Model Predictive Control (MPC) technique [12], and the nonlinear technique based on the State-Dependent Riccati Equation (SDRE) [13] have been considered and analyzed on an HRT board. The control laws have been tested first in a simulation environment and then applied to an experimental set-up using the HRT board based on XMOS technology. For this purpose, a motorized four-wheeled mobile robot has been built, equipped with an inverted pendulum mounted on top and joined by means of a rotational joint. A control law has to keep the pendulum in balance while having the cart track the desired position in space. The system's sensing is accomplished by an optical encoder measuring the angle of the pendulum and by a sonar distance sensor tracking the cart position w.r.t. a fixed obstacle.
The remainder of this paper is organized as follows: Section II presents an overview of related work; Section III introduces the linear and nonlinear discrete models of the cart-pendulum system in state-space form; Section IV recalls the theory behind the optimal control laws used to stabilize the system and provides some simulation results; in Section V a new minimal ET solution for the nonlinear SDRE controller is presented, while Section VI describes the hardware platform and the sensors used in the real control application. Section VII describes the implementation differences between the classic SDRE and the proposed minimal ET solution. The simulation and experimental results are shown in Section VIII, and Section IX concludes the paper.
II. RELATED WORK
In this section, the most relevant works are presented considering the issues discussed above. Computer science engineers have investigated many methods to achieve a deterministic estimation of the WCET. The determination of upper bounds on ET is a necessary step in Real-Time Computing (RTC), which refers to systems subject to RT constraints. Several works [14]-[17] highlight the importance of predictable timing behaviour for RT embedded systems. Some works propose static and probabilistic analyses of the WCET, as shown in [18]-[20], while the probabilistic response-time distribution of periodic tasks on a uniprocessor system was developed in [21]. The approaches can be either pragmatic, relying on simulation [17], or based on statistical methods [14]. Another possibility is to use estimation algorithms to obtain a prediction of future execution timing [15]. However, most existing analysis methods are based on deterministic RTA, and there is still no approved method to compute extremely tight WCET [14], [15]. Several tools and algorithms available on the market allow the ET of tasks to be calculated with some degree of pessimism. Nevertheless, these tools can ensure reliability in HRT systems only by considering an upper limit that exceeds the real WCET of every task [22], [23]. In the field of embedded systems, these problems are overcome using microcontrollers developed for HRT problems. Amongst the various producers, the XMOS family has been chosen because it was deemed suitable for the novel analysis carried out in this paper and introduced in the section above. Indeed, to the best of the authors' knowledge, there are so far no works focused on the WCET of the control algorithms used for RT or HRT applications that aim to improve the controllers' feasibility. In the literature, progress based on sensor selection in control design has received substantial interest in the last few years.
For example, in [24] a Linear-Quadratic-Gaussian (LQG) control is applied to a Maglev suspension, and the significance of achieving, also for RT scenarios, even more performing solutions that combine multi-objective optimization with an adequate sensor choice is pointed out. Regarding the control techniques, different works deal with optimal control laws and suggest a discrete implementation on microcontrollers [10]. Some works focus on evaluating the performance of these optimal controllers, mainly in simulation and sometimes also experimentally. In [25] the study was performed on a simulated model of an inverted pendulum, and it was shown that the LQR algorithm works better for stabilization problems and disturbance rejection, while the MPC controller is more suitable for the trajectory tracking task. In [26], two optimal control techniques, LQR and SDRE, were applied to a double inverted pendulum on a cart, investigated, and compared. In particular, simulations reveal the superior performance of the SDRE over the LQR under strongly nonlinear conditions, and some improvements that could be provided by neural networks, which compensate for model mismatch in the case of the LQR. In [27], by contrast, a DC motor speed controller was tested. The simulations show that the control efforts are lower for PID and LQR than for MPC, but the MPC outperforms them in reference tracking and constraint handling. In [28], assuming the same linear control parameters, the effectiveness of the SDRE control over the LQR control is demonstrated despite a more complex design. Besides, in [29] traditional optimal strategies such as LQR and SDRE are investigated when applied to the control of spacecraft formation flying.
In this case, the accuracy and the cost of maintaining the requested orbital configuration are evaluated, and the analysis shows that the SDRE allows the real dynamics to be better taken into account at an increasing level of approximation. However, despite the above-mentioned analyses and comparisons of these techniques, a timing analysis does not seem to have been investigated in depth. In this regard, this paper proposes a novel approach to evaluating control techniques through a thorough analysis of both the computational performance and the WCET of the optimal control algorithms, taking into account already well-known, validated implementations. The analysis, carried out on an RT experimental set-up (the cart-pendulum mobile robot depicted in Fig. 1), has the intent of showing results regarding the choice between three of the most established optimal controls, providing the first link between WCET analysis and control algorithm performance. Moreover, a novel minimal ET solution for the nonlinear SDRE controller is proposed. These results might also be useful for future implementations addressing Industry 4.0 challenges, where Multi-Agent Systems have to collaborate and exchange messages without missing deadlines in RT. Indeed, the tendency to bring the computation of CPSs towards edge computing, and therefore onto low-cost and lower-performance embedded devices, as described in [30], [31], requires the use of less computationally demanding control algorithms. Moreover, once an SDRE nonlinear controller has been designed based on the system model, it is possible to more easily develop a digital twin to monitor parameters and system conditions over a broader operating range and to apply a predictive maintenance procedure, as proposed in [32]. In any case, the proposed analysis could be useful in control systems engineering, especially during the control design stage for RT or HRT systems, where temporal bounds have to be fulfilled jointly with all the other application specifications.
III. CART-PENDULUM MODELS
In this section, the model of the cart-pendulum robotic system depicted in Figure 1 is derived. Firstly, the nonlinear model is presented, and it is shown how to obtain both the nonlinear State-Dependent Coefficient (SDC) factorization and the discrete-time model in state-space form, which have been used in the control implementation. The derived model is then linearized for developing the LQR and MPC linear control techniques as well. The different control laws will be recalled in the next sections.
To obtain the above-mentioned discrete-time models, the nonlinear continuous-time dynamic equations for the cart-pendulum system in Figure 1 have been developed first. Then, the system has been rewritten in its state-space linear-like form for the subsequent nonlinear control implementation.
The meaning and description of the variables and physical parameters in Eq. (1) are listed in Table 1. The system's control input is the motor voltage, here denoted E; it actuates the electric motors connected to the wheels, whose dynamics generate the external forces and moments, here denoted F (see Fig. 1). The modelling details of the motor-cart interaction are reported in Appendix B.
B. SDC FACTORIZATION OF NONLINEAR CART-PENDULUM MODEL
The system (1) can be written in linear-like form, obtaining the SDC matrix. The system's state is defined as x_s = [x, ẋ, φ, φ̇]^T, the continuous-time system's SDC matrix is A^c_nl(x_s), the system is control-affine with input matrix B^c_nl and output matrix C_nl, while u = E is the control input. Therefore, the linear-like form of the system can be written as:

ẋ_s = A^c_nl(x_s) x_s + B^c_nl u,  y = C_nl x_s  (2)

The variables x, ẋ represent the cart's position and linear velocity, respectively, while φ, φ̇ are the pendulum's angular displacement and angular velocity, respectively. Appendix A explains how to derive an SDC matrix solution. The SDRE control technique has been chosen for consistency of analysis, the paper being focused on optimal control methods. By manipulating equations (2) following the recommendations given in Appendix A, the system matrices become (3), where for reasons of space the dependence of A^c_nl on the state x_s has been omitted, with s_3 = sin(x_3), c_3 = cos(x_3) and η_nl = (J_p + mL²)(M + m) − m²L² cos²(x_3). To ease the reading, the matrices A^c_nl(x_s), B^c_nl(x_s) can be arranged in the block form (4). In order to implement the nonlinear controller on the proposed board for computational performance and WCET analysis, the discrete-time nonlinear model of (3) is derived using the Euler method with sampling time T, giving rise to the following discrete-time model:

x_s[k+1] = A_nl(x_s[k]) x_s[k] + B_nl u[k]  (13)

with u[k] = E[k] and discrete-time matrices A_nl = I + T A^c_nl, B_nl = T B^c_nl (14), whose entries are of the form 1 + T a_ij on the diagonal and T a_ij, T b_ij elsewhere. In (14) the terms a_ij, b_ij are the elements of the continuous matrices A^c_nl, B^c_nl described in equations (5) to (12).
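The Euler discretization step used for (13)-(14) can be sketched as follows, assuming continuous-time matrices A, B evaluated at the current state (a NumPy-based illustration, not the authors' embedded implementation).

```python
import numpy as np

def euler_discretize(A, B, T):
    """Forward-Euler discretization, as in the text:
    A_d = I + T*A, B_d = T*B for sampling time T."""
    n = A.shape[0]
    return np.eye(n) + T * A, T * B
```

For the SDRE case, A would be the SDC matrix re-evaluated at each sampling instant before discretizing.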
C. DISCRETE-TIME CART-PENDULUM LINEAR MODEL
This subsection briefly introduces the discrete-time linear model used to design the linear optimal control strategies described in the following section. The model is obtained from the equations of motion (1) under the small-perturbation hypothesis φ ≈ 0 around the upright position, which yields the linearized system (15); it is then trivial to derive the discrete-time state-space model using the Euler method with sampling time T. It has the form:

x[k+1] = A x[k] + B u[k],  y[k] = C x[k]  (16)

with u[k] = E[k], where k is the sampling instant, and the terms a_ij, b_ij appearing in the matrices of (16) are the elements of the matrices A^c_nl, B^c_nl in (3) evaluated at the linearization point.
IV. OPTIMAL CONTROL LAWS
The present section briefly recalls the theory behind the three different optimal control techniques considered in this work to regulate the upright pendulum position of the cart-pendulum robotic system. The controllers have been used both in simulation and on the experimental set-up.
To ease the reading of the paper, some implementation details requiring a longer description are given in the appendices, while only the conceptual steps necessary for a clear understanding are presented below. Moreover, considering the scope of this work, different control features will be pointed out as they are useful for the subsequent performance analysis. We refer to the discrete-time versions of the three methods. The linear techniques, LQR and MPC, make use of the model (16). The nonlinear SDRE control uses the model (13).
A. THE DISCRETE-TIME LQR
The LQR design technique is well known in modern optimal control theory and has been widely used in many applications.
1) LQR FUNDAMENTALS
In LQR theory, given a stabilizable discrete-time linear system

x[k+1] = A x[k] + B u[k]  (24)

where x[k] ∈ R^n is the state vector and u[k] ∈ R^m the system's input, the linear optimal control problem is to determine the optimal feedback matrix K_opt ∈ R^{m×n}, where n and m are the dimensions of the state and input vectors respectively, such that u[k] = −K_opt x[k] minimizes a quadratic cost; we refer to (16) as the discrete-time linear plant model in (24). In this paper, the state-feedback controller is designed using the linear quadratic regulator and the discrete-time linear model of the system (16). Here, the infinite-horizon regulation problem has been solved to guarantee stability for the subsequent performance and WCET analysis. This solution leads to searching for the optimal control u_k which minimizes the following cost function:

J = Σ_{k=0}^{∞} (x_k^T Q_k x_k + u_k^T R_k u_k)  (26)

where Q_k, R_k are the state and input weighting matrices respectively; they are symmetric and positive definite, and k is the sampling instant. To ease the paper's readability, the subscript s in the state vector (x_s) has been omitted. It has been demonstrated that the optimal control input u_k is given by:

u_k = −(R_k + B^T P B)^{−1} B^T P A x_k = −K_opt x_k  (27)

where the matrix P is the solution of the Discrete Algebraic Riccati Equation (DARE) problem:

P = A^T P A − A^T P B (R_k + B^T P B)^{−1} B^T P A + Q_k  (28)

Then, once the DARE has been computed off-line, the linear optimal control input is obtained as well.
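As an illustration of the offline DARE solution and the resulting feedback gain, the following sketch iterates the Riccati recursion to a fixed point. This simple value-iteration scheme is our illustrative choice; production code would typically use a dedicated solver such as `scipy.linalg.solve_discrete_are`.

```python
import numpy as np

def dare_iterate(A, B, Q, R, iters=500):
    """Solve P = A'PA - A'PB (R + B'PB)^{-1} B'PA + Q by fixed-point
    iteration, starting from P = Q (offline computation for regulation)."""
    P = Q.copy()
    for _ in range(iters):
        P = A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(
            R + B.T @ P @ B, B.T @ P @ A) + Q
    return P

def lqr_gain(A, B, Q, R):
    """Feedback gain K such that u[k] = -K x[k] is optimal."""
    P = dare_iterate(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
```

For a stabilizable pair (A, B), the closed-loop matrix A − B K is guaranteed to have all eigenvalues strictly inside the unit circle.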
2) LQR FEATURES
In the conventional LQR design method, the DARE problem (28) is solved using numerical or iterative methods. For regulation problems, the DARE is solved once. Although the LQR provides a simple solution, its application is limited to linear systems or to a linear approximation of nonlinear systems. Linear approximations are quite simple to handle in regulation problems, but for trajectory tracking the LQR finds its limits when dealing with non-negligible nonlinearities. Often a solution can be found by linearizing the system at a subset of points along the whole trajectory, but this is time-consuming with respect to other techniques involving nonlinear controllers [39].
B. DISCRETE-TIME MPC
MPC has become a widely used control technique in recent decades, especially in industrial applications, because of its versatility and its capability to perform industrial process optimization. The linear MPC approach has been considered here to provide a common comparison framework. A comprehensive theory of MPC can be found in [12], but it is out of the scope of this paper. Thus, the main implementation steps are recalled here, while more details are given in Appendix C.
1) MPC FUNDAMENTALS
For the discrete-time MPC implementation, we refer to the discrete-time plant model (16). The MPC theory is based on the extended model (53) reported in Appendix C, derived by defining an augmented state x_e[k], wherein the increment Δu[k] becomes the new input to be controlled in place of u[k]. How to derive the extended model and all the related mathematical manipulations is detailed in Appendix C. As for the other optimal controllers, the optimal control input is obtained by minimizing a cost function J, defined as follows:

J = (R_s − Y)^T (R_s − Y) + ΔU^T R̄ ΔU  (29)

where the first term is related to minimizing the error between the predicted output Y and the set-point R_s, while the second term ΔU^T R̄ ΔU penalizes the size of ΔU. Indeed, R̄ = r_ω I, and r_ω is used as a tuning parameter for closed-loop performance. The optimal ΔU minimizing the cost function J is

ΔU = (Φ^T Φ + R̄)^{−1} Φ^T (R_s − F x_e[k])  (30)

which is the state-feedback control law within the framework of predictive control, where the meaning of the matrices F and Φ is derived from the extended-model formulation given in Appendix C.
2) MPC FEATURES
The optimal input ΔU of equation (30) contains the control moves within the receding horizon window (N_c). However, only the first sample of this sequence needs to be implemented, i.e., Δu[k_i], while the rest of the sequence is ignored. When the next sample period arrives, the most recent measurement is taken to form the state vector x_e[k_{i+1}] for the calculation of the new control sequence:

Δu[k_i] = K_y r(k_i) − K_mpc x_e[k_i],

where K_y and K_mpc are obtained from the first row of (Φ^T Φ + R̄)^{−1} Φ^T applied, respectively, to the set-point term and to the prediction matrix F. This procedure, when iterated in real time, gives rise to the receding horizon control law. Moreover, using the optimal increment Δu[k_i] implies that a discrete integrator is embedded in the closed loop to derive the actual u[k] for the plant; the predicted optimal control input at time instant k_i is obtained as

u[k_i] = u[k_i − 1] + Δu[k_i]. (31)

This integration acts as a low-pass filter on the controlled quantity, making its transitions smoother. The MPC key feature to be highlighted is the versatility of the control input implementation (31). Indeed, the derived implementation is valid both for stabilization and for trajectory tracking problems. This means that, in terms of computational effort and WCET of the code, having a constant set-point or one that changes over time does not affect the design and the computational effort required by the MPC control algorithm. Indeed, in (31) K_y is a constant weighting matrix multiplied by the reference value r(k_i) at instant k_i. Thus, at each subsequent instant, a change in the set-point value is taken into consideration, producing the respective optimal control input u[k + 1].
C. THE DISCRETE-TIME SDRE
The SDRE strategy has become popular within the control community over the last decade: it provides an effective model-based method for nonlinear feedback control synthesis, allowing nonlinearities in the system states while additionally offering good design flexibility through state-dependent weighting matrices. Several papers have addressed the design and implementation issues related to the non-uniqueness of the SDC matrix factorization [33], [34], also for trajectory tracking control problems [35]. It has been demonstrated that, with a correct SDC factorization, strong nonlinear behaviours can be handled as well [36]. However, devising a new SDC matrix design is beyond the scope of this paper; for this reason a common approach has been used, as proposed in [37], and it is discussed in more detail in the dedicated Appendix A.
1) SDRE FUNDAMENTALS
The SDRE technique recalled here is based on the SDC parametrization of the plant model obtained above in (3), which represents a linear-like state-space form of the nonlinear system. The cost function to be minimized with respect to the control u[k] is:

J = Σ_k ( x[k]^T Q(x) x[k] + u[k]^T R(x) u[k] ). (32)

One of the key advantages offered by the SDRE is the trade-off between control effort and state errors, which can be achieved by tuning the weighting matrices Q(x) and R(x). These weighting matrices can be chosen to be either constant or state-dependent. However, in our application Q and R are assumed to be constant matrices, both to simplify the implementation and because constant matrices proved sufficient for the required closed-loop performance. In this case, taking into account the finite-horizon problem, the sub-optimal control u[k] is given by:

u[k] = −(R + B_d^T P(x[k]) B_d)^{−1} B_d^T P(x[k]) A_d_nl(x[k]) x[k], (33)

where P(x[k]) is computed at each sampling time k as the solution of the following DARE:

P = A_d_nl(x)^T P A_d_nl(x) − A_d_nl(x)^T P B_d (R + B_d^T P B_d)^{−1} B_d^T P A_d_nl(x) + Q. (34)

Thus, unlike the linear case, the DARE has to be solved on-line at each control iteration.
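A minimal sketch of one SDRE update, assuming a toy pendulum-like SDC form (not the paper's identified cart-pendulum model) and a plain backward Riccati iteration for the on-line DARE:

```python
import numpy as np

def sdre_step(x, Q, R, n_iter=50):
    """One SDRE update for a toy pendulum-like system
    phi_dd = (g/l)*sin(phi) + u, Euler-discretized with Ts = 0.02 s.
    The exact linear-like SDC form uses sin(phi) = (sin(phi)/phi) * phi."""
    Ts, gl = 0.02, 10.0
    phi = x[0, 0]
    sinc = np.sin(phi) / phi if abs(phi) > 1e-9 else 1.0
    A = np.array([[1.0, Ts], [Ts * gl * sinc, 1.0]])
    B = np.array([[0.0], [Ts]])
    P = Q.copy()  # DARE solved on-line by backward iteration at every k
    for _ in range(n_iter):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = A.T @ P @ A - A.T @ P @ B @ K + Q
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return (-(K @ x)).item(), A, B

# Usage: stabilize from phi = 0.2 rad on the exact frozen-SDC dynamics
x = np.array([[0.2], [0.0]])
for _ in range(600):
    u, A, B = sdre_step(x, np.eye(2), np.array([[0.1]]))
    x = A @ x + B * u
```

Note that, unlike the LQR, the whole Riccati iteration runs at every sampling instant, which is exactly the computational burden the MET-SDRE of Section V tries to reduce.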
A. MET-SDRE BACKGROUND
The MET-SDRE algorithm aims to decrease the code execution time (ET) of the discrete-time SDRE control algorithm while maintaining the same performance. This is achieved by reducing the computation time required by the DARE (34) in the SDRE algorithm in an adaptive way: the need to solve the DARE is evaluated adaptively by introducing a measure of the system's distance from the linearity range, based on the degree of similarity of the SDC state matrices between two consecutive sampling times. This result has been found by analysing the convergence problem of the DARE solution P(x[k]). Taking these considerations into account, we propose to modify the DARE solver of the SDRE control algorithm, making it adaptive with respect to the system's state and thereby allowing a faster convergence of the DARE solution. In the following, the implementation details of the proposed solution are explained. The computational solution implements two basic steps: 1) evaluation of the matrices' similarity and 2) adaptive P initialization.
1) EVALUATION OF MATRICES' SIMILARITY
In order to evaluate the similarity (or distance) between two numerical matrices, the tools proposed in [40] have been used. Given a discrete-time nonlinear system represented in the linear-like form (3), and considering two consecutive instants k and k + 1 with the corresponding consecutive numerical SDC matrices A_d_nl(x[k]) and A_d_nl(x[k + 1]), two similarity criteria (d_A; d_λA) can be established as follows:

d_A = ‖A_d_nl(x[k + 1]) − A_d_nl(x[k])‖,
d_λA = ‖λ(A_d_nl(x[k + 1])) − λ(A_d_nl(x[k]))‖,

where d_A represents the plain distance between the two numerical SDC matrices at the two time instants, while d_λA represents the distance between the eigenvalues of the frozen matrices at the two time instants.
Thus, outside the linear range, the numerical difference between two consecutive matrices can be measured with the metrics introduced above: the larger the distance, the larger the number of backward iterations needed to solve the DARE.
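The two criteria can be sketched as follows; the Frobenius and Euclidean norms used here are an assumption, since the exact criteria are those of [40].

```python
import numpy as np

def sdc_similarity(A_prev, A_curr):
    """d_A: distance between the two numerical SDC matrices;
    d_lA: distance between their sorted eigenvalue sets.
    (Frobenius/Euclidean norms assumed; the exact criteria are in [40].)"""
    d_A = float(np.linalg.norm(A_curr - A_prev, 'fro'))
    lam_prev = np.sort_complex(np.linalg.eigvals(A_prev))
    lam_curr = np.sort_complex(np.linalg.eigvals(A_curr))
    d_lA = float(np.linalg.norm(lam_curr - lam_prev))
    return d_A, d_lA

A1 = np.array([[1.0, 0.02], [0.20, 1.0]])  # SDC matrix frozen at instant k
A2 = np.array([[1.0, 0.02], [0.15, 1.0]])  # SDC matrix frozen at instant k+1
d_A, d_lA = sdc_similarity(A1, A2)
```

Within the linear range both distances stay near zero, so the previous DARE solution remains a good warm start and few backward iterations suffice.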
2) ADAPTIVE P INITIALIZATION
As explained in the previous section, the P chosen to initialize the SDRE algorithm affects the convergence speed of the DARE solution P(x[k]). For this reason, an adaptive P initialization based on the current system state x[k] can improve the ET required by the SDRE algorithm. A possible solution is to calculate P(x[k]) off-line for different working ranges (e.g., linear and nonlinear ranges) and to define different SDRE algorithm initialization conditions. Naturally, the proposed methodology should be evaluated depending on the application. In this paper, the proposed solution is applied to the cart-pendulum mobile robot and is detailed in the experimental results of Section VIII-C.
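A minimal sketch of the range-based selection, assuming the ±4° boundaries used later in Section VII-B and three hypothetical precomputed solutions:

```python
def select_initial_P(phi_deg, P_minus, P_zero, P_plus, phi_m=-4.0, phi_M=4.0):
    """Pick the off-line DARE solution whose boundary condition is closest
    to the current pendulum angle; the thresholds phi_m/phi_M mirror the
    linear working range estimated experimentally."""
    if phi_deg <= phi_m:
        return P_minus
    if phi_deg >= phi_M:
        return P_plus
    return P_zero
```

The selected matrix simply becomes the starting point of the backward Riccati iteration, so the switching logic adds a negligible cost per sample.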
VI. HARDWARE SET-UP
This section presents the hardware characteristics of the whole system and the control strategies implemented on the HRT board to stabilize the pendulum in the upright position while the cart maintains the desired distance from obstacles.
A. CART-PENDULUM ROBOTIC SYSTEM
As discussed in the Introduction, the hardware chosen to control the cart-pendulum system consists of the XMOS XK-1A board, a low-cost development platform intended for exploring parallel computation. The XK-1A comprises 128 KBytes of SPI FLASH memory, four LEDs, and two press-button switches. An XTAG-2 debug adapter can be connected to a PC to debug the XK-1A operations. The XK-1A is based on a single XS1-L1 device in a 128TQFP package. The XS1-L1 hosts four deterministic cores operating at 100 MHz and provides tightly integrated general-purpose I/O pins and 64 KBytes of on-chip RAM. The YUMO E6A2-CWZ3E rotary encoder provides the measure of the pendulum angle, while the HC-SR04 ultrasonic distance sensor provides precise, contactless distance measurements up to about 3 meters from the reference obstacle; it is simple to connect to microcontrollers, requiring only one I/O pin. The DC motors are operated by the shared output voltage provided by a Sabertooth 2 × 12 motor driver. This circuit can be powered with voltages ranging from 6 to 24 V, with up to 12 A continuous per channel. The board is used in R/C/microcontroller mode with a PWM pulse width from 1000 µs to 2000 µs, corresponding to −12 V and +12 V respectively. The whole system is powered by a 12 V, 2200 mAh, 3S (3-cell) LiPo battery with a 65C discharge rate.
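Assuming a linear mapping between pulse width and commanded voltage (the exact Sabertooth characteristic may differ), the conversion can be sketched as:

```python
def pwm_to_voltage(pulse_us):
    """Map the Sabertooth R/C-mode pulse width (1000-2000 us) to the
    commanded motor voltage (-12 V .. +12 V); linearity is assumed here."""
    return -12.0 + (pulse_us - 1000.0) * 24.0 / 1000.0
```

Under this assumption a 1500 µs pulse corresponds to 0 V, i.e., motors stopped.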
B. HRT CONTROL STRATEGIES
The XK-1A board of the XMOS family has been used to implement the control algorithms described above. This technology was developed a few years ago and newer, more performant boards are now available from XMOS, but the XK-1A hardware characteristics are still appropriate for this paper. The closed-loop controls are based on the direct measures of the cart position (x) and the pendulum angle (φ) provided by the sensors. At each sample time the controllers compute the feedback control law and the optimal voltage (u[k]) to be applied to the motor driver (i.e., the Sabertooth device), in order to keep the pendulum in balance (φ = 0) while maintaining the cart at the desired x position. The sensors are directly connected to different I/O pins of the board. The three control strategies have been entirely coded on the XK-1A XMOS board, where each of the four deterministic cores has been assigned a task. Core 1 runs task 1, which manages the gathering of data from the position sensor. Core 2 hosts task 2, which manages the gathering of data from the encoder for the feedback on the φ angle. Both tasks are in charge of making their data available, through the respective channels, to task 3. The latter runs on core 3 and retrieves the sensor data from the proper channel every 20 ms; within this time lapse task 3 must compute the control law and send the message to task 4, running on core 4, which generates the PWM to drive the motors. All the tasks run in parallel and their ET is computed statically by means of the TimeAnalyzer, a feature provided by the XMOS development environment xTimeComposer. The synchronization between the cores' operations has been achieved by statically calculating the computing time of each task.
VII. CLASSIC SDRE AND MET-SDRE NONLINEAR IMPLEMENTATIONS
This section explains in more detail the implementation on the HRT board of the classic SDRE algorithm and of the proposed MET-SDRE solution.
A. CLASSIC SDRE ALGORITHM
The common features of the classic SDRE algorithm have already been explained in Subsection IV-C2. In this part, the classic SDRE implementation used for stabilization problems on the cart-pendulum mobile robot is analyzed in more detail. Further implementation issues and feasibility aspects are discussed in greater depth in [33], [38], which also give guidelines on the construction of the SDC matrix when the SDRE solvability condition is violated. The implementation steps are the following: i) The DARE (34) is solved off-line until P(x[k])_off reaches convergence, imposing specific system conditions (e.g., φ = 0.06 rad and φ̇ = 0 rad/s).
ii) The number of backward iterations necessary for the on-line implementation is found as discussed in Subsection IV-C2. It has been experimentally found that within 20 backward iterations the norm-error criterion ‖P(x[k])_off − P(x[k])_20‖ < ε, with ε = 0.1, was satisfied. This test was repeated for different operating points of the cart-pendulum mobile robot to guarantee a consistent control law u(x[k]) during the experimental phase.
iii) At every sample time k, the SDRE control algorithm performs 20 backward iterations of the DARE (34).
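The norm-error criterion of step ii) can be reproduced as follows; the plant below is an illustrative unstable second-order system, not the identified cart-pendulum model.

```python
import numpy as np

def riccati_sweep(A, B, Q, R, P, n):
    """n backward Riccati iterations of the DARE starting from P."""
    for _ in range(n):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = A.T @ P @ A - A.T @ P @ B @ K + Q
    return P

def iterations_needed(A, B, Q, R, P_ref, eps=0.1, max_iter=1000):
    """Count iterations until ||P_ref - P_n|| < eps (the step-ii criterion)."""
    P = Q.copy()
    for n in range(1, max_iter + 1):
        P = riccati_sweep(A, B, Q, R, P, 1)
        if np.linalg.norm(P_ref - P) < eps:
            return n
    return max_iter

# Illustrative unstable second-order plant (not the cart-pendulum)
A = np.array([[1.0, 0.02], [0.2, 1.0]])
B = np.array([[0.0], [0.02]])
Q, R = np.eye(2), np.array([[0.1]])
P_off = riccati_sweep(A, B, Q, R, Q.copy(), 1000)  # off-line reference
n_needed = iterations_needed(A, B, Q, R, P_off)
```

In the paper's setup this test, repeated over several operating points, is what fixed the on-line iteration budget at 20.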
B. MET-SDRE ALGORITHM
The MET-SDRE solution stems from the WCET analysis carried out in this work, and it aims to improve the classic SDRE approach in terms of ET. The implementation steps are listed below: i) The linear working range of the system has been estimated through experimental tests; in our case the linear working range of the cart-pendulum is φ_m ≤ φ ≤ φ_M (boundary conditions), where in our experimental setup φ_m = −4° and φ_M = 4°.
ii) A convergent solution P(x[k])_1000 has been calculated off-line with 1000 backward iterations for each of the three boundary conditions of the system, φ = −4°, 0°, 4°. iii) These precomputed solutions are used to initialize the adaptive DSDRE control algorithm to reach a faster convergence speed depending on the system's working range.
In Figure 2, the flow chart of the MET-SDRE control algorithm is shown. The initialization of the algorithm consists of reading the initial system output (Y_0) and computing the state (X_0); the matrix P_0 is then chosen based on the initial state (X_0). The periodic control task in this implementation lasts 20 ms; it consists of the computation of the current state based on the output acquisition Y_k. The convergent DARE solution P(X_k) is then computed using the adaptive scheme introduced in Section V. Finally, the control input u(X_k) is computed, and the periodic task stops upon termination decided by the user.
VIII. PERFORMANCE ANALYSIS AND WCET OF THE OPTIMAL CONTROL ALGORITHMS
In this section, the simulation and experimental results for the cart-pendulum robotic system (Fig. 3) subject to the three above-mentioned controllers are shown. The control performance is evaluated under different working conditions of the real robot. The control parameters are introduced in Subsection VIII-A, while in Subsection VIII-B the simulations are compared with experimental results to demonstrate the validity of the controllers' implementation. Lastly, in Subsection VIII-C the experimental results are presented and discussed, with the related performance and WCET analysis.
A. CONTROL PARAMETERS
Each controller has been designed and tuned to stabilize the cart-pendulum around the pendulum's vertical position, to maintain a set distance from obstacles detected by the sonar, and to reject disturbances. All the tuning parameters are chosen to obtain a control system which, first, achieves oscillations around the vertical position that are as small as possible; second, maintains the desired x position; and, finally, retains a certain reactivity to disturbances. The values of the weighting matrices and of the tuned parameters used by the controllers are reported. Figure 4 shows the comparative results for the pendulum angle φ when the SDRE controller is applied, Figure 5 shows the comparative results for the cart position x, and Figure 6 compares the control effort U in both cases. The trends over time shown in the figures highlight similar oscillations in terms of amplitude, frequency and phase. Furthermore, small deviations between the simulated and the real system can be noticed at the cart and pendulum direction changes; this may be attributed to the imperfect modelling of the tyre-floor contact point (tyre slip conditions). However, despite the small, predictable mismatches between model and reality, Figure 6 shows that the control input is practically the same in both cases. This proves the effectiveness and consistency of the controller implementation on the HRT board. Besides, it is appropriate to point out that achieving a perfect match between model and reality is beyond the scope of this work.
C. EXPERIMENTAL RESULTS
This subsection describes the experimental results obtained by applying the optimal control laws in different working conditions of the experimental set-up in an RT scenario. The experiments concern the performance evaluation for the regulation problem of a cart-pendulum system. The challenge is balancing the pendulum in the vertical position (pitch angle φ = 0) while the cart maintains a desired position in space (x). In the design phase, regulation around a fixed set-point is the primary goal for any control system. The second is often trajectory tracking, in which the control law forces the closed-loop response to follow a specified, generally time-varying, reference. Though trajectory tracking is not the subject of this investigation, it will be shown how some considerations on it can be inferred in terms of WCET, considering the implementation of the algorithms adopted. To evaluate the performance and the WCET of the considered optimal controllers on regulation problems, two experiments have been carried out. In the first one, the system works within the linearity range of the pendulum's vertical unstable equilibrium point φ = 0; for our system, it has been experimentally verified that this range is −4° ≤ φ ≤ 4°. The second experiment aims to evaluate the performance of the optimal controllers when nonlinear behaviours arise; in our case this happens by choosing the initial condition of the φ angle outside the linear region. During the experiments, continuous monitoring of the WCET is performed by implementing the control algorithms on the XMOS board. Furthermore, the computational effort required from the microcontroller by the control laws is taken into account.
1) TEST 1 - LINEAR WORKING CONDITIONS
Figures 7, 8 and 9 show the comparison of the control results for the pitch angle φ, the cart position x and the control input U, respectively, in linear working conditions. Overall, all the controllers accomplish the regulation task, stabilizing the system around φ = 0 with small oscillations mainly due to the physical limits of the experimental set-up. A comparative analysis of the controllers in terms of a) ET and b) performance is detailed as follows. Regarding point a), Table 2 reports the code-execution BCS and WCS values, which have been computed using the XMOS development environment xTimeComposer for each of the tasks executed by the board, including those related to the control algorithms. The WCS provides the WCET value for each task; this analysis gives the possibility of setting the temporal upper bound for RT or HRT applications. A first aspect to be noticed from Table 2, obvious perhaps but here quantified, is that for regulation problems the LQR and MPC algorithms require less time to be executed. The SDRE instead requires almost four times the WCET of the previous ones. This first result suggests that attention must be paid, in terms of ET, when using nonlinear algorithms such as the SDRE for linear tasks. This becomes even more notable in higher-order MIMO systems, because the ET of the DARE solution is related to the system's order as well as to the initial conditions. In classical applications where a single microcontroller must execute several tasks (e.g., sensor acquisition, control, etc.), a high WCET of the control algorithm could lead to critical situations; these temporal constraints are crucial in HRT applications. Differently, using the MET-SDRE controller introduced previously, the algorithm's ET decreases. In particular, as long as the system works within linear conditions, the BCS value obtained is comparable to that of the linear controllers, while with respect to the classic SDRE it decreases significantly.
Regarding point b), related to the performance analysis, it can be noticed that under linear conditions the LQR performs similarly to the SDRE (ET aside), while the MET-SDRE keeps the same performance as the SDRE, as expected. For visualization reasons, the MET-SDRE performance is not plotted here, because it matches the classic SDRE; it will be given for the second test. Figure 7 shows how stabilization is reached with both the LQR and the SDRE controller. However, focusing on Figure 8, it is possible to notice that, when using the LQR controller, the cart position x is kept with a constant bias from the set-point. This happens because the LQR controller gives priority to the angle stabilization: it has been demonstrated that in nonlinear systems whose state has coupled dynamics (as in our case, where the φ and x behaviours are strictly related) a linear LQR cannot manage them perfectly [28]. However, some works argue that in terms of performance the SDRE approach provides, at best, a limited and case-dependent benefit over the LQR [29]; the LQR is therefore often preferable due to its lower computational effort, especially in applications where the system's nonlinearities are negligible. Finally, the MPC takes more time to reach the steady-state condition with respect to the others and, once there, is not able to regulate the cart position x well within 25 seconds. This may be because, in determining the value of the control signal, the MPC analyzes the whole prediction horizon (N_p). As a result, the control effort is smoother but the system is also less reactive, because the disturbance rejection task is averaged over the entire time horizon. Therefore the prediction horizon should not be too long, because it slows down the reaction of the controller to disturbances, though this feature is interesting in some cases. Indeed, the control signal of the MPC controller is smoother than the others (Fig. 9), and this may be important, especially in an industrial environment, during the start-up phases of plants and actuators, because these systems are adversely affected by sudden changes in the control signal. Moreover, this control strategy has less impact on energy consumption, because the power required from a hypothetical actuator is lower. Therefore, the prediction horizon N_p is an important tuning parameter that must be set case by case. Lastly, regarding trajectory tracking tasks, the MPC controller has advantages in terms of design and ET, because the MPC control algorithm implementation is valid both for constant and for time-varying reference signals. Therefore, the WCET of the MPC calculated in Table 2 for the regulation problem would remain the same for the trajectory tracking problem, whereas the LQR design phase has to be pursued differently in case of trajectory tracking [27], [39].
2) TEST 2 - NONLINEAR WORKING CONDITIONS
Figures 10, 11 and 12 show the trends over time of the pitch angle φ, the cart position x and the control input U. In this case, not all the controllers can accomplish the regulation task of stabilizing the system around φ = 0; for this reason, divergent trends for the linear controllers are obtained in Figures 10, 11 and 12. Also in this second test, a comparative analysis of the controllers in terms of a) ET and b) performance is carried out. Regarding point a), the SDRE and the MET-SDRE are proven to be the only control strategies able to stabilize the system, while the LQR and the MPC lose control. At the same time, in this second scenario the MET-SDRE decreases the algorithm's WCET by about 35% with respect to the classic SDRE (as in Table 2), while maintaining the same performance. This result is due to its adaptive nature: as discussed in Section V, the number of backward iterations of the DARE (34), and hence the ET of the control algorithm, is based on the current state of the system.
Indeed, when nonlinear behaviours arise, the larger difference between two consecutive SDC matrices A_d_nl(x[k]) and A_d_nl(x[k + 1]) requires solving the DARE (34) with more iterations to obtain a convergent solution P(x[k]). Regarding point b), related to the performance analysis, the system nonlinearities are triggered by imposing on the pendulum an angle φ = 11°, outside the linear range. The linear controllers (LQR, MPC) are obviously not able to perform the regulation task. Indeed, in Figure 10 it can be noticed how the MPC tries to approach the reference value (φ = 0), but the control law is not fast enough and the cart diverges from the position x = 0, as shown in Fig. 11. A similar result is obtained with the LQR: the controller tries for about 5 seconds to stabilize the pendulum, but because of the large oscillations (Fig. 10) control is lost. Differently, the SDRE and the MET-SDRE are able, within 2 seconds, to bring the system near the equilibrium point and to stabilize the pendulum (with small oscillations around φ = 0) in 5 seconds. The performance of these two nonlinear controllers is shown separately from the others, to better evaluate the similarity of their behaviour, in Figures 13, 14 and 15, where the pitch angle φ, the cart position x and the control input U of the classic SDRE and of the proposed MET-SDRE are compared respectively. It can be noticed in these figures that the MET-SDRE achieves the same performance as the classic SDRE implementation while decreasing the algorithm's WCET. There is a small time shift in Figures 13, 14 and 15, which depends on three factors: the acquisition instant, the small error that might exist between the two initial angles (φ = 11°), and the possible reading lag introduced by the sonar. However, it is easy to see that the MET-SDRE performance is consistent with that of its classic version.
The performance analysis of the SDRE, LQR and MPC is instead shown in Figures 10, 11 and 12. Here, only the nonlinear controller (SDRE) is able to manage the increasing dynamic coupling of the system under nonlinear conditions. This is expected: when the system dynamics become complex, or disturbances force the system to work outside the linear range, the MPC and LQR performance is overcome by the SDRE. Indeed, the SDRE, by taking the system's nonlinearities into account, can better handle the system dynamics, as also experienced in [28], [34].
IX. CONCLUSION
The advancements in electronics, computing and communication technologies have made it feasible to extend the application of embedded systems to more critical domains, such as automotive, avionics and many others. These often involve Hard Real Time (HRT) requirements, where systems must be designed according to the resource adequacy policy and must provide sufficient computing resources to handle the specified worst-case computational load and fault scenario. As shown in this work, the HRT embedded resources are mostly required by the control algorithms, and the implementation of nonlinear control laws is often computationally demanding; for this reason, in some cases the performance is not suitable given the application constraints. However, in order to achieve ever more demanding control objectives, linear controllers have to be surpassed, yet in most control applications the PID is still the most widely used controller. In this work, a method has been provided to evaluate in a deterministic way the Execution Time required by the controllers' implementations on embedded devices with HRT characteristics. Implementation solutions for discrete-time linear and nonlinear optimal control techniques have been carried out with encouraging perspectives. After evaluating both the performance and the ET of three of the most used optimal controllers, this work shows how to improve the execution time of the SDRE nonlinear controller by proposing a new method named MET-SDRE. With the MET-SDRE, a Best Case Scenario (BCS) comparable with the linear controllers has been achieved in terms of ET, and the Worst Case Scenario (WCS) has been improved by 35% with respect to the common approach while maintaining the same performance.
In general, the results obtained could be useful in the field of control systems engineering, especially during the design stage of RT and HRT systems, where temporal bounds have to be fulfilled while also taking into account all the other functional specifications. In the future, further investigation is required to prove the performance of these solutions on higher-order systems.
APPENDIX A SDC FACTORIZATION FOR SDRE DESIGN
In this appendix, the design flexibility of the SDC matrix is discussed, and general recommendations are provided to derive a correct factorization. The extended linearization, also known as SDC factorization, is the process of factorizing a nonlinear system into a linear-like structure containing SDC matrices. Considering the cart-pendulum model in (2), this can generally be written as ẋ = f(x), where f(x) = A(x)x. Without loss of generality, the origin x = 0 can be assumed to be an equilibrium point, such that f(0) = 0. Under this assumption, a continuous nonlinear matrix-valued function A(x) always exists such that f(x) = A(x)x, where A(x) is an n × n matrix found by mathematical factorization; it is non-unique when n > 1. Because of this non-uniqueness of the A(x) matrix, different works in the literature have proposed methods to improve the system stability performance through the factorization process. For example, in [37] a method has been proposed to design the SDC matrix when working with a conforming system. In [36], it has been studied how a different factorization of the A(x) matrix can affect stability performance. Another possible SDC parameterization solution is given in [34] and [28], where in both cases an inverted pendulum has been controlled with the SDRE for both swing-up and stabilization. However, the focus of this work is not to investigate a new parameterization method but to carry out a comparative analysis of the execution time of control algorithms, in particular the LQR, MPC and SDRE. Consequently, a classic approach to the design flexibility has been followed, as proposed in [37]. The parameterization steps adopted to derive the SDC matrix in (3) are presented below with the related motivations, considering a nonlinear system under the assumption that x = 0 is an equilibrium point. The SDC matrix terms can be state-independent or state-dependent.
State-independent terms, also called ''biases'', can be handled so as to satisfy the assumption f(0) = A(0)x(0) = 0 by augmenting the system with a stable state z, so that the bias can be factorized against z; the resulting term converges to zero only when z = 0. Regarding state-dependent terms, instead, non-unique solutions can be adopted. Defining A(x, α)x as an infinite family of SDC parameterizations, terms which do not converge to zero as the state approaches zero violate, in general, the fundamental condition f(0) = 0 outlined above. Like biases, these terms prevent a direct factorization of f(x) into A(x, α)x, but they can be handled as discussed above. However, it is more desirable to capture their state dependency in the proper element of the matrix A(x, α). For example, suppose a system with two state variables contains a term of the form x_2 cos(x_1), together with a bias term β handled as discussed above. The cosine term of the A(x, α) matrix is desirably kept non-zero to reflect the state dependency. A solution is to rearrange the cosine function as

cos(x_1) = [cos(x_1) − 1] + 1, (43)

where [cos(x_1) − 1] approaches zero when x_1 goes to zero and can thus be factorized as

[cos(x_1) − 1] = ([cos(x_1) − 1]/x_1) · x_1,

while the remaining term +1 of equation (43) can be handled like a bias.
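The rearrangement can be checked numerically; the coefficient below is a hypothetical helper that factors [cos(x_1) − 1] against x_1, with the removable singularity at the origin handled by its Taylor limit.

```python
import numpy as np

def cos_coefficient(x1):
    """a(x1) such that a(x1)*x1 == cos(x1) - 1; near x1 = 0 the removable
    singularity is replaced by the Taylor limit cos(x1)-1 ~ -x1**2/2."""
    if abs(x1) < 1e-8:
        return -x1 / 2.0
    return (np.cos(x1) - 1.0) / x1

# The factorization reproduces cos(x1) = [cos(x1)-1] + 1 exactly
for x1 in (-2.0, -0.3, 0.0, 0.5, 1.7):
    assert abs(cos_coefficient(x1) * x1 + 1.0 - np.cos(x1)) < 1e-12
```

The coefficient a(x_1) is exactly the entry that would appear in A(x, α), keeping the state dependency inside the matrix instead of pushing it into a bias.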
APPENDIX B THE MOTOR MODEL
The four-wheeled robot is powered by four identical motors which provide the total force F acting on it. The first-order model in the Laplace domain has been derived by means of an identification process, which has provided the transfer function (46). Knowing that ω̇ = τ_m/J = F·R_r/J and ω = ẋ/R_r, (46) can be rewritten with respect to the force acting on the robot. The force equation is then replaced in the system model (1), where the terms are β = (J·K_p)/(τ_p·R_r) and γ = J/(τ_p·R_r²). The parameters K_p and τ_p are found through the identification process, and R_r is the wheel radius. The rotor inertia J is calculated by adopting the reduced-order model of the DC motor, where the electrical time constant is neglected. Since K_e and R_a are measured, and K_t is well approximated by τ_stall/I_stall, it is possible to compute the viscous friction B_m and, from it, the inertia J. The physical and electrical parameters are then completely defined and listed in Table 3.
APPENDIX C MPC EXTENDED MODEL DERIVATION
In order to derive the extended model, let us consider the extended state vector x_e[k] = [Δx[k]^T, y[k]^T]^T, obtained by iterating the model (16); here N_p is the length of the prediction window and N_c is the length of the receding window, with N_c ≤ N_p. It can be proven, by using the model (54) iteratively, that the output vector Y can be expressed in compact form as

Y = F x_e[k_i] + Φ ΔU.

For a given set-point signal r(k_i) at sample time k_i, within a prediction horizon, the objective of the predictive control system is to bring the predicted output as close as possible to the set-point signal, where we assume that the set-point remains constant in the optimization window. This objective is then translated into the design problem of finding the 'best' control parameter vector ΔU such that an error function between the set-point and the predicted output is minimized. Assuming the data vector containing the set-point information to be

R_s = [1 1 ... 1]^T r(k_i),

the cost function J that reflects the control objective is defined as

J = (R_s − Y)^T (R_s − Y) + ΔU^T R̄ ΔU,

where the first term is related to the minimization of the error between the predicted output and the set-point, while the second is referred to the size of ΔU.
ANDREA BONCI (Member, IEEE) is currently an Assistant Professor in automatic control and the Head of the Automation Laboratory with Università Politecnica delle Marche. His research interests include modeling and control of dynamic systems, system and control theory, vehicle dynamics, automotive systems, modeling and control of autonomous systems, mechatronic, robotics, embedded systems applications, industrial automation, cyber-physical systems, and smart manufacturing.
SAURO LONGHI (Senior Member, IEEE) is currently a Full Professor in robot technologies and a Rector with Università Politecnica delle Marche. His main research interests include modeling, identification and control of linear and nonlinear systems, control of mobile robots, service robots for assistive applications supporting mobility and cognitive actions, home and building automation, and automatic fault detection and isolation.
GIACOMO NABISSI (Graduate Student Member, IEEE) received the master's degree (Hons.) in computer and automation engineering from Università Politecnica delle Marche, in 2019, where he is currently pursuing the Ph.D. degree. He also holds a research fellow contract for the Regional platform for Industry 4.0. His research interests include Industry 4.0 related issues. In particular, he works on predictive maintenance applied to electrical motor-operated devices and on robotics, concerning modeling, planning, and control.
GIUSEPPE ANTONIO SCALA received the master's degree in computer and automation engineering from the Università Politecnica delle Marche, in 2018, where he is currently pursuing the Ph.D. degree in automation engineering with the Dipartimento di Ingegneria dell'Informazione. His main research interests include robotics, nonlinear system analysis and control, virtual simulations, and computer science. VOLUME 8, 2020
Health Promoter's Role in School Settings in African Francophone Countries
The aim of this study is to describe the context, resources and procedures for planning and implementing health promotion initiatives targeting the children and youth population in African Francophone countries. The study used a multiple case study design with an online survey (n = 11) and individual interviews (n = 6) conducted in 2017-2018. Strategies to mobilize and use the community's available resources and assets were influenced by gender and professional status, as well as by the stakeholders' valorization of, and the community's degree of interest and engagement in, the proposed health promotion initiatives. Major social impacts relate to the support provided by community stakeholders with individual and collective assets. The evidence uncovered professional networking, collaboration and exchange that could help regional health promoters.
Background
The World Health Organization (WHO) health promotion (HP) strategy for the African region outlines several objectives and priority interventions to develop and support HP in Africa (WHO, 2013). It also outlines a set of target actions to be met by 2018, one of which aims to establish a national network of HP practitioners in at least 15 additional African countries. In 2015, responding to this strategy, a consortium of eight international partners - Alliance des Ligues Francophones Africaines et Méditerranéennes contre le cancer (ALIAM), Ligue tunisienne de lutte contre le cancer (LTLC, Tunis), Ligue nationale contre le cancer (LNCC, France), Université Senghor (Egypt), Faculté de Médicine de Sousse (Tunis), Université Numérique Francophone Mondiale (UNFM; France), Université Saint Christopher (Liban) and Union International for Cancer Control (UICC) - launched an HP initiative in Africa with the financial support of the UICC. This HP initiative consisted of the creation of a training programme for social services and health professionals to educate them to be responsible for HP in school settings. The programme introduced the philosophy of HP and the concepts of social determinants of health (SDH), pedagogical methods and strategies within a participatory learning approach. The Ryerson University (RU) team joined the partners to perform a remote impact evaluation study with trainees in their home countries.
HP was targeted in this training programme because it has been recognized as a socially justifiable strategy to combat the increasing prevalence of non-communicable diseases (NCDs) in African countries, as well as in school settings (WHO and UNESCO, 2018). If implemented appropriately, these initiatives have the potential to produce effective results, allowing individuals to have more control over their health and to improve it.
The ever-increasing burden of NCDs on low- to middle-income countries causes a disproportionate number of global NCD deaths (more than 75%), thus worsening poverty (Witter et al., 2020). This constant strain on governments and health care systems calls into question the effectiveness and appropriateness of current HP strategies for the targeted populations. The state-of-the-art evidence on health system challenges for NCD prevention and control in low- to middle-income countries is limited, particularly in those of sub-Saharan Africa and post-conflict settings. Evidence is scarce and largely limited to the prevalence of NCDs (Witter et al., 2020).
For the purpose of this paper, one key target outlined by the WHO and UNESCO (2018) was considered: the development of HP education in academic systems. Schools are strategic platforms for delivering upstream primary health care (PHC) services, thus impacting a significant portion of the community. The WHO (2020) offers a warning about the overall relevance of youth health in low- and middle-income countries. It states that school health programmes have been shown to be a cost-effective approach to influencing healthy behaviours among youth in low- and middle-income countries. As it currently stands, 80% of youth lack adequate levels of physical activity, and obesity rates have increased 10-fold in the past four decades, contributing to high youth mortality. The alarmingly high rates of poor youth health validate an urgent call to increase and improve HP through all school and community settings. Investments in youth health can have positive economic and social benefits, particularly in the aforementioned countries.
Research questions
RQ1. How did the process of identification, mobilization and use of the community forms of capital unfold in its interaction between the professionals and the health and education decision-makers, as well as other community stakeholders, to implement their new role and work for HP in the local schools?
RQ2. To what extent did professionals' reflections on their skills, achievements and current practice uncover the effectiveness of their work with local decision-makers and stakeholders?
RQ3. What are the professionals' views about their accomplishments and plans for HP having children/youth as social agents of change?
Research objective
The objective of this study is to describe the context, the resources used and the procedures for planning and implementing HP initiatives targeting the children and youth population in schools located in African Francophone countries.
Conceptual framework
This study was framed by an original conceptual framework comprising three major components: the context, the action and the features of community forms of capital used in the impact evaluation (see Figure 1). The inspirational contextual factors for the conception and delivery of the HP training programme responded to the growing burden of NCDs and related HP needs (WHO, 2013). The training constituted an action at the community level aiming to encourage further HP action at a governmental level. The evaluation considered trainees' HP actions (planned, implemented and intended) in a collaborative participatory approach, altogether mobilizing their community forms of capital.
Methods
This study used a multiple, temporal case study design (Yin, 2009), as such a design is helpful when researchers ask several questions simultaneously, such as who, what, where, how, how many and how much, and when they focus on contemporary social phenomena. The design was used to explore the scope of experiences, reflections and actions of the trainees in their attempts to implement the training programme in a variety of municipal schools in the region. The case study was the appropriate research design to study the aforementioned emerging new HP role in the region, because it contributes to the researchers' understanding of individual, organizational, social and political phenomena. Having the features of planning and implementation as major variables, we used a simple time-series method, combining quantitative data and qualitative findings to obtain an insightful analytical perspective.
Population
The target population of 75 trainees originated from 14 of the 27 member countries of the ALIAM. Long (2-week) and short (5-day) programme modalities had been implemented in Tunis (long: May 2015), Chad (short: October 2015) and Algeria (short: March 2016).
Sampling and recruitment
All trainees were informed that there would be a follow-up evaluation study by the RU researchers. Further contact would be made by email and postings on the alumnae community Facebook page. Trainees were informed that the Programme Coordination would support the research team in their attempts to reach prospective participants who might not have Internet access. In that case, participants knew that those trainees would be invited to go to the local health authority office to use its computer in order to participate in the evaluation study. Therefore, all trainees acknowledged their commitment to ensure the success of the study. Moreover, in order to avoid any direct influence of the social leadership and respect held by ALIAM and UNFM, the principal investigator prevented any issue of social desirability by being the only contact person. In order to ensure free participation, the recruitment used two strategies: (1) she sent out an invitation email and (2) she posted an invitation to the study on a Facebook page.
Criteria of inclusion
The inclusion criteria are as follows: (1) have attended professional training in becoming responsible for HP in a school setting; (2) have at least 2 months of field experience related to planning for the implementation of the programme in the municipalities' schools, or have at least 2 months of field experience of an effective implementation of the programme in the municipalities' schools; and (3) have voluntarily decided to participate in the study by responding to an online questionnaire.
Criteria of exclusion
The exclusion criteria are as follows: (1) had not attended the training, (2) did not have the minimum required 2 months of experience in planning of the school programme, (3) did not have the minimum required time of 3 months of experience in the implementation of the school programme and (4) intended to participate in this study in a non-voluntary manner.
Data collection
Data collection started in 2017, after the short- and long-programme deliveries, 6 and 12 months after training completion, respectively. Each set of online questionnaires relating to a specific country constituted an individual case to be added to the sample of studied cases. Of interest were the emerging patterns of systemic barriers and limitations related to the planning phase and the possible major underpinning political and financial issues (e.g. governance, social commitment, government priorities). The choice of an online survey responded to the particularities of Internet access across the African continent, which mainly relies on cell phones rather than desktop or laptop computers. By doing so, all prospective participants were reached in their actual locations, including distant and rural areas. The online survey questionnaire, operated through RU's Opinio Survey, was the main method of data collection. Online questionnaires have advantages when compared to postal questionnaires and have been shown to have a lower participant non-response rate, reducing the non-response bias (Smith et al., 2013).
To expand data collection, an invitation for an individual interview was sent out. During the interviews, technical barriers to accessing social media platforms were discovered, as well as difficulties in typing long text on participants' cell phones (laptops and desktops were not available to them), which could explain the lower rate of participation in the narrative responses.
Instrumentation
Data collection was done through an online survey comprising 30 questions, including 23 closed-ended questions and 7 open-ended questions, divided into four sections: sociodemographic identification, review of current practice, reflection on skills and practice, and prospective actions as well as testimonial. The questionnaire, originally in English, was translated and validated in its cultural perspective by three bilingual (French-English), Francophone African Master in International Health students at the University of Senghor. These students verified the words' clarity and appropriateness for the prospective Francophone respondents. They then reviewed the French version and discussed the grammatical review of the survey. Questions in the French language were pilot tested. The data collected from the online questionnaires were compiled as descriptive statistics (Grove et al., 2013). It is noteworthy that the questionnaire had five questions about testimonies and reflections on participants' practice; none of them were answered. These questions were therefore explored in individual telephone interviews in French.
Thematic analysis was used to analyse the open-ended question responses obtained from the individual telephone interviews. Thematic analysis (Paillé and Mucchielli, 2016) guided the core idea explored in each of these questions. The analysis explored groups of descriptive ideas, classified them according to clusters of meaning aiming to create new conceptualizations, and refined themes to respond to the research questions. Five predefined themes guided the analysis: (1) targeting HP actions with children and youth in schools as future health literate and health knowledge disseminating individuals; (2) learning due to unexpected results and achievements; (3) ideas for future evaluation of HP programmes; (4) bases for a regional, national and inter-African network for HP; and (5) 5 years' career plan progression as a professional and advocate for social rights for HP in schools.
The results presented in the next section display the integration of evidence gathered from the responses to the online questionnaire and individual interviews.
Results
Out of 75 individuals who completed the training, we were able to contact 35. The survey was sent to all attendees, but only those who attended the 2015 training in Tunisia responded; 27 participants provided consent to complete the online survey, and only 11 participants actually provided responses to all the survey's questions. In total, 26 out of the 36 questions were completed. All 26 questions were completed by 8 participants, while 17 of the 26 questions were answered by a mode of 11 participants, as it was not compulsory to answer all questions (see Table 1).
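As a quick sanity check on the participation figures, the rates implied by the counts above can be computed directly (the counts are taken from the text; the rate names are ours):

```python
# Arithmetic behind the participation figures reported above, using the
# counts given in the text: 75 trained, 35 reached, 27 consented, 11 full
# responders.
trained, reached, consented, responders = 75, 35, 27, 11

reach_rate = reached / trained            # share of trainees contacted
consent_rate = consented / reached        # consent among those reached
completion_rate = responders / consented  # full responses among consenters

print(f"reached {reach_rate:.0%}, consented {consent_rate:.0%}, "
      f"completed {completion_rate:.0%}")
```

This yields roughly 47% reached, 77% consenting and 41% completing, figures consistent with the attrition described in the text.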
For the online survey, the participation consisted of three participants working in the République Démocratique du Congo (RDC) and eight participants working in the following countries (one in each): Republic of the Congo, Bénin, Ivory Coast, Mali, Niger, Algeria, Mauritania and Morocco. All participants stated their thoughts about the influence of their masculine identity on their HP programme planning and implementation. It is noteworthy that 10 of them were men. Similar thoughts were shared about the influence of their professional title and status in such activities, with 81.8% affirming as much. For the participants, the access to stakeholders and the acceptance of their work by the community, decision-makers and the target population were at stake.
Interactions with community stakeholders and decision-makers during identification and implementation of forms of capital
All participants reported the following as major competencies for the work with stakeholders and decision-makers: (1) competencies to reinforce collaboration with health and education professionals (100%), (2) competencies for community education to enhance decision-makers' knowledge about the pertinence of HP (100%) and (3) competence to reinforce collaboration with parents and tutors (90%). The implementation of HP programmes required the mobilization and use of community forms of capital (see Table 2), as the types of interactions with stakeholders and decision-makers can allow an understanding of the effectiveness and efficiency of each form of community capital.
The planning of HP projects involved actions with medical and health associations, local offices of non-governmental organizations (NGOs), volunteer groups and local media (5.7% each), and less collaboration with youth groups and universities (2.9% each). Two other clusters of collaborators for the implementation were medical and health associations (6.8%) and, with less frequency (5.1%), the community associations, students' associations, volunteer groups, parents' groups and NGOs, added to agencies of social development, local enterprises, local media, international organizations and advocacy groups (all 3.4% each). Interestingly, participants barely set up collaborations with youth groups, research groups, financial organizations, health organizations and universities (1.7% each).
To develop teaching strategies for children and youth, participants mobilized the communities' cultural capital (see Table 3). Disclosures also included less interest in building local autonomy (12%), strengthening personal capacity through political awareness (8%) and initiating participatory learning about major health matters - tuberculosis, sexually transmitted diseases and so on (4%). Financial stakeholders also supported the acquisition of informatics supplies, the development of new educational resources, maintenance and Internet fees, and actions against food insecurity (5.9% each). Religious groups had less participation in the aforementioned projects (10%).
There was less use of arts youth groups, religious groups for children and youth, and social media influencers (4.8% each); however, among the non-durable actions, the NGOs' work (77.78%) was listed at the top, followed by less consideration for the health and social development research projects implemented by foreign universities (11.11% each). Implementation actions unfolded in six major phases, addressing 31 multidimensional features (see Table 4) that characterized the complexity of HP work in African Francophone sociocultural settings.
Professionals' satisfaction and self-reflection and how they are shaped by issues related to successful/unsuccessful approaches in their work with decision-makers/stakeholders
Participants justified their satisfaction with facts regarding the school context, such as (1) being accepted by school personnel (12.28%) and by youth (12.28%), and working with educational authorities (12.28%); (2) having their work accepted (10.53%); and (3) working with health authorities (10.53%), as well as with community representatives (8.77%). Participants self-evaluated their work as health promoters (see Table 5), revealing that their knowledge was expanded and reinforced, with ease in concept application, increased critical thinking, and the mastering, applying and transferring of knowledge. See Table 5 for specific statements of self-evaluation.
Self-evaluations at several phases of the HP projects were helpful in determining the specific changes needed to ensure success in future endeavours. Using the professionals' strengths and unique abilities, as well as addressing their weaknesses, helped improve their plans and led to the creation of a successful inter-African HP network.
Professionals' view about the future of the programme to tackle NCDs, using youth as the main social agent of change for HP
After the implementation of an HP programme, it was important to consider its sustainability. In doing so, its effects would be more prevalent within the given communities, encouraging further collective, social changes for the health of current and future generations. Table 6 details the different features of the HP programmes' sustainability.
Six male participants volunteered for the individual telephone interview (see Table 7 for participants' identification). The interviews lasted an average of 50 minutes.
The upcoming paragraphs introduce participants' accounts by the predefined analytical themes, as well as specific contents portraying the multiple case studies. Their narratives cover features of the planning and implementation of the HP projects within the local context (where), the profile of the promoter (who), situations (issues, problems), barriers/obstacles, decisions (what), actions (solutions), evaluation/appraisal, understanding/learning and unsolved issues.
Targeting HP actions with children and youth in schools as future health literacy and health knowledge disseminators
Case #1: Bénin. Benoît developed and implemented HP programmes at 16 schools, each lasting for 2 months, with a total of 250 students. The Ministry of Health provided doctors and nutritionists. With a focus on NCDs, a consultation with the school administrations explored local health needs, and the chosen topic was the promotion of physical activity and nutritional education. Afterwards, the students who engaged in the programme demonstrated changes in attitudes (free translation): '. . . [they] take care of their health, they become actors of their own health instead of being observers.'
Although the programme itself was conducted very extensively, time constraints were identified as the main obstacle to a better evaluation. Evaluation reports were sent to the Ministry of Health and the Ministry of Education, but Benoît (free translation) 'normally does not [expect to] receive any response'. This lack of responsiveness from the local political stakeholders can hinder the HP programme from becoming widespread and effective. The involvement of media and NGOs can enhance effectiveness by prompting the allocation of the required material to allow the Beninese HP professionals to access teaching material (overhead projector and computer), and the financial support to travel. Sine qua non conditions are teachers being trained on health matters to accurately transmit health information to the students, and students' receptiveness to it.
Case #2: Mali. With financial support from the NGO, Maurice provided HP training to 24 health workers, as well as teachers' and students' representatives. Acknowledging the Malian culture, Maurice appealed to individuals' emotions by using family bonds as a justification to teach about the smoking-related dangers threatening their family unit. Martin used a similar awareness-raising approach, which also involved putting children and youth at the forefront of HP; both hoped to address awareness issues related to HP in schools.
After correcting inappropriate health habits, both interviewees reported that parents stopped smoking in front of children, thus becoming parental role models (free translation): 'And in front of [Maurice], people have broken cigarettes'.
Both interviewees aimed for a wider political awareness by training youth to become trainers for others in their social environments. Due to the high number of illiterate youth not enrolled in schools, this social action might have a positive impact on health literacy and life expectancy, due to the increase of health knowledge about smoking-related dangers. Through effective dialogue and communication, the number of people who smoke can decrease, as Martin stated.
Case #3: Niger. Through an epidemiological lens, Nico reiterated that sedentary lifestyles and low income, paired with poor diets, result in high rates of early-life diabetes or hypertension, and child obesity. The over-accessibility to students of unhealthy, low-price and unhygienic foods also presents a significant risk for NCDs; to tackle these negative factors, he suggested that the school curriculum should include HP contents. In Niger, the Ministry of National Education had implemented an HP programme, which was discontinued. Nico intended to relaunch that programme by relying on the national strategic plan and was considering the involvement of other global partners. While acknowledging the importance of curative care, Nico expressed his feelings regarding the importance of HP (free translation): 'We must first prevent [health issues] . . . Health should no longer be the sole concern of health workers. But it must be seen in a global framework.'
Case #4: RDC. Similarly, in RDC, diabetes, hypertension and cardiovascular disease (CVD) plague the community. Actions for HP must be integrated and multisectoral (health and education) in order for there to be successful dissemination among students. To the Ministry of Health, Diego proposed launching a prevention-focused HP programme at a national university. With the approval and support of the Ministry, it would be piloted in four out of RDC's 26 provinces. First, the programme would train certain university executives at the intermediate level, nursing staff at the provincial level and educational staff, allowing for the effective implementation of the project. Two NGOs and parents' associations were the financial stakeholders. Politicians were informed about the programme implementation but did not express a commitment to it. Roger outlined his attempt at nutritional awareness among kindergartners by establishing a small HP project. He focused on teaching children how to make informed choices regarding the kinds of food and drinks they consume (free translation): 'it was a small project that I wanted to put into action'.
In addition, other children were taught the importance of fighting the use of tobacco, so that when they see their parents smoking, they have the ability to prohibit it. The lack of resources available to Roger made it difficult to expand the project and to involve the school staff in charge of HP.
Learning due to unexpected results and achievements
Case #1: Bénin. Benoît witnessed post-intervention behavioural changes in a number of students. Once children learned about hand washing, they asked their schoolteachers for hand washing stations to make it more accessible to all students. In addition, their teachers also better understood the necessary actions to implement health teachings.
Case #2: Mali. Despite the fact that the audience consisted of smokers, they understood the danger smoking presented to community health. After learning about smoking-related risks, the community suggested wide dissemination of this information to schools and health centres. Maurice was especially surprised when actions were taken by close relatives to stop smoking. Discussions were held in an attempt to identify specific ways in which HP can be implemented among the youth population. Remarkably, Maurice received support from the community, which was very interested in setting up training activities for HP.
Case #4: RDC. Diego described how important it is to be persistent, and that even if there are not many resources, the HP project could start off small, using what was available. Diego also had an advantage in having connections with the school, and a site for the implementation of HP activities, allowing for the successful dissemination of health teachings.
Ideas for the future evaluation of HP programmes
It was reported that in Bénin, when the awareness training ended, the Ministry stopped offering human resources, but the HP project can continue by utilizing the knowledge previously shared by the physicians and nutritionists. In Mali, to evaluate future HP projects, more robust data and observations will be collected to document youth smoking reduction and cessation. For Niger, through the development of focus groups, participants can be invited to respond to surveys. New ideas for implementation in RDC include professional media raising awareness to encourage the acceptance of the HP efforts, which could affect future results. Nationally, radio broadcasts are easily accessible to everyone, but a lack of resources persists. Furthermore, evaluation could benefit from ALIAM's sociopolitical leadership by becoming an intermediary political stakeholder between the Ministry and the professionals involved in the HP programmes. ALIAM would mobilize professionals in HP roles and present programme results to the government, furthering political support.
Bases for a regional, national and inter-African network for HP
Case #1: Bénin. ALIAM pursues its leadership actions by working with health promoters to organize other HP training courses. After the training in Tunisia, all African trainees remained connected through a Facebook page to discuss further training opportunities and exchange information. Benoît created an NGO to legally congregate them in 23 countries, which makes them eligible for international funding and can assist in their future HP endeavours.
Case #2: Mali. Maurice ensured constant communication through emails to reach out to other trainees and the facilitators of the training programme. This networking allows for an inter-African collaboration. In fact, the implementation of the training programmes for HP initiatives enabled the creation of an inter-African HP network in the regional Francophone countries. Martin stated that participation in the International Junior Chamber (an NGO) allowed him and others to identify small community problems and design specific interventions. Finding a viable partner for initiatives is always an issue.
Case #3: Niger. Nico stressed the importance of establishing global networks to support advocacy actions and the comparison of HP-related results of actions undertaken by governments. Creating networks with other trainees enabled discussions of national health policy. He foresaw that the creation of future national HP directorates, in collaboration with the inter-African and global networks, would result in more robust advocacy for better health policies and an increased transparency in the governments' actions.
Case #4: RDC. Diego and Roger received support from a director of a health programme, the United Nations Children's Fund (UNICEF), private donors and the Ministry of Health for initiating their HP project. This favourable context allowed for a large-scale implementation with fewer obstacles. They established a regional network including about 20 professionals from several African regions, extending to Central African ones. The work ensured more sustainable and effective networking, because all participants were aware of the outlined goals in relation to HP, with a high understanding of the plans of action. The aforementioned participants indicated that ALIAM could play a pivotal role in HP in the region, in strict collaboration with UICC. Figure 2 displays a representation of key ideas shared by them in such projected futures.
Five years' career plan progression as a professional and advocate for social rights for HP in schools
Benoît expressed his interest in consolidating his professional future towards HP and prevention, following his successful creation of an HP NGO to educate other health promoters oriented towards school health. Due to his background as a sociologist-anthropologist, he expressed how being a representative of an NGO would allow him and others to tackle issues of social power in negotiation with governments. Nico intends to set up HP activities to bring the HP programme to the level of the Minister of Public Health. This implies a career redirection from maternity care back to an HP branch to work with the Ministry and other sectors. Diego's main goal was the successful implementation of HP in schools across the country, in all 26 provinces. Roger expressed his interest in a master's degree in social economy and solidarity to provide tools for the set-up of a social project, including a new role of an HP officer in a school environment.
Discussion
It is noticeable that the most support among all four forms of community capital was provided by professionals in the fields of health or education, and the least support was provided by parents. In comparing the support received from both stakeholders and decision-makers through social capital, it can be concluded that the decision-makers provided support to a greater extent, although by a slight margin. This is not to devalue the support provided by the stakeholders, but rather to better understand the sources of the contributions as they relate to each form of capital, in order both to evaluate the effectiveness of the support received by each party and to determine how best to navigate future interactions. The results illustrate how, as a continent, Africa is facing challenges in delivering timely and appropriate health care services to citizens, but is also challenged by the inability to tailor health services to specific communities' greatest needs. Results also confirmed that the types of HP strategies should be considered when engaging with community members, which can have a significant impact on minimizing the severity and spread of communicable diseases (Laverack and Manoncourt, 2016). Witter et al. (2020) found that, although community members had relatively high levels of knowledge of NCD risks, they lacked the time and knowledge required to improve these issues. A lack of evidence of efficacy regarding NCD HP interventions can lead to the misallocation and mismanagement of valuable resources delivered in low- to middle-income countries (Jeet et al., 2018). Poor health policy oversight makes it challenging for governments and health staff to capably monitor and manage policy processes and their performance (Lane et al., 2020).
Results confirmed that, in addition to providing opportunities for school communities, strong school health policies also serve as positive modelling behaviours for the larger community (WHO, 2020). To achieve stakeholders' buy-in, students, teachers, parents and the community should be involved in the implementation and analysis of healthy school policies that stretch beyond the curriculum. An emphasis should be placed on life skills educators receiving extensive training in utilizing participatory approaches, as they can have an important impact on the development of healthy behaviours and the capacities needed for improved life skills teaching (WHO, 2020). HP has the potential to positively impact large populations, owing to the fact that youth under 18 currently represent 50% of Africa's population, and African youth are expected to represent 40% of the global youth population by 2050 (UNICEF, 2017).
Future initiatives may consider the development of HP programmes that rely on elements of popular culture appreciated by youth. This can be done through the mobilization of local artistic companies, martial arts, circus arts, the production of music videos, the creation of e-apps and so on. As examples, readers are invited to watch: https://www.youtube.com/watch?v=4Qk22iVm1HI&t=1s, https://www.youtube.com/watch?v=BtulL3oArQw, https://www.youtube.com/watch?v=wGoodWEtV8c, https://hhph.org/repository/
Limitations
The unknown context of the new role being implemented and the lack of feasibility of being immersed in the African context of HP have been taken into consideration as limitations. We made a conscious attempt to reach a theoretically significant interpretation of the results, identify rival explanations and minimize threats to internal validity, which was not fully achieved due to the paucity of in-depth evidence. Limited (fewer than three participants per country) and superficial evidence per country restricted an in-depth data analysis. It is noteworthy that falling short of the minimum of three participants per country does not allow for intragroup comparison as a qualitative analysis procedure. Therefore, this methodological limitation restrains the transferability of the findings (Paillé and Mucchielli, 2016).
Contribution to the field
This study was necessary to evaluate the impact that health education and teachings directed at health professionals in the public school system could have on the implementation of HP in Francophone African countries. Its contribution to the field of HP lies in the identification of key community forms of capital to be mobilized and invested in, mainly the community and professional stakeholders.
Conclusion
This evaluation disclosed that the major social impacts of HP projects relate to the support provided by the community stakeholders mobilizing their own individual and collective assets. The contextualized HP actions unfolded according to WHO objectives for youth health in the region, intertwining deeds in partnerships and new alliances to tackle compromised SDH for NCDs. Despite limited support from political stakeholders (justified by historical, philosophical and political gaps and misdeeds) at the frontline level, the health promoters were somewhat successful in awakening youths', children's and parents' awareness of the pressing need to adopt HP behaviours. Financial and political stakeholders should ensure the feasibility of HP projects in school settings. The evidence also uncovered professional networking, collaboration and exchange that could help regional health promoters to make their professional plans in the field of HP a reality. A larger number of cases would have made the reached conclusions more tenable. A future follow-up study with a larger sample should be conducted.
… improvement of the health and quality of life of their children and youth: 6 (14.29%)
Local health authority perceives the professionals' HP programme as an innovation to treat youth-specific health problems: 6 (14.29%)
Local educational authority wants to include health education as part of their curriculum: 5 (11.9%)
NGOs recognize the professionals' HP teaching strategies as a bridge among local society's …
… sponsor recognizes the high engagement of international sponsors for the professional HP promotion
NCD: non-communicable disease. a Only up to five top chosen options are reported; multiple choice was possible.
Table 1. Sociodemographic profile of participants.
Table 2. Mobilized community capital to develop HP projects.
HP: health promotion; NGO: non-governmental organization. a Only up to five top chosen options are reported; multiple choice was possible.
Table 3. Actual and potential communities' social resources used for the development of educational strategies.
a Only up to five top chosen options are reported; multiple choice was possible.
Table 4. The phases and features of the HP projects cited by participants.
Table 5. Self-assessment as a health promoter.
. . . it depends on how you speak to them and . . . whether you are convinced of the message you want to get across.
Table 6. Future of the HP programme.
HP: health promotion; NGOs: non-governmental organizations. a Only up to five top chosen options are reported; multiple choice was possible.
"year": 2022,
"sha1": "bbee7343ed71057748cefb67c04be46dd864acfd",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/00219096221076108",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "d44cf9703c3c01cb8aa3af246fcffd3a88609a95",
"s2fieldsofstudy": [
"Education",
"Sociology"
],
"extfieldsofstudy": []
} |
Interference of the linguistic variant in the repair strategies used during the phonological acquisition process

Conflict of interests: None
Purpose: To investigate and compare the use of repair strategies in the acquisition of /R/ in simple onset, produced by children with typical phonological acquisition. Methods: Speech data containing the /R/ from 120 children with typical phonological acquisition (60 male and 60 female) from Santa Maria (RS) and Crissiumal (RS), Brazil, aged between 1 year and 6 months and 4 years were used. To analyze the repair strategies, the following dependent variables were considered: omission, semivocalization, and liquid substitution; as well as the following independent variables: gender, age, precedent and following context, grammatical class, tonicity, number of syllables, and position in the word. The VARBRUL program was used for statistical analysis. Results: The statistical program selected as significant for omission in Santa Maria the variables tonicity and gender, and in Crissiumal, tonicity and age. For semivocalization in Santa Maria, the program selected the variable gender, and in Crissiumal, tonicity. For lateral liquid substitution in Santa Maria, the statistical program did not select any variable. However, in Crissiumal, the variables position in the word, gender, and age were selected. Conclusion: It was possible to observe that the repair strategies can diverge according to the dialect being used. Hence, it is important to consider the dialectal variation to make the phonological therapy more effective.
INTRODUCTION
In general, typical phonological mastery occurs when children are about five years old. The liquid consonants are the last to be acquired (1). These consonants are a complex sound class, both from the acoustic-articulatory and the phonological point of view (2). Thus, the liquid class of Brazilian Portuguese, comprising /l/, /ʎ/, /R/ and /r/, also elicits more repair strategies during its acquisition.
During phonological acquisition, children's initial speech productions are different from the adult pattern. However, they are not disorganized or chaotic either. Initially, the attempts at speech production present characteristics that demonstrate which strategies children are using to produce certain sounds, what difficulties are found, and what the level of phonological awareness is. These production attempts are not asystematic, and they may show the presence of a phonological subsystem and, hence, knowledge being built (4). During phonological acquisition, there are some repair strategies that should disappear with development, and there are specific processes expected for each age group (2). Regarding the types of repair strategies that occur during the acquisition of the rhotics, a study mentions that substitutions by plosive consonants, substitutions by lateral liquids, and semivocalizations may occur (5,6).
In this study, it was considered that repair strategies can differ according to the linguistic variant in use. According to a study (7), phonetic variability is part of the linguistic system, and may or may not lead to linguistic change. Linguistic variants may compete with each other to represent certain phonemes; in this case, there is change in progress. However, the variation might also present continuous characteristics, without decline or increase of one linguistic form over the other. In this case, a stable variation is observed.
Because in Rio Grande do Sul, Brazil, there is more than one variant of the "strong-R", with different phonetic and articulatory characteristics, these features should be analyzed when evaluating cases of phonological disorders, so that therapy practices become more effective. Besides, because the variants used in the two studied cities are not the same, it is expected that the repair strategies used by the children will not be the same either.
Thus, this study had the purpose of investigating and comparing the use of repair strategies during the acquisition of the non-lateral liquid /R/ in simple onset position by children with typical phonological acquisition living in Crissiumal and Santa Maria (RS), Brazil. This phoneme was selected because it presents both dialectal and individual variation in this position. The "strong-R" can be produced as a velar or glottal fricative (Santa Maria) and as a multiple (3) or simple vibrant (Crissiumal). As in simple onset the segments "strong-R" and "weak-R" present phonological distinctiveness (2), the use of the simple vibrant instead of the "strong-R" may cause the loss of this distinctiveness in some dialects.
METHODS
Speech data from 120 children with typical phonological acquisition were used, 60 from Santa Maria (RS), Brazil, and 60 from Crissiumal (RS), Brazil. These children were matched considering the variable gender, that is, there were 30 boys and 30 girls from each city, all monolingual speakers of Brazilian Portuguese. Their ages varied from 1 year and 6 months to 4 years. The age groups were divided every two months, with a total of 15 age groups per city. In each age group, speech data from two boys and two girls were used.
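The two-month grouping can be made concrete with a short sketch (the helper name and code are ours, not from the paper): ages run from 1 year 6 months (18 months) up to 4 years (48 months), and splitting that range into two-month bins yields exactly the 15 groups mentioned above.

```python
# Hypothetical illustration of the study's age grouping: two-month bins
# from 18 to 48 months of age, giving 15 age groups per city.

def age_groups(start_months: int = 18, end_months: int = 48, step: int = 2):
    """Return (lower, upper) month bounds for each age group."""
    return [(m, m + step) for m in range(start_months, end_months, step)]

groups = age_groups()
print(len(groups))              # 15 groups
print(groups[0], groups[-1])    # (18, 20) ... (46, 48)
```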
The speech samples from Santa Maria and Crissiumal are part of two databases created within research projects approved by the Research Ethics Committee of Universidade Federal de Santa Maria, under numbers 064/2004 and 23081.011800/2010-89.
In both cities, the parents or legal guardians of the subjects were informed of the purposes and procedures of the research, and agreed to their participation by signing the Free and Informed Consent.
In addition, in both cities, the subjects underwent speech-language and hearing screening, in order to confirm that they presented typical phonological development. Moreover, they should not present evident neurological, cognitive or psychological impairments.
To build the databases, speech samples were cross-sectionally collected using the instrument Child's Phonological Assessment (CPA) (8). This instrument elicits the spontaneous naming of 125 words through five thematic pictures. The CPA was applied individually to each child, and the speech data were digitally recorded. After that, the data were transcribed through broad phonetic transcription and reviewed separately by two experienced judges.
In Crissiumal, each child was individually evaluated by the researcher. Data collection consisted of two steps. In the first step, parents and teachers were interviewed in order to identify the variant used, representing the children's input. In the second step, speech data were collected using the same method described for Santa Maria, that is, the CPA. A list of 30 words containing the "strong-R" in medial and initial onset was also used.
The words collected from the Santa Maria database (corpus of 259 words) and the words collected in Crissiumal (corpus of 388 words) containing the "strong-R" (e.g.: rato - mouse, cachorro - dog) were classified as they were produced. For this, dependent and independent, linguistic and extra-linguistic variables were considered.
As variants of the dependent variable, in Santa Maria, the following strategies were considered: omission (carro (car) - ['kaw]), substitution by the glides [w] and [j] (carro (car) - ['kaju] or ['kawu]), and substitution by [ɾ] or [l] (carro (car) - ['kaɾo] or ['kalu]). In Crissiumal, the strategies considered were: omission, substitution by the glides [w] and [j], and substitution by [l]. The substitution by the non-lateral liquid [ɾ] is not a repair strategy in Crissiumal, but rather a correct production, because the regional dialect presents this variant as a way to produce the "strong-R".
Linguistic variant and repair strategies. J Soc Bras Fonoaudiol. 2012;24(3):239-47

To analyze the repair strategies used during the "strong-R" acquisition in simple onset position, the extra-linguistic variables gender and age were considered; the linguistic variables precedent and following context, tonicity, number of syllables, and word position were also considered.
For an efficient analysis of the variable age, 15 age groups per city were observed, divided every two months, as previously mentioned. Regarding the variable gender, the speech of 30 boys and 30 girls from each city was analyzed. This aspect was considered because it has already been mentioned in other studies as a distinguishing factor in language acquisition (9,10).
Children's productions were classified and categorized according to the variables and variants previously described. This categorization was typed into the program Microsoft Office Access 2003, which served as the input to the statistical program.
For statistical analysis, the statistical program VARBRUL was used. This group of programs is broadly used in sociolinguistic analyses (11-13). However, it has also been used successfully, since the 1990s, for analyzing language acquisition data (9,10,14). The program VARBRUL was chosen due to the characteristics and purposes of this study, as well as because it provides frequencies and probabilities and selects variables with statistical difference. The program performs probabilistic analysis in binary form. This means that, through statistical calculation, it attributes relative weights to the variants of the independent variables regarding both variants (correct and incorrect production) of the studied linguistic phenomenon, represented by the dependent variable. It is important to emphasize that VARBRUL attributes significance values to the linguistic and extra-linguistic variables through interaction among them (gender versus age; tonicity versus number of syllables). Hence, it does not attribute p-values to the variants within a variable. For instance, the program VARBRUL does not generate a significance value when comparing the genders male and female. For these variants, relative weights are attributed, that is, probabilities indicating higher or lower interference of the variants in the production of /R/ in simple onset.
The relative weights, or occurrence probabilities of /R/ in simple onset, come from the statistical interaction involving all the variables selected by the program. Values with a relative weight lower than 0.50 were considered unfavorable, probabilistic values between 0.50 and 0.59 were considered neutral, and values equal to or higher than 0.60 were considered favorable.
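As an illustration, the interpretation rule the authors apply to the VARBRUL relative weights can be written down directly. The function below is a hypothetical helper, not part of VARBRUL itself; it only encodes the thresholds stated above.

```python
# Hypothetical sketch of the interpretation rule for relative weights:
# below 0.50 unfavorable, in [0.50, 0.60) neutral, 0.60 or above favorable.

def classify_relative_weight(w: float) -> str:
    if w < 0.50:
        return "unfavorable"
    elif w < 0.60:
        return "neutral"
    else:
        return "favorable"

for w in (0.42, 0.55, 0.62):
    print(w, classify_relative_weight(w))
```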
RESULTS
It was possible to observe differences in the repair strategies used in Santa Maria and in Crissiumal. In Santa Maria, omission was the most frequent, while in Crissiumal, the most used strategy was the substitution by the lateral liquid.
For the strategy omission, in Santa Maria, the statistical program selected the variables tonicity (tonic syllable) and gender (male). In Crissiumal, the selected variables were tonicity (pre-tonic) and age group. In Crissiumal, the highest relative weights, which are favorable for omission, and the highest frequencies appear in alternating age groups (Table 1), while in Santa Maria the highest frequencies are in intermediate age groups (Table 2).
For the strategy substitution by [j] and [w], in Santa Maria, the statistical program selected the male gender as favorable. In Crissiumal, this variable was not selected, but the male gender presented the highest frequency.
In Crissiumal, still regarding the substitution by [j] and [w], the variable tonicity was selected, and the variant post-tonic (e.g.: carro - car) was favorable for this repair strategy, which agrees with the highest frequency observed in Santa Maria for tonicity. However, considering the total number of words, this strategy did not appear in many subjects in either city.
Regarding the strategy substitution by [l], in Santa Maria there were only two cases of use. That is the reason why the statistical program did not select any variable for this strategy. Nevertheless, in Crissiumal, this was the most used repair strategy. In Crissiumal, the statistical program selected, for the strategy substitution by [l], word position, gender and age group. In relation to the variables word position and gender, no variant was favorable for this substitution, with no probability values equal to or higher than 0.60. Even so, the words in medial onset and the male gender presented the highest frequencies and relative weights. Regarding age group, the groups considered favorable for substitution by [l] were alternating groups.
Regarding the extra-linguistic variables which were not selected (Table 2), in Crissiumal, the female gender presented the highest frequency of omission, although the frequencies of both genders were close. In Santa Maria, as previously mentioned, the male gender presented the highest frequency of omission, with a clear difference compared with the female gender. The variable age group was not selected in Santa Maria for omission but, as mentioned, the highest frequencies appeared in intermediate groups.
Considering the substitution by [j] and [w], the statistical program did not select the variable gender in Crissiumal, but the highest frequencies were found for the male gender, which agrees with the findings from Santa Maria. Regarding the same strategy, the variable age group was selected neither in Santa Maria nor in Crissiumal. Nevertheless, in Santa Maria the highest frequencies were found in intermediate groups, and in Crissiumal in alternating groups.
The variables gender and age group were not selected in Santa Maria for substitution by the liquids [ɾ] and [l], which were amalgamated in this city because of the reduced number of occurrences. However, the male gender presented the highest frequency, and the children from the initial age groups used this strategy more often.
In Crissiumal, only the substitution by [l] was considered by the statistical program. This happened because, as previously noted, the /R/ in simple onset produced as a simple vibrant is considered a correct production in this city. In addition, the variables gender and age were selected by the statistical program (Table 1).
In relation to the linguistic variables which were not selected (Table 2), in Santa Maria it was observed that omission occurred more often in the precedent context with a coronal vowel (e.g.: erro - error), while in Crissiumal the null context appeared more often. Regarding the following context, the highest frequencies were observed for the coronal variant in both cities.
Still regarding omission, both Santa Maria and Crissiumal presented similar frequencies regarding number of syllables, with the highest frequencies for trisyllables (e.g.: cachorro - dog). Only in monosyllabic words (e.g.: rio - river), in Santa Maria, was there no case of omission. The children living in Santa Maria omitted /R/ with similar frequencies in initial onset (e.g.: rato - mouse) and in medial onset (e.g.: carro - car), the latter presenting a mildly higher frequency. In Crissiumal, there were more cases of omission in words with /R/ in initial onset.
The substitutions by the glides [j] and [w], as previously stated, appeared in only a few cases in both cities. However, in Crissiumal the labial/dorsal vowel (e.g.: cachorro - dog) was the most frequent precedent context. In Santa Maria, the mentioned substitution did not occur for this variant. As for the following context, in Santa Maria, the labial/dorsal vowel (e.g.: carro - car) was the most frequent one, whereas in Crissiumal there was no substitution by the glides [j] and [w] in this context. In both Crissiumal and Santa Maria, the glides [j] and [w] appeared more frequently in post-tonic syllables (e.g.: carro - car) and did not appear in monosyllabic words (e.g.: rio - river). Words with /R/ in medial onset (e.g.: carro - car) were the ones with the highest frequencies in both cities.
DISCUSSION
From the obtained results, it was observed that the repair strategies used by the children from Santa Maria were, in order of frequency, omission, substitution by the glides [j] and [w], and substitution by the liquids [l] and [ɾ]. In Crissiumal, the repair strategies used presented the following order of frequency: substitution by the lateral liquid [l], omission, and substitution by the glides [j] and [w]. These findings agree with a study in which the semivocalization of liquids and the substitution of liquids also occurred in different stages of the acquisition of the liquid consonants (15). Other research has observed that strategies such as segment omission and substitutions are used by children during phonological acquisition (16,17).
In Santa Maria, the variables selected for omission were tonicity and gender, indicating that the tonic syllable and the male gender are favorable for omission. A study on the production of the "strong-R" stated that the syllables most affected in cases of omission were the strong syllables of the metrical foot (e.g.: rato - mouse) (18).
In Crissiumal, when analyzing omission, the statistical program selected the variables tonicity (pre-tonic) and age group. A study that used a sample of 36 subjects with typical phonological development and 12 subjects with phonological disorders, in order to describe and analyze the repair strategies used in simple onset, also selected the pre-tonic variant (relative weight 0.62) as favorable for omission (19). Regarding age group, it was observed that omission occurred in random groups in Crissiumal, which indicates the phenomenon called the "U"-shaped curve. This phenomenon, when drawn in a graph representing percentages and ages, appears as a developmental curve in the shape of a "U" (5). It means that the acquisition is non-linear.
For the substitution by [j] and [w] in Santa Maria, the selected variable was gender. In Crissiumal, the selected variable for this strategy was tonicity, with the post-tonic variant favorable for it. A study that verified the elements favorable for semivocalization concluded that the post-tonic variant presents the highest frequency of semivocalization (20).
Regarding the substitution by the lateral liquid [l], in Santa Maria no variable was selected. However, in Crissiumal the selected variables were word position (medial onset), gender (male) and age group (alternating groups). A study mentions that, among the possible phonological processes during phonological acquisition, liquid substitution is the most frequent, sometimes persisting into pre-school age (21). In another study, performed with children between one and a half and two years of age, the lateral liquid emerged in the speech data and was used correctly once in medial onset when the child was two and a half years old, although it was not yet acquired (9). Thus, the medial onset emerged before the initial onset.
In relation to the variable gender, it was selected twice in Santa Maria (omission and substitution by [j] and [w]) and once in Crissiumal (substitution by the lateral liquid [l]). In those cases, the male gender always obtained the highest frequencies of repair strategy use. It is possible to verify that, between men and women, there are not only external anatomic differences of primary and secondary features, but also differences in the way they acquire the linguistic system; in particular, the phonology of their language develops differently. Confirming this difference, it is possible to state that girls speak earlier and with fewer grammar mistakes than boys, being more precocious in acquiring linguistic abilities (22).
When the variable gender was not selected by the statistical program (omission and substitution by [j] and [w] in Crissiumal, and substitution by [l] and [ɾ] in Santa Maria), only in Crissiumal did the male gender not present the highest frequency of omission. This points to a greater use of repair strategies by boys (22,23), who acquire the speech sounds later (24). This information does not agree with a study which observed that girls present more errors than boys (25). The male gender, according to another study, presents better performance in tasks of phonological awareness (26). There are other studies in which the variable gender is neutral regarding the use of repair strategies and the order of acquisition of phonemes by subjects with typical (1,27) and atypical (28) phonological development.
Regarding the extra-linguistic variable age group, even when it was not selected, a decrease in production was also observed, which is normal during phonological acquisition. It can occur because the child is improving some ability, such as the auditory monitoring of speech combined with kinesthetic information, so that children can create more efficient strategies to correctly produce the language sounds (9).
In relation to tonicity, even when it was not selected by the statistical program, there was a higher frequency of repair strategy use in post-tonic syllables. A study refers to a phenomenon called unstressed syllable erasure, which usually occurs in words with more than one syllable (trisyllables or polysyllables) (4).
A study confirms the findings of the present research, reporting that omission occurs more frequently in trisyllabic words and that repair strategies in general, even when not selected by the statistical program, occur in trisyllabic and polysyllabic words. Besides, post-tonic syllables are unstressed. In line with this idea, another study with children with typical phonological development selected trisyllabic words as favorable for semivocalization, confirming the results from Crissiumal (Table 3). The same study selected polysyllabic words as favorable for liquid substitution, confirming the findings from Santa Maria and Crissiumal (Table 3). Other variants observed as favorable for omission in that study were the empty precedent context, as in Crissiumal, and the labial vowel in following context, as in Santa Maria (Table 3) (19).
Thus, through a general analysis of the information found in this study, it is possible to verify that, during the /R/ acquisition process, children who are exposed to different variants of the studied phoneme use different repair strategies. In a study about linguistic variation and language acquisition (29), the authors state that it is undeniable that children develop their phonological knowledge, or part of it, through the phonetic substance to which they are exposed. The results of the present study show that it is necessary to take into account the patterns of variability of the adult community in order to correctly evaluate the targets to be reached by children. Children need to correctly reproduce the sociolinguistic variants that are proper to their community. Thus, in order not to mistake linguistic variability for improper learning, it is important to identify the input received by the evaluated children before diagnosing atypical phonological development.
CONCLUSION
It was possible to observe that the repair strategies may differ according to the sociolinguistic variant in use. In Santa Maria, the most used repair strategy was omission. In Crissiumal, the most used repair strategy was substitution by the lateral liquid [l], followed by omission. Besides, it was verified that the children from Crissiumal used more repair strategies across the analyzed age groups, because in Santa Maria there were more cases of correct production.
The hypothesis of this study was confirmed: the repair strategies used by children exposed to the two studied dialects are different. Therefore, the differences found should be taken into account when analyzing cases of atypical phonological development.
This research is justified insofar as it can help speech-language therapists to distinguish cases of phonological disorders from dialectal variation, which is part of typical phonological development. Thus, it will avoid unnecessary therapy in cases of linguistic variation.
Figure 1. Occurrence of repair strategies in Santa Maria
Figure 2. Occurrence of repair strategies in Crissiumal
Table 1. Selected variables in Santa Maria and in Crissiumal. Statistical program VARBRUL (p<0.05). Note: F = frequency; P = probability.

Regarding the substitution by the liquids [ɾ] and [l], in Santa Maria there were no significant results, because of the reduced number of occurrences. Nevertheless, the words with a labial/dorsal vowel in precedent context (e.g.: cachorro - dog) and in following context (e.g.: cachorro - dog) presented the highest frequencies, although they were not selected by the statistical program. In Crissiumal, it was possible to observe a higher number of occurrences for all vowels, both in precedent and in following context, except for the precedent context with a coronal vowel (e.g.: erro - error). The highest frequency in precedent context was presented by the dorsal vowel (e.g.: arroz - rice) and, in following context, by the labial/dorsal vowel (e.g.: carro - car). In relation to tonicity, in Santa Maria the tonic syllables (e.g.: rádio - radio) presented the highest frequency and, in Crissiumal, the post-tonic syllables (e.g.: carro - car). In both Santa Maria and Crissiumal, polysyllabic words (e.g.: arrumando - organizing) presented the highest frequencies of substitution by the liquids [ɾ] and/or [l], but these appeared in only a few cases in both cities. In Santa Maria, even though not selected, the words with /R/ in medial onset (e.g.: correndo - running) appeared more frequently, as in Crissiumal, as earlier mentioned.
Table 3. Linguistic variables not selected by the statistical program in Santa Maria and Crissiumal
"year": 2012,
"sha1": "2316c10b023bed5683bc2d77e1a43b75046c59cc",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/jsbf/a/n3CPdvpGYqjVkzYJT7R3Fcv/?format=pdf&lang=pt",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2316c10b023bed5683bc2d77e1a43b75046c59cc",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": []
} |
Tuple spaces implementations and their efficiency
Among the paradigms for parallel and distributed computing, the one popularized with Linda and based on tuple spaces is one of the least used, despite being intuitive and easy to understand and use. A tuple space is a repository where processes can add, withdraw or read tuples by means of atomic operations. Tuples may contain different values, and processes can inspect their content via pattern matching. The lack of a reference implementation for this paradigm has prevented its widespread adoption. In this paper, we first perform an extensive analysis of a number of actual implementations of the tuple space paradigm and summarise their main features. Then, we select four such implementations and compare their performances on four different case studies that aim at stressing different aspects of computing such as communication, data manipulation, and CPU usage. After reasoning on the strengths and weaknesses of the four implementations, we conclude with some recommendations for future work towards building an effective implementation of the tuple space paradigm.
INTRODUCTION
Distributed computing is getting increasingly pervasive, with demands from various application domains and with highly diverse underlying architectures that range from the multitude of tiny devices to very large cloud-based systems. Several paradigms for programming parallel and distributed computing have been proposed so far. Among them we can list: distributed shared memory [28] (with shared objects and tuple spaces [20] built on it), remote procedure call (RPC [7]), remote method invocation (RMI [30]) and message passing [1] (with actors [4] and MPI [5] based on it). Nowadays, the most used paradigm seems to be message passing, while the least popular one seems to be the one based on tuple spaces, proposed by David Gelernter for the Linda coordination model [19].
As the name suggests, message passing permits coordination by allowing exchanges of messages among distributed processes, with message delivery often mediated via brokers. In its simplest incarnation, message-passing provides a rather low-level programming abstraction for building distributed systems. Linda instead provides a higher level of abstraction by defining operations for synchronization and exchange of values between different programs that can share information by accessing common repositories named tuple spaces. The Linda interaction model provides time and space decoupling [18], since tuple producers and consumers do not need to know each other.
The key ingredient of Linda is a small number of basic operations which can be embedded into different programming languages to enrich them with communication and synchronization facilities. Three atomic operations are used for writing (out), withdrawing (in), and reading (rd) tuples into/from a tuple space. Another operation, eval, is used to spawn new processes. The operations in and rd are blocking: the executing process is suspended while the wanted data are not available. Writing is instead performed by asynchronous output of the information for interacting partners. Figure 1 illustrates an example of a tuple space with different, structured, values. For example, tuple ("goofy", 4, 10.4) is produced by a process via the out("goofy", 4, 10.4) operation, and is read by the operation rd("goofy", _, _) after pattern matching: that is, the process reads any tuple of three elements whose first one is exactly the string "goofy". Moreover, tuple (10, ...) is consumed (atomically retracted) by operation in(10, x), which consumes a tuple whose first element is 10 and binds its second element (whatever it is) to the variable x. Patterns are sometimes called templates.
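As a concrete illustration of these operations, the following minimal, single-process sketch (our own, not taken from any of the surveyed systems) implements out, rd and in over an in-memory list, with null fields in a template acting as wildcards. Blocking behavior is omitted for brevity: rd and in simply return null when no tuple matches instead of suspending the caller.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of Linda's out/rd/in operations over an in-memory list.
// Tuples are Object[]; null fields in a template act as wildcards.
class MiniTupleSpace {
    private final List<Object[]> tuples = new ArrayList<>();

    // out: asynchronously add a tuple to the space.
    public synchronized void out(Object... tuple) {
        tuples.add(tuple);
    }

    // rd: non-destructive read; returns a matching tuple, or null.
    public synchronized Object[] rd(Object... template) {
        for (Object[] t : tuples)
            if (matches(t, template)) return t;
        return null;
    }

    // in: destructive read; atomically removes and returns a matching tuple.
    public synchronized Object[] in(Object... template) {
        for (int i = 0; i < tuples.size(); i++)
            if (matches(tuples.get(i), template)) return tuples.remove(i);
        return null;
    }

    private static boolean matches(Object[] tuple, Object[] template) {
        if (tuple.length != template.length) return false;
        for (int i = 0; i < tuple.length; i++)
            if (template[i] != null && !template[i].equals(tuple[i])) return false;
        return true;
    }
}
```

With this sketch, rd("goofy", null, null) plays the role of rd("goofy", _, _) in the example above: any three-field tuple whose first field is "goofy" matches.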
The simplicity of this coordination model makes it very intuitive and easy to use. Some synchronization primitives, e.g. semaphores or barrier synchronization, can be implemented easily in Linda (cf. [10], Chapter 3). Unfortunately, Linda's implementations of tuple spaces have turned out to be quite inefficient, and this has led researchers to opt for different approaches such as OpenMP or MPI, which are nowadays offered, as libraries, for many programming languages. When considering distributed applications, the limited use of the Linda coordination model is also due to the need of guaranteeing consistency of different tuple spaces. In fact, in this case, control mechanisms that can significantly affect scalability are needed [12].
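For instance, a counting semaphore can be encoded by initially emitting n "token" tuples: acquire is a blocking in("token") and release is out("token"). The sketch below illustrates this encoding under our own naming; a BlockingQueue stands in for a blocking tuple space so that the example is self-contained.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of a counting semaphore on top of Linda primitives (cf. [10], Ch. 3):
// the space initially holds n copies of a "token" tuple; acquire is a blocking
// in("token") and release is out("token"). A BlockingQueue plays the role of
// a blocking tuple space here.
class TupleSemaphore {
    private final BlockingQueue<String> space = new LinkedBlockingQueue<>();

    public TupleSemaphore(int permits) {
        for (int i = 0; i < permits; i++) space.add("token"); // out("token")
    }

    public void acquire() throws InterruptedException {
        space.take(); // in("token"): blocks until a token tuple is available
    }

    public boolean tryAcquire() {
        return space.poll() != null; // non-blocking variant (inp in some Lindas)
    }

    public void release() {
        space.add("token"); // out("token")
    }
}
```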
In our view, tuple spaces can be effectively exploited as a basis for a broad range of distributed applications in different domains (from lightweight applications to large cloud-based systems). However, in order to be effective, we need to take into account that the performance of a tuple space system may vary depending on the system architecture and on the type of interaction between its components. Although the concept of tuple spaces is rather simple, the main challenge to face when implementing it is to devise the best data structure to deal with a possibly distributed multiset of tuples, in which operations (e.g. pattern matching, insertion and removal) are optimized. Moreover, it has to support efficient parallel tuple processing and data distribution. Depending on how these aspects are implemented, the performance of an application can be positively or negatively affected.
The aim of this paper is to examine the current implementations of tuple spaces and to evaluate their strengths and weaknesses. We plan to use this information as a set of directions for building a more efficient implementation of distributed tuple spaces.
We start by cataloging the existing implementations according to their features; then we focus on the most recent Linda-based systems that are still maintained, paying specific attention to those offering decentralized tuple spaces. We compare the performances of the selected systems on four different case studies that aim at stressing different aspects of computing such as communication, data manipulation, and CPU usage. After reasoning on the strengths and weaknesses of the four implementations, we conclude with some recommendations for future work towards building an effective implementation of the tuple space paradigm.
The rest of the paper is organized as follows. In Section 2 we survey existing tuple space systems and choose some of them for the practical examination. The description of the case studies, the main principles of their implementation, and the results of the experiments are presented in Section 3. Section 4 concludes the paper by collecting some remarks and highlighting some directions for future work. This paper is a revised and extended version of [8]; it contains an additional case study, the thorough evaluation of a new tuple space system and more extensive experiments.
TUPLE SPACE SYSTEMS
Since the first publication on Linda [20], there have been a plenty of implementations of its coordination model in different languages. Our purpose is to review the most significant and recent ones, that are possibly still maintained, avoiding toy implementations or the one shot paper implementations. To this end, we have chosen: JavaSpaces [26] and TSpaces [24] which are two industrial proposals of tuple spaces for Java; GigaSpaces [21] which is a commercial implementation of tuple spaces; Tupleware [3] featuring an adaptive search mechanism based on communication history; Grinda [9], Blossom [33], DTuples [22] featuring distributed tuple spaces; LuaTS [23] which mixes reactive models with tuple spaces; Klaim [15] and MozartSpaces [14] which are two academic implementations with a good record of research papers based on them.
In this Section, first we review the above mentioned tuple space systems by briefly describing each of them, and single out the main features of their implementations, then we summarise these features in Table I. Later, we focus on the implementations that enjoy the characteristics we consider important for a tuple space implementation: code mobility, distribution of tuples and flexible tuples manipulation. All tuple space systems are enumerated in order they were first mentioned in publications.
Blossom. Blossom [33] is a C++ implementation of Linda which was developed to achieve high performance and correctness of programs using the Linda model. In Blossom, all tuple spaces are homogeneous, with a predefined structure that demands less time for type comparison during tuple lookup. Blossom was designed as a distributed tuple space and can be considered as a distributed hash table. To improve scalability, each tuple can be assigned to a particular place (a machine or a processor) on the basis of its values. The correspondence between a tuple and a machine is selected as follows: for every tuple, a field access pattern is defined which determines the fields that always contain a value (also for templates); the values of these fields can be hashed to obtain a number which determines the place where the tuple has to be stored. Conversely, using the data from a template, it is possible to find the exact place where a required tuple is potentially stored. Prefetching allows a process to send an asynchronous (i.e. non-blocking) request for a tuple and to continue its work while the search is performed. When the requested tuple is needed, if found, it is received without waiting.
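The placement scheme can be sketched as follows (an illustration of the idea, not Blossom's actual code): the access pattern names the fields that are always bound, both in tuples and in templates, so hashing those fields maps a tuple to the node that stores it and lets a template be routed to exactly that node.

```java
import java.util.Arrays;

// Illustrative sketch of hash-based tuple placement: hash the always-bound
// fields (the "access pattern") to pick the node storing the tuple. Since the
// same fields are bound in templates, a lookup hashes to the same node.
class HashPlacement {
    private final int nodes;           // number of machines/processors
    private final int[] accessPattern; // indices of fields that always hold a value

    public HashPlacement(int nodes, int... accessPattern) {
        this.nodes = nodes;
        this.accessPattern = accessPattern;
    }

    // Works for full tuples and for templates alike, because the
    // access-pattern fields are bound in both.
    public int nodeFor(Object[] tupleOrTemplate) {
        Object[] key = new Object[accessPattern.length];
        for (int i = 0; i < accessPattern.length; i++)
            key[i] = tupleOrTemplate[accessPattern[i]];
        return Math.floorMod(Arrays.hashCode(key), nodes);
    }
}
```

A tuple ("goofy", 4, 10.4) with access pattern {0} and a template ("goofy", _, _) hash to the same node, so the template query is routed directly there.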
TSpaces. TSpaces [24] is an implementation of the Linda model developed at the IBM Almaden Research Center. It combines asynchronous messaging with database features. TSpaces provides transactional support and a mechanism of tuple aging. Moreover, the embedded mechanism for access control to tuple spaces is based on access permissions: it checks whether a client is allowed to perform specific operations on a specific tuple space. Pattern matching is performed using either the standard equals method or the compareTo method. It can also use SQL-like queries that allow matching tuples regardless of their structure, e.g., ignoring the order in which fields are stored.
Klaim. Klaim [15] (A Kernel Language for Agents Interaction and Mobility) is an extension of Linda supporting distribution and processes mobility. Processes, like any other data, can be moved from one locality to another and can be executed at any locality. Klava [6] is a Java implementation of Klaim that supports multiple tuple spaces and permits operating with explicit localities where processes and tuples are allocated. In this way, several tuples can be grouped and stored in one locality. Moreover, all the operations on tuple spaces are parameterized with a locality. The emphasis is put also on access control which is important for mobile applications. For this reason, Klaim introduces a type system which allows checking whether a process is allowed to perform specific operations at specific localities.
JavaSpaces. JavaSpaces [26] is one of the first implementations of tuple spaces developed by Sun Microsystems. It is based on a number of Java technologies (e.g., Jini and RMI). Like TSpaces, JavaSpaces supports transactions and a mechanism of tuple aging. A tuple, called entry in JavaSpaces, is an instance of a Java class and its fields are the public properties of the class. This means that tuples are restricted to contain only objects and not primitive values. The tuple space is implemented by using a simple Java collection. Pattern matching is performed on the byte-level, and the byte-level comparison of data supports object-oriented polymorphism.
GigaSpaces. GigaSpaces [21] is a contemporary commercial implementation of tuple spaces. Nowadays, the core of this system is GigaSpaces XAP, a scale-out application server; user applications interact with the server to create and use their own tuple spaces. The main areas where GigaSpaces is applied are those concerned with big data analytics. Its main features are linear scalability, optimization of RAM usage, synchronization with databases and several database-like features such as complex queries, transactions, and replication.

LuaTS. LuaTS [23] is a reactive event-driven tuple space system written in Lua. Its main features are the associative mechanism of tuple retrieval, fully asynchronous operations and the support of code mobility. LuaTS provides centralized management of the tuple space, which can be logically partitioned into several parts using indexing. LuaTS combines the Linda model with the event-driven programming paradigm. This paradigm was chosen to simplify program development, since it allows avoiding the use of synchronization mechanisms for tuple retrieval and makes programming and debugging of multi-threaded programs more transparent. Tuples can contain any data which can be serialized in Lua. To obtain a more flexible and intelligent search for tuples, processes can send to the server code that, once executed, returns the matched tuples. The reactive tuple space is implemented as a hash table, in which data are stored along with the information supporting the reactive nature of that tuple space (templates, client addresses, callbacks and so on).

MozartSpaces. MozartSpaces [14] is a Java implementation of the space-based approach [27]. The implementation was initially based on the eXtensible Virtual Shared Memory (XVSM) technology, developed at the Space Based Computing Group, Institute of Computer Languages, Vienna University of Technology. The basic idea of XVSM is related to the concept of coordinator: an object defining how tuples (called entries) are stored.
For retrieval, each coordinator is associated with a selector, an object that defines how entries can be fetched. There are several predefined coordinators such as FIFO, LIFO, Label (each tuple is identified by a label, which can be used to retrieve it), Linda (corresponding to the classic tuple-matching mechanism), Query (search can be performed via a query-like language) and many others. Along with them, a programmer can define a new coordinator or use a combination of different coordinators (e.g. FIFO and Label). MozartSpaces also provides transactional support and a role-based access control model [13].
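Our simplified reading of the coordinator/selector idea can be sketched as follows (all names are ours, not the MozartSpaces API): two coordinators, FIFO and Label, maintain their own views of the same container, and each comes with a selector that fetches entries according to that view.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the coordinator/selector idea: one container, two
// coordinators that each organize the same entries their own way, and one
// selector per coordinator to fetch entries according to that organization.
class CoordinatedContainer {
    private final Deque<Object> fifo = new ArrayDeque<>();       // FIFO coordinator view
    private final Map<String, Object> byLabel = new HashMap<>(); // Label coordinator view

    public void write(Object entry, String label) {
        fifo.addLast(entry);       // FIFO coordinator records insertion order
        byLabel.put(label, entry); // Label coordinator records the label
    }

    // FIFO selector: oldest entry first.
    public Object takeFifo() {
        Object e = fifo.pollFirst();
        byLabel.values().remove(e); // keep the other view consistent
        return e;
    }

    // Label selector: fetch by label, regardless of insertion order.
    public Object takeLabel(String label) {
        Object e = byLabel.remove(label);
        fifo.remove(e);
        return e;
    }
}
```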
DTuples. DTuples [22] is designed for peer-to-peer networks and based on distributed hash tables (DHT), a scalable and efficient approach. Key features of DHTs are autonomy and decentralization: there is no central server, and each node of the DHT is in charge of storing a part of the hash table and of keeping routing information about other nodes. As the basis of its DHT implementation, DTuples uses FreePastry*. DTuples supports transactions and guarantees fault tolerance via replication mechanisms. Moreover, it supports multiple tuple spaces and allows for two kinds of tuple space: public and subject. A public tuple space is shared among all the processes and all of them can perform any operation on it. A subject tuple space is a private space accessible only by the processes that are bound to it. Any subject space can be bound to several processes and can be removed if no process is bound to it. Due to the two types of tuple spaces, pattern matching is specific to each of them: templates in a subject tuple space can match tuples in the same subject tuple space and in the common tuple space, whereas templates in the common tuple space cannot match tuples in the subject tuple spaces.
Grinda. Grinda [9] is a distributed tuple space which was designed for large-scale infrastructures. It combines the Linda coordination model with grid architectures, aiming at improving the performance of distributed tuple spaces, especially when many tuples are involved. To boost the search of tuples, Grinda utilizes spatial indexing schemes (X-Tree, Pyramid) which are usually used in spatial databases and Geographical Information Systems. Distribution of tuple spaces is based on the grid architecture and implemented using a structured P2P network (based on Content Addressable Networks and trees).
Tupleware. Tupleware [3] is specially designed for array-based applications in which an array is decomposed into several parts, each of which can be processed in parallel. It aims at developing a scalable distributed tuple space with good performance on a computing cluster, and provides simple programming facilities to deal with both distributed and centralized tuple spaces. The tuple space is implemented as a hashtable containing pairs consisting of a key and a vector of tuples. Since the synchronization lock on a Java hashtable is acquired at the level of the hash element, it is possible to concurrently access several elements of the table. To speed up the search in the distributed tuple space, the system uses an algorithm based on the history of communication, whose main aim is to minimize the number of communications for tuple retrieval. The algorithm uses a success factor, a real number between 0 and 1 expressing the likelihood that a node can find a tuple in the tuple space of another node. Each instance of Tupleware calculates success factors on the basis of previous attempts, and searches for tuples first in the nodes with greater success factors.
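The ranking mechanism can be sketched as follows; the update rule (a running ratio of successful lookups over attempts) is our assumption, since the text only states that the factor is derived from previous attempts.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of Tupleware-style remote search ordering: each node keeps, per peer,
// a success factor in [0,1] estimated from past lookups, and queries peers in
// decreasing order of that factor to cut the number of remote round trips.
class SuccessRanking {
    private final Map<String, int[]> stats = new HashMap<>(); // peer -> {hits, attempts}

    public void record(String peer, boolean found) {
        int[] s = stats.computeIfAbsent(peer, k -> new int[2]);
        if (found) s[0]++;
        s[1]++;
    }

    public double successFactor(String peer) {
        int[] s = stats.get(peer);
        return (s == null || s[1] == 0) ? 0.0 : (double) s[0] / s[1];
    }

    // Peers sorted by decreasing success factor: the search visits these first.
    public List<String> searchOrder(Collection<String> peers) {
        List<String> order = new ArrayList<>(peers);
        order.sort(Comparator.comparingDouble(this::successFactor).reversed());
        return order;
    }
}
```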
In order to compare the implementations of the different variants of Linda that we have considered so far, we have singled out two groups of criteria.
The first group refers to criteria which we consider fundamental for any tuple space system:

eval operation. This criterion denotes whether the tuple space system implements the eval operation and, therefore, allows using code mobility. It is worth mentioning that the original eval operation was about asynchronous evaluation rather than code mobility, but in the scope of a distributed tuple space it makes programming data manipulation more flexible.

Tuples clustering. This criterion determines whether some tuples are grouped by particular parameters that can be used to determine where to store them in the network.
Absence of domain specificity. Many implementations have been developed with a particular application domain in mind. On the one hand, this implies that domain-specific implementations outperform general-purpose ones; on the other hand, this can be considered a limitation if one aims at generality.
Security. This criterion specifies whether an implementation has security features or not. For instance, a tuple space can require authorization and regulate the access to its tuples; for some of them, the access can be limited to performing specific operations (e.g. only writes or reads).
The second group of criteria gathers features which are desirable for any fully distributed implementation that runs over a computer network, does not rely on a single node of control or management, and is scalable.
Distributed tuple space. This criterion denotes whether tuple spaces are stored in one single node of the distributed network or are spread across the network.
Decentralized management. In distributed systems, either one node controls the others or the control is shared among several nodes. Usually, systems with centralized control have bottlenecks which limit their performance.
Scalability. This criterion implies that a system based on a particular Linda implementation can cope with an increasing amount of data and nodes while maintaining acceptable performance.

In Table I, a check mark means that the implementation enjoys the property, while ? means that we were not able to provide an answer due to the lack of source code and/or documentation.
After considering the results in Table I, to perform our detailed experiments we have chosen: Tupleware, which enjoys most of the desired features; Klaim, since it offers distribution and code mobility; and MozartSpaces, since it satisfies two important criteria of the second group (full distribution) and is one of the most recent implementations. Finally, we have chosen GigaSpaces because it is the most modern among the commercial systems; it will be used as a yardstick to compare the performance of the others. We would like to add that DTuples has not been considered for the more detailed comparison because we have not been able to obtain its libraries or source code, and that Grinda has been dropped because it seems to be the least maintained one.
In all our implementations of the case studies, we have structured the systems by assigning each process a local tuple space. Since GigaSpaces is a centralized tuple space, in order to satisfy this rule we do not use it as a centralized one but as a distributed one: each process is assigned its own tuple space in the GigaSpaces server.
EXPERIMENTS
In order to compare four different tuple space systems we consider four different case studies: Password search, Sorting, Ocean model and Matrix multiplication. We describe them below.
Introducing case studies
The first case study is of interest since it deals with a large number of tuples and requires performing a huge number of write and read operations. This helps us understand how efficiently an implementation performs operations on local tuple spaces with a large number of tuples. The second case study is computation-intensive, since each node spends more time sorting elements than communicating with the others. This case study has been considered because it needs structured tuples that contain both basic values (of primitive types) and complex data structures, which impacts the speed of inter-process communication. The third case study has been taken into account since it introduces particular dependencies among nodes which, if exploited, can improve the application performance. It was considered to check whether adapting a tuple space system to the specific inter-process interaction pattern of a specific class of applications could lead to significant performance improvements. The last case study is a communication-intensive task and requires much reading on local and remote tuple spaces. All case studies are implemented using the master-worker paradigm [10] because, among other design patterns (e.g., Pipeline, SPMD, Fork-join) [25], it fits well with all our case studies and allows us to implement them in a similar way. We briefly describe all the case studies in the rest of this subsection.
Password search. The main aim of this application is to find a password using its hashed value in a predefined "database" distributed among processes. Such a database is a set of files containing pairs (password, hashed value). The application creates a master process and several worker processes (Figure 2): the master keeps asking the workers for the passwords corresponding to specific hashed values, by issuing task tuples containing the hashed value to look for. Each worker first loads its portion of the distributed database and then obtains from the master a task to look for the password corresponding to a hash value. Once it has found the password, it sends the result back to the master with a tuple of the form: ("found password", dd157c03313e452ae4a7a5b72407b3a9, 7723567). For multiple tuple space implementations, it is necessary to start searching in one local tuple space and then to check the tuple spaces of the other workers. The application terminates its execution when all the tasks have been processed and the master has received all the required results.

Sorting. This distributed application consists of sorting an array of integers. The master is responsible for loading the initial data and for collecting the final sorted data, while the workers are directly responsible for the sorting. At the beginning, the master loads the predefined initial data to be sorted and sends them to one worker to start the sorting process. Afterward, the master waits for the sorted arrays from the workers: whenever a sub-array is sorted, the master receives it, and when all sub-arrays have been collected it builds the whole sorted sequence. An example of the sorting is shown in Figure 3, where we have an initial array of 8 elements. For the sake of simplicity, the figure illustrates the case in which arrays are always divided into equal parts and sorted when the size of each part equals 2 elements, while the real application is parametric with respect to a threshold.
In the end, we need to reconstruct a sorted array from the already sorted parts of smaller size. The behavior of the workers is different: when they are instantiated, each of them starts searching for unsorted data in local and remote tuple spaces. When a worker finds a tuple with unsorted data, it checks whether the size of such data is below the predetermined threshold; in such a case, it computes and sends the result to the master, and then continues by searching for other unsorted data. Otherwise, the worker splits the array into two parts: one part is stored in its local tuple space while the other is processed.

Ocean model. The ocean model is a simulation of an enclosed body of water that was considered in [3]. The two-dimensional (2-D) surface of the water in the model is represented as a 2-D grid, and each cell of the grid represents one point of the water. The parameters of the model are the current velocity and the surface elevation, which are based on a given wind velocity and bathymetry. In order to parallelize the computation, the whole grid is divided into vertical panels (Figure 4), and each worker owns one panel and computes its parameters. The parts of the panels located on the border between them are colored in the figure. Since the surface of the water is continuous, the state of each point depends on the states of the points close to it; thus, the information about the bordering parts of the panels has to be taken into account.
The aim of the case study is to simulate the body of water during several time-steps. At each time-step, a worker recomputes the state (parameters) of its panel by exploiting the parameters of the adjacent panels. The roles of the master and the workers are similar to those in the previous case studies. In the application, the master instantiates the whole grid, divides it into parts and sends them to the workers; when all the iterations are completed, it collects all the parts of the grid. Each worker receives its share of the grid and, at each iteration, communicates with the workers owning adjacent grid parts in order to update and recompute the parameters of its model. When all the iterations are completed, each worker sends its data to the master.
Matrix multiplication. This case study is designed to multiply two square matrices of the same order. The multiplication algorithm [31] operates on the rows of two matrices A and B and puts the result in matrix C. The latter is obtained via subtasks, where each row is computed in parallel. At the j-th step of the i-th task, the element a_{i,j} of A is multiplied by all the elements of the j-th row of B; the obtained vector is added to the current i-th row of C. The computation stops when all subtasks terminate. Figure 5 shows how the first row of C is computed when A and B are 2 × 2 matrices. In the first step, the element a_{1,1} is multiplied first by b_{1,1} and then by b_{1,2} to obtain the first partial value of the first row. In the second step, the same operation is performed with a_{1,2}, b_{2,1} and b_{2,2}, and the obtained vector is added to the first row of C, thus obtaining its final value.
Initially, the master distributes the matrices A and B among the workers. In our case study we have considered two alternatives: (i) the rows of both A and B are spread uniformly, (ii) the rows of A are spread uniformly while B is entirely assigned to a single worker. This helped us in understanding how the behavior of the tuple space and its performances change when only the location of some tuples changes.
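Written sequentially, the row-oriented scheme described above looks as follows (in the case study, each task i would run as a parallel subtask owned by a worker):

```java
// Row-oriented multiplication: task i computes row i of C; at step j it adds
// a[i][j] * (row j of B) to the current row i of C, exactly as in the scheme
// described above. Sequential stand-in for the parallel subtasks.
class RowMultiply {
    public static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length;
        double[][] c = new double[n][n];
        for (int i = 0; i < n; i++) {         // task i: builds row i of C
            for (int j = 0; j < n; j++) {     // step j of task i
                double aij = a[i][j];
                for (int k = 0; k < n; k++)
                    c[i][k] += aij * b[j][k]; // add a_{i,j} * row j of B to row i of C
            }
        }
        return c;
    }
}
```

Note that, unlike the classic inner-product formulation, this scheme only ever reads whole rows of B, which is what makes the two row-distribution alternatives above meaningful.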
Implementing the case studies
All the chosen implementations are Java-based, Java being the most used language according to the TIOBE Programming Community Index [32] and [11]. Using the same language also allows for a fairer comparison of the time performance exhibited by the different tuple space systems, which could depend significantly on the chosen language.
Another key point of using the same programming language for all the implementations is that the case studies can be written as skeletons: the code remains the same for all the implementations, while only the invocations of the different library methods change. In order to implement the skeleton for each case study, we have developed a wrapper for each of the four tuple space systems we have chosen; each wrapper implements the interface ITupleSpace. This interface defines the basic operations of a tuple space (e.g. initialization/destruction, I/O operations and so on). Since the tuple space systems have different sets of operations on tuple spaces, we have chosen those operations which have the same semantics in all systems and can be unified. It is worth noticing that all I/O operations on a tuple space are wrapped and placed in the class TupleOperation.
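The wrapper idea can be sketched as follows; the method names and signatures are our guess, since the text only states that ITupleSpace covers initialization/destruction and the unified I/O operations, and the toy in-memory implementation merely stands in for a real system-specific wrapper.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the wrapper idea: a common interface hides system-specific APIs,
// so a case-study skeleton can be written once and instantiated with any of
// the four systems. Names and signatures are illustrative assumptions.
interface ITupleSpace {
    void init(String name);
    void destroy();
    void write(Object[] tuple);
    Object[] take(Object[] template); // 'in'-like destructive read
    Object[] read(Object[] template); // 'rd'-like non-destructive read
}

// Trivial in-memory implementation, standing in for e.g. a Tupleware or
// MozartSpaces wrapper, just to make the interface usable on its own.
class LocalSpace implements ITupleSpace {
    private final List<Object[]> tuples = new ArrayList<>();
    public void init(String name) { }
    public void destroy() { tuples.clear(); }
    public synchronized void write(Object[] tuple) { tuples.add(tuple); }
    public synchronized Object[] take(Object[] template) {
        for (int i = 0; i < tuples.size(); i++)
            if (matches(tuples.get(i), template)) return tuples.remove(i);
        return null;
    }
    public synchronized Object[] read(Object[] template) {
        for (Object[] t : tuples)
            if (matches(t, template)) return t;
        return null;
    }
    private static boolean matches(Object[] t, Object[] p) {
        if (t.length != p.length) return false;
        for (int i = 0; i < t.length; i++)
            if (p[i] != null && !p[i].equals(t[i])) return false;
        return true;
    }
}
```

A skeleton generic in `T extends ITupleSpace` can then run unchanged against any wrapper, which is exactly how the master code below abstracts from the underlying system.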
To show the principle of the skeleton, we take a look at the skeleton of the Password search case study. The master behavior is implemented by the class DistributedSearchMaster shown in Listing 1. The complete master code, along with the workers' code, can be found in Appendix A. Listing 1 contains just an excerpt of the code, reporting the salient parts of this case study. The class (Listing 1) is generic with respect to an object extending the class ITupleSpace, which is a wrapper/interface for the different tuple space systems.
The logic of the master and worker processes follows the description of the case study given above. The master process first initializes its local tuple space of the system given as a parameter of the class (lines 21-22). After that, it waits until all the workers are ready (line 25) and have loaded all the data they need, and starts logging the execution time (lines 26-29). Then the process creates the tasks for the workers and waits for the results (lines 37-41). When all the results have been gathered, the master notifies the workers that they can finish their work, stops counting the execution time and saves the profiling data (lines 44-53). Let us note that, thanks to the use of generics (e.g. lines 32-34), the master code abstracts away from how the different tuple space systems implement the operations on tuples.
Listing 1: Password search. Excerpt of the master process.

There is a difference in how the tuple space systems implement the search among distributed tuple spaces. Tupleware has a built-in operation with a notification mechanism: it searches in local and remote tuple spaces at once (i.e. in broadcast fashion) and then waits for the notification that the desired tuple has been found. Mimicking the behavior of this operation in the other tuple space systems requires continuously checking each tuple space until the required tuple is found.
Assessment Methodology
All the conducted experiments are parametric with respect to two values. The first one is the number of workers w, w ∈ {1, 5, 10, 15}. This parameter is used to test the scalability of the different implementations. The second parameter is application specific, but it aims at testing the implementations when the workload increases.
• Password search: we vary the number of entries in the database (10,000, 100,000, 1 million passwords) in which it is necessary to search for a password. This parameter directly affects the number of local entries each worker has. Moreover, for this case study the number of passwords to search for was fixed to 100.
• Sorting: we vary the size of the array to be sorted (100,000, 1 million, 10 million elements). In this case the number of elements does not correspond to the number of tuples, because parts of the array are transferred as arrays of smaller size.
• Ocean model: we vary the grid size (300, 600 and 1200), which is related to the computational size of the initial task.
• Matrix multiplication: we vary the order of a square matrix (50, 100).
Measured metrics.
For the measurement of the metrics we have created a profiler similar to the Clarkware Profiler†. The Clarkware Profiler calculates just the average time of a time series, while ours also calculates other statistics (e.g., standard deviation). Moreover, our profiler was also designed for analyzing tests carried out on more than one machine. For that reason, each process writes raw profiling data to a specific file; all files are then collected and used by dedicated software to calculate the required metrics.
We use a manual method of profiling: we insert the methods begin(label) and end(label) into the program code, surrounding the parts of the code we are interested in, to start and stop counting time respectively. The label is different for each metric, and it is possible to use several labels simultaneously. This sequence of actions can be repeated many times and, eventually, all the data are stored on disk for further analysis.
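A minimal profiler in the spirit described could look like the sketch below: begin(label)/end(label) pairs accumulate a time series per label, and mean and standard deviation are computed afterwards. The class and method names are illustrative, not the actual profiler's API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical mini-profiler: one time series per label, statistics on demand.
public class MiniProfiler {
    private final Map<String, Long> open = new HashMap<>();
    private final Map<String, List<Long>> series = new HashMap<>();

    public void begin(String label) { open.put(label, System.nanoTime()); }

    public void end(String label) {
        long dt = System.nanoTime() - open.remove(label);
        series.computeIfAbsent(label, k -> new ArrayList<>()).add(dt);
    }

    public static double mean(List<Long> xs) {
        double s = 0; for (long x : xs) s += x;
        return s / xs.size();
    }

    public static double stddev(List<Long> xs) {
        double m = mean(xs), s = 0;
        for (long x : xs) s += (x - m) * (x - m);
        return Math.sqrt(s / xs.size());
    }

    public static void main(String[] args) throws InterruptedException {
        MiniProfiler p = new MiniProfiler();
        for (int i = 0; i < 3; i++) { p.begin("write"); Thread.sleep(1); p.end("write"); }
        System.out.println(p.series.get("write").size()); // prints 3
    }
}
```

In the multi-machine setting described above, each process would dump its `series` map to a file, and the collected files would be merged offline.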
Each set of experiments has been conducted 10 times with randomly generated input, and we computed the average value and the standard deviation of each metric. To extensively compare the different implementations, we have collected the following measures:

Local writing time: time required to write one tuple into a local tuple space.
Local reading time: time required to read or take one tuple from a local tuple space using a template. This metric checks how fast the pattern matching works.
Remote writing time: time required to communicate with a remote tuple space and to perform write operation on it.
Remote reading time: time required to communicate with a remote tuple space and to perform read or take operation on it.
Search time: time required to search for a tuple in a set of remote tuple spaces.

Total time: total execution time. This time does not include the initialization of the tuple spaces.

† The profiler was written by Mike Clark; the source code is available on GitHub: https://github.com/akatkinson/Tupleware/tree/master/src/com/clarkware/profiler
Number of visited nodes: number of visited tuple spaces before a necessary tuple was found.
Experimental Results
Please notice that all the plots in the paper report the results of our experiments on a logarithmic scale. When describing the outcome, we have only used the plots that are most relevant to evidencing the differences between the four tuple space systems.
Password search. Figures 6-7 report the trend of the total execution time as the number of workers and the size of the considered database increase. In Figure 6 the size of the database is 100 thousand entries, while Figure 7 reports the case in which the database contains 1 million elements. From the plots, it is evident that GigaSpaces exhibits better performance than the other systems. Figure 8 depicts the local writing time for each implementation with different numbers of workers. As we can see, increasing the number of workers (which implies reducing the amount of local data to consider) decreases the local writing time. This is more evident for Tupleware, which really suffers when a big number of tuples (e.g., 1 million) is stored in a single local tuple space. The writing time of Klaim is the lowest among the systems and does not change significantly in any variation of the experiments. The local writing time of MozartSpaces remains almost the same when the number of workers increases; nonetheless, its local time is higher than that of the other systems, especially when the number of workers is equal to or greater than 10. The local reading time is shown in Figure 9, and Klaim is the one that exhibits the worst performance for searching in a local space: with just one worker, its local reading time is 10 times greater than that of Tupleware. This can be ascribed to the pattern matching mechanism of Klaim, which is less effective than the others. By increasing the number of workers the difference becomes less evident and approximately equal to the time of MozartSpaces, which does not change considerably but always remains much greater than the times of Tupleware and GigaSpaces. Since this case study requires little synchronization among workers, performance improves when the level of parallelism (the number of workers) increases.
To better understand the behaviors of Klaim and Tupleware (the only implementations for which the code is available) we can look at how their local tuple spaces are implemented. Klaim is based on a Vector, which provides very fast insertion with complexity O(1) when performed at the end of the vector, and slow lookup with complexity O(n). Tupleware uses a Hashtable as a container for tuples, but its use depends on specific types of templates that our skeleton implementation does not satisfy (namely, for Tupleware the first several fields of the template should contain values, which is not always the case for our skeleton). Therefore, in our case all the tuples with passwords are stored in one vector, meaning that the behavior is similar to that of Klaim. However, Tupleware inserts every new tuple at the beginning of the vector, which slows down writing, and uses a simplified comparison (based on string comparison) for lookup, which makes it faster. The search time is similar to the local reading time but also takes into account searching in remote tuple spaces. When considering just one worker, the search time is the same as the reading time in a local tuple space; however, when the number of workers increases, the search time of Tupleware and Klaim grows faster than that of GigaSpaces. Figure 10 shows that GigaSpaces and MozartSpaces are more sensitive to the number of tuples than to the number of accesses to the tuple space.
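The trade-off just described — constant-time append versus linear template lookup in a Vector-backed store — can be sketched as follows. This is our own illustration of the data-structure behavior, not code from either system.

```java
import java.util.Vector;

// Sketch of a Vector-backed tuple store: append at the end is O(1),
// while a template lookup must scan the vector linearly (O(n)).
public class VectorTupleStore {
    private final Vector<Object[]> tuples = new Vector<>();

    // Fast path: append at the end of the vector.
    void write(Object[] t) { tuples.add(t); }

    // Slow path: linear scan with wildcard (null) matching.
    Object[] read(Object[] tmpl) {
        for (Object[] t : tuples) {
            boolean ok = t.length == tmpl.length;
            for (int i = 0; ok && i < t.length; i++)
                ok = tmpl[i] == null || tmpl[i].equals(t[i]);
            if (ok) return t;
        }
        return null;
    }

    public static void main(String[] args) {
        VectorTupleStore s = new VectorTupleStore();
        for (int i = 0; i < 100_000; i++) s.write(new Object[]{"pwd", "p" + i});
        // Worst case: the match is the last tuple, so the whole vector is scanned.
        Object[] hit = s.read(new Object[]{"pwd", "p99999"});
        System.out.println(hit[1]); // prints p99999
    }
}
```

Inserting at the beginning of the vector instead (as the text says Tupleware does) shifts every existing element on each write, which is what makes its writing time visibly higher.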
Summing up, we can remark that the local tuple spaces of the four systems exhibit different performance depending on the operation performed on them: the writing time of Klaim is always significantly smaller than that of the others, while the pattern matching mechanism of Tupleware allows for faster local searching. The performance of MozartSpaces mostly depends on the number of involved workers: it exhibits average times for local operations when one worker is involved, while it shows the worst times with 15 workers.
Sorting. Figure 11 shows that GigaSpaces exhibits a significantly better execution time when the number of elements to sort is 1 million. As shown in Figure 12, when 10 million elements are considered and several workers are involved, Tupleware exhibits a more efficient parallelization and thus requires less time. For the experiments with more than 10 workers and an array size of 10 million, we could not get results for MozartSpaces because some data were lost during the execution, making it impossible to obtain the sorted array; this is why some data for MozartSpaces are missing in Figures 12, 13 and 14. This is caused by a race condition bug when two processes try to simultaneously write data to a third process, since we experienced this data loss when sorted sub-arrays were returned to the master. The other tuple space systems have not shown such a misbehavior. This case study is computation intensive but also requires an exchange of structured data and, although in the experiments a considerable part of the time is spent on sorting, we noticed that performance does not significantly improve when the number of workers increases. The performance of Klaim is visibly worse than the others even for one worker. In this case, the profiling of the Klaim application showed that a considerable amount of time was spent passing the initial data from the master to the workers. An inefficient implementation of data transmission seems to be the reason the total time of Klaim differs from the total time of Tupleware. By comparing Figures 8 and 13, we see that when the number of workers increases, GigaSpaces and Klaim suffer more from synchronization in the current case study than in the previous one. As shown in Figure 14, the search time directly depends on the number of workers and grows with it. Taking into account that Klaim and Tupleware spend more time accessing remote tuple spaces, GigaSpaces suffers more because of synchronization. Klaim has the same problem, but its inefficiency is masked by the data transmission cost.
Ocean model. This case study was chosen to examine the behavior of the tuple space systems when specific patterns of interaction come into play. Out of the four considered systems, only Tupleware has a method for reducing the number of visited nodes during a search operation, which helps in lowering the search time. Figure 15 depicts the number of visited nodes for different grid sizes and different numbers of workers (for this case study, in all figures we consider only 5, 10 and 15 workers, because with one worker the tuple space is generally not used). The curve depends weakly on the size of the grid for all systems and much more on the number of workers. Indeed, from Figure 15 we can appreciate that Tupleware performs a smaller number of node visits and that, when the number of workers increases, the difference is even more evident‡. The difference in the number of visited nodes does not significantly affect the total execution time for different values of the grid size (Figures 16-17), mostly because the case study requires many read operations from remote tuple spaces (Figure 18).
As shown in Figure 18, the time of remote operations varies across the tuple space systems. For this case study, we can neglect the time of the pattern matching and consider this time to be equal to the time of communication. For Klaim and Tupleware these times were similar and significantly greater than those of GigaSpaces and MozartSpaces. Klaim and Tupleware communications rely on TCP, and to reach any remote tuple space one needs to use exact addresses and ports. GigaSpaces, which has a centralized implementation, most likely does not use TCP for data exchange but relies on a more efficient memory-based approach. The communication time of MozartSpaces is in the middle (in the plots with logarithmic scale) but close to that of GigaSpaces in value: for GigaSpaces this time varies in the range of 0.0188 to 0.0597 ms, for MozartSpaces in the range of 2.0341 to 3.0108 ms, while for Tupleware and Klaim it exceeds 190 ms. Therefore, as mentioned before, GigaSpaces and MozartSpaces implement the read operation differently from Tupleware and Klaim, and their implementation is more effective when working on a single host. Figure 16 provides evidence of the effectiveness of Tupleware when its total execution time is compared with that of Klaim. Indeed, Klaim visits more nodes and spends more time on each read operation, and the difference increases when the grid size grows and more data have to be transmitted, as shown in Figure 17.

‡ In Figure 15, the curves for Klaim and GigaSpaces are overlapping and purple wins over blue.

Matrix multiplication. This case study mostly consists of searching for tuples in remote tuple spaces, which implies that the number of remote read operations is by far bigger than that of the other operations. Therefore, GigaSpaces and MozartSpaces outperform the other tuple space systems in total execution time (Figure 19). As discussed in Section 3.1, we consider two variants of this case study: one in which matrix B is uniformly distributed among the workers (as matrix A is), and one in which the whole matrix is assigned to one worker. In the following plots, solid lines correspond to the experiments with the uniform distribution and dashed lines correspond to the ones with the second type of distribution (we name the series with this kind of distribution with a label ending in B-1). Figure 20 depicts the average number of nodes that each worker needs to visit in order to find a tuple.
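The two distributions of matrix B can be summarized as simple ownership functions (our own hypothetical helpers, not the paper's code): under the uniform scheme, row i of B lives in the space of worker i mod w, while in the B-1 variant every row lives with a single worker, so every lookup targets one known space and no broadcast search is needed.

```java
public class RowDistribution {
    // Uniform distribution: rows of B are spread round-robin over w workers.
    static int uniformOwner(int row, int workers) {
        return row % workers;
    }

    // "B-1" distribution: the whole matrix B is assigned to one worker,
    // so every lookup goes to a known node.
    static int singleOwner(int row, int workers) {
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(uniformOwner(7, 5)); // prints 2
        System.out.println(singleOwner(7, 5));  // prints 0
    }
}
```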
When considering experiments with more than one worker, all tuple space systems except Tupleware demonstrate similar behavior: the total time almost coincides for both types of distribution. However, Tupleware always exhibits greater values for the uniform distribution, while for the second type of distribution its values are significantly lower. The second case reaffirms the results of the previous case study, because here all workers know where to search for the rows of matrix B almost from the very beginning, which reduces the amount of communication, directly affects the search time (Figure 21) and, in addition, implicitly leads to a lower remote reading time (Figure 22; the remote reading time is not displayed for one worker because only the local tuple space of the worker is used). In contrast, for the uniform distribution Tupleware performs worse because of the same mechanism that helped it in the previous case study: when it needs to iterate over all the rows one by one, it always starts checking from the tuple spaces that were already checked the previous time and that do not store the required rows. Therefore, every time it checks roughly all tuple spaces. As shown in Figure 19, the runs with the uniform distribution outperform (i.e., they show a lower search time than) the others, except the ones where Tupleware is used. This is most evident in the case of Klaim, where the difference in execution time is up to a factor of two. To explain this behavior we looked at the time logs of one of the experiments (matrix order 50, 5 workers), which consist of several files, one per worker (i.e., per Java thread), and paid attention to the search time, which mostly affects the execution time. The search time of each search operation performed during the execution of the case study is shown for Klaim in Figure 23 and for GigaSpaces in Figure 24 (these two figures are not in a logarithmic scale).
Every colored line represents one of the five worker threads and shows how the search time changes as the program executes. As we can see, although the search time of GigaSpaces is much smaller than that of Klaim, there is a specific regularity for both systems. The results of this case study are generally consistent with the previous ones: the remote operations of GigaSpaces and MozartSpaces are much faster and better fit applications with frequent inter-process communication; Tupleware continues to have an advantage in applications with a specific pattern of communication. At the same time, we revealed that in some cases this feature of Tupleware had a side effect that negatively affected its performance. We then substituted the part of Klaim responsible for sending and receiving tuples. It was based on Java IO, the package containing classes for data transmission over the network. For the renewed part, we have opted for Java NIO, non-blocking IO [29], which is a modern version of IO and in some cases allows for a more efficient use of resources. Java NIO is beneficial for programming applications that deal with many incoming connections. Moreover, for synchronization purposes, we used a more recent package (java.util.concurrent) instead of the synchronization methods of the previous generation.
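The kind of change described — one selector multiplexing many incoming connections instead of one blocking thread per socket — follows the standard Java NIO pattern. The skeleton below is our own generic sketch of that pattern, not the modified Klaim code.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

// Skeleton of the NIO acceptor pattern: a single Selector watches a
// non-blocking server channel (and, in a full server, all client channels).
public class NioAcceptorSketch {
    public static int setupAndCount() throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0));   // bind to any free port
        server.configureBlocking(false);         // mandatory before registering
        server.register(selector, SelectionKey.OP_ACCEPT);
        // A real event loop would now call selector.select() and dispatch on
        // isAcceptable()/isReadable() keys; here we only verify the setup.
        int registered = selector.keys().size(); // 1: the server channel itself
        server.close();
        selector.close();
        return registered;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(setupAndCount()); // prints 1
    }
}
```

With blocking Java IO, each accepted socket typically occupies a dedicated thread; with this pattern, one thread services all registered channels, which matches the resource-efficiency argument made above.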
To evaluate the effectiveness of the modified Klaim we tested it with the Matrix multiplication case study, since it depends on remote operations more than the other case studies and the benefits of the modification are clearer. We just show the results for the case in which matrix B is uniformly distributed among the workers, since the other case shows similar results. As shown in Figure 25, the remote writing time decreased significantly. The remote reading time of the modified Klaim is close to that of MozartSpaces (Figure 26) and demonstrates similar behavior. In Figure 26 the remote reading time for the runs with one worker is not shown, since in this case just the local tuple space of the worker is used. The remote reading operations mostly determine the total time, which is why the curves of the modified Klaim and MozartSpaces in Figure 27 are similar. Therefore, we have modified Klaim in order to decrease the time of inter-process communication, and our changes provide a significantly lower time of remote operations and lead to a better performance of the tuple space system.

Experiments with several host machines. The results of the previous experiments, which were conducted using only one host machine, provide evidence that GigaSpaces has a more efficient implementation of communication, which is very beneficial when many operations on remote tuple spaces are used. Since we do not have access to its source code, we conjecture that GigaSpaces uses an efficient inter-process communication mechanism and does not resort to socket communication as the other implementations do. To check whether GigaSpaces remains so efficient when networking has to be used, we have created a network of several identical host machines using Oracle VM VirtualBox§. The configuration of the hosts differs from that of the previous experiments: 1 CPU, 1 GB RAM and Linux Mint 17.3 Xfce installed. The master and each worker process were launched on their own hosts.
Each set of experiments was conducted 10 times; afterwards the results of the executions were collected and analyzed.
We conducted experiments for two case studies: Sorting and Matrix multiplication (the implementation with the uniform distribution). We just focused on the remote reading time, since it is the most frequently used operation in these two case studies. In all the tests of the networked version, the remote reading time significantly exceeds the time measured in the one-host version, showing that in the one-host case GigaSpaces does not use network protocols. In addition, by comparing the two case studies we can notice the following. First, we compared the ratio of the remote reading time of the networked version to that of the one-host version, and for the two case studies the ratio was completely different: for Sorting it is around 20 (Table II), for Matrix multiplication it is around 100 (Table III). This discrepancy is related to the difference in type and size of the transmitted data. Second, the use of several separate hosts affects the total time differently: for instance, considering Sorting, it leads to an acceleration of the execution (Table II).
CONCLUSIONS
Distributed computing is getting increasingly pervasive, with demands from various application domains and highly diverse underlying architectures, from the multitude of tiny things to very large cloud-based systems. Tuple spaces certainly offer valuable tools and methodologies to help develop scalable distributed applications and systems. This paper has first surveyed and evaluated a number of tuple space systems, and has then analyzed four different systems more closely. We considered GigaSpaces, because it is one of the few currently used commercial products; Klaim, because it guarantees code mobility and flexible manipulation of tuple spaces; MozartSpaces, as the most recent implementation that satisfies the main criteria we consider essential for tuple-based programming; and Tupleware, because it is the one that turned out to be the best in our initial evaluation. We have then compared the four systems by evaluating their performance over four case studies: one testing the performance of local tuple spaces, a communication-intensive one, a computation-intensive one, and one demanding a specific communication pattern.
Our work follows the lines of [34], but we have chosen more recent implementations and conducted more extensive experiments. On purpose, we ignored implementations of systems that have been directly inspired by those considered in the paper. Thus, we did not consider jRESP¶, a Java runtime environment that provides a tool for developing autonomic and adaptive systems according to the SCEL approach [16,17].
After analyzing the outcome of the experiments, it became clear which aspects of a tuple space system deserve specific attention in order to obtain efficient implementations. The critical choices concern inter-process and inter-machine communication and local tuple space management.
The first aspect is related to data transmission and is influenced by the choice of algorithms that reduce communication. For instance, the commercial system GigaSpaces differs from the other systems we considered in the technique used for data exchange, exploiting memory-based inter-process communication, which guarantees a considerably smaller data access time. Therefore, the use of this mechanism on a single machine does increase efficiency. However, when working with networked machines, it is not possible to use the same mechanism, and one needs to resort to other approaches (e.g., the Tupleware one) to reduce inter-machine communication and to have more effective communications. To compare GigaSpaces with the other tuple space systems under similar conditions, and thus to check whether it remains efficient also in the case of distributed computing, we have carried out experiments using a network where worker and master processes are hosted on different machines. The results of these experiments show that, although the remote operations are much slower, the overall performance of GigaSpaces remains high.
The second aspect, concerned with the implementation of local tuple spaces, is heavily influenced by the data structure chosen to represent tuples, the corresponding data matching algorithms, and the lock mechanisms used to prevent conflicts when accessing the tuple space. In our experiments the performance of the different operations on tuple spaces varies considerably; for example, Klaim provides fast writing and slow reading, whereas Tupleware exhibits a high writing time and a fast reading time. The performance of a tuple space system also depends on the chosen system architecture, which determines the kind of interaction between its components. Indeed, it is evident that all these issues should be tackled together, because they are closely interdependent. Another interesting experiment would be to use one of the classical database systems that offer fast I/O operations to model tuple spaces and their operations, in order to assess their performance over our case studies. The same experiment could be carried out with modern no-sql databases, such as Redis or MongoDB**.
Together with the experiments, we have started modifying the implementation of one of the considered systems, namely Klaim. We have focused on the part which evidently damages its overall performance, i.e., the one concerned with data transmission over the network. Our experiments have shown an interesting outcome. The time of remote writing and reading becomes comparable to that of MozartSpaces and is significantly shorter than that required by the previous implementation of Klaim. Indeed, the modified version allows much faster execution of the tasks where many inter-process communications are considered.
We plan to use the results of this work as the basis for designing an efficient tuple space system which offers programmers the possibility of selecting (e.g. via a dashboard) the desired features of the tuple space according to the specific application. In this way, one could envisage a distributed middleware with different tuple spaces implementations each of them targeted to specific classes of systems and devised to guarantee the most efficient execution. The set of configuration options will be a key factor of the work. One of such options, that we consider important for improving performances is data replication. In this respect, we plan to exploit the results of RepliKlaim [2] which enriched Klaim with primitives for replica-aware coordination. Indeed, we will use the current implementation of Klaim as a starting point for the re-engineering of tuple space middleware.
A. PASSWORD SEARCH LISTINGS
Listing 1 shows that the worker process also starts its local tuple space and checks the connection to the master and the other workers (lines 27-31). Then the process loads predefined data into its local space (line 38) and, taking tasks from the master, begins to search for the necessary tuples in its local and remote tuple spaces (lines 58-78). At the end, when all tasks are accomplished, the worker saves the profiling data (line 76).
Fulminant necrotizing streptococcal myositis with dramatic outcome – a rare case report
Necrotizing myositis represents a rare, aggressive form of bacteria-induced necrotizing soft tissue infection. We present a fulminant case of a 44-year-old patient with a necrotizing soft tissue infection and a history of rheumatoid arthritis, transferred to our service, the Cluj-Napoca Emergency County Hospital, from a local hospital where he had been admitted two days before with chills and light-headedness after an accidental minor blunt trauma in the right thigh region. After admission to our hospital and a first assessment, broad spectrum antibiotherapy with Meropenem, Vancomycin and Metronidazole was started, along with surgical debridement. The evolution was fulminant, with rapid development of multiple organ dysfunction syndrome; the patient was therefore transferred to the intensive care unit and intubated, and volemic resuscitation and vasopressor therapy were started. The blood culture was positive for group A beta-hemolytic streptococcus (GAS) and high dose Penicillin G was added to the therapeutic scheme. Despite all efforts, the patient developed disseminated intravascular coagulation syndrome and died within the next hours. The clinical picture, together with the findings from the autopsy, was suggestive of a streptococcal toxic shock syndrome developed as a complication of GAS-induced necrotizing myositis.
Introduction
Necrotizing myositis represents a fulminant, life-threatening form of necrotizing soft tissue infection (NSTI) that involves the fascia, subcutaneous tissue and muscle. It is typically associated with the presence of group A beta-hemolytic streptococcus (GAS, Streptococcus pyogenes), but also with other 'flesh-eating bacteria' such as Peptostreptococcus spp., Fusobacterium spp., Bacteroides spp., and Enterobacterales [1]. Clinically it is characterized by fast, massive tissue destruction and signs of systemic toxicity. Bacteria cross the skin barrier through small entry points caused by scratches and punctures that can occur, even without being noticed, during daily activities. Thus, the most commonly affected parts are usually the lower limbs.
The initial signs are often non-specific, with minimal inflammatory signs; hence a high level of clinical suspicion must be in place to diagnose early stages of the disease and to boost survival chances. In general, patients have predisposing conditions like diabetes mellitus or various forms of immunodeficiency, acquired or iatrogenic. The most common clinical presentations of these patients are septicemia, cellulitis or abscess. The standard of care is emergency surgical intervention with massive debridement of all necrotic tissues and broad spectrum antibiotherapy. Mortality rates are high: currently, even with the most recent advances in medicine, mortality is around 20%, and it approaches 100% in the absence of immediate medical intervention [2,3].
Case report
A 44-year-old Caucasian male with a previous history of rheumatoid arthritis, for which he was receiving immunosuppressive treatment with Methotrexate and Tocilizumab, presented to a local rheumatology ward with chills and light-headedness. Two days prior to the hospital presentation the patient admitted having suffered an accidental blunt trauma in the right thigh region while working in the yard. There were apparently no skin breaches or subcutaneous crepitus at the trauma site. Soon the general state altered, with high fever, local muscle pain and right lower limb edema accompanied by blister formation and the development of a bluish-black coloration in the cutaneous area corresponding to the trauma. Therefore, the patient was urgently transferred from the local hospital to our facility at the Cluj-Napoca Emergency County Hospital. The initial assessment suggested a necrotizing soft tissue infection, which led to the urgent initiation of mixed pharmacological and surgical treatment: broad spectrum antibiotherapy with Meropenem, Vancomycin and Metronidazole was started, along with three decompressive fasciotomies. During the surgical assessment the superficial muscle fascia seemed viable and there were no fetid discharges, but the right and medial vastus femoral muscles had incipient necrotic areas, for which debridement was performed. The patient rapidly became obnubilated and dyspneic, with blood gas analysis showing hypoxemia and marked metabolic acidosis with high lactate levels, which required immediate orotracheal intubation. The patient started to develop multiple organ dysfunction syndrome; he became hypotensive (BP=60/34 mmHg) and tachycardic (160 beats/minute). He was transferred to the intensive care unit, where aggressive volemic resuscitation was started along with vasopressor therapy.
The patient was febrile (t=38.3°C), severely pancytopenic (L=1530/µL, RBC=1.8×10^6/µL, Tr=15000/µL), hypoproteinemic (total proteins=2.7 g/dL), presenting low serum fibrinogen, spontaneously prolonged coagulation (INR=3), marked renal dysfunction (creatinine=4.25 mg/dL, BUN=120 mg/dL) and increased markers of inflammation (CRP=13 mg/dL). Severe rhabdomyolysis was present (CK=22178 U/L, almost 130 times above the normal value), along with increasing levels of transaminases (ALAT=1715 U/L, ASAT=6381 U/L), serum bilirubin (total bilirubin=4.58 mg/dL) and lactate dehydrogenase (LDH=3969 U/L). A constant oozing hemorrhage could be noted at the site of the surgical wound. Blood transfusions comprising erythrocyte and platelet concentrates, fresh frozen plasma and cryoprecipitate, as well as administration of prothrombin complex and activated factor VII, were undertaken considering the presence of disseminated intravascular coagulation syndrome with profound bleeding. Intermittent hemodiafiltration was instituted. Both blood and tissue cultures grew GAS, so high dose Penicillin G was added to the therapeutic plan. Local surgical hemostasis was attempted in the first instance, but soon afterwards the patient returned to the operating theatre given the rapid extension of tissue necrosis. Extensive debridement comprising skin, subcutaneous fatty tissue, superficial fascia, thigh muscles (leaving the femoral periosteum exposed), scrotum and perineal tissues was undertaken. Despite vigorous medical efforts the patient's death occurred less than 48 hours after arrival in our facility. Autopsy was performed and findings included multiple elevated vesicular lesions on the skin of the arms and left thigh (Figure 1). The scrotum was removed and the testicles exposed due to the surgical intervention. Inflammatory lymphadenopathy was noted in the femoral triangle (Figure 2).
The right hip and thigh were severely affected by the progressive disease and due to extensive debridement of the skin and subjacent tissues the muscles were exposed (Figure 3).
Histological examination of the surgical specimens yielded the presence of extensive necrosis including hypodermic fat, fascia and muscle tissue, with widespread vascular thrombosis, bacterial colonies and gas bubbles distorting normal tissue architecture (Figure 4).
Discussion
Necrotizing myositis represents a life-threatening condition, requiring a high level of suspicion in order to reach a quick diagnosis, which is the sine qua non condition for improving prognosis [4]. The main etiologic agent described in the literature is GAS (Streptococcus pyogenes), although other pathogens like Clostridium, Staphylococcus, Vibrio, Aeromonas and Pasteurella have been identified [5,6]. Major risk factors associated with invasiveness are minor trauma, use of nonsteroidal anti-inflammatory drugs, recent surgery, obesity, poor socioeconomic status, malignancy and immunosuppression [7]. In the particular case of our patient, both minor local trauma and pharmacological immunosuppression could be identified as known risk factors for NSTI. The patient's treatment consisted of a combination of Methotrexate and Tocilizumab, each of which has been previously described as a risk factor for NSTI [8,9]. In our case we speculate that this combination of two immune system suppressants could be the favoring factor for the fulminant evolution leading to death. Pathogenesis is incompletely understood and implies complex interactions between the human host defense mechanisms and specific bacterial virulence factors; of these we mention the cell surface M protein and the streptococcal exotoxins [10,11]. The increasing prevalence of the M1 and M3 subtypes seems to be responsible for the increasing number of severe invasive streptococcal infections in recent years [12,13]. These strains have also been more often associated with the streptococcal toxic shock syndrome (STSS). STSS represents a feared complication of invasive GAS infections, occurring in up to one third of cases [14]. It is defined by hypotension and multiple organ failure, with renal impairment, coagulopathy, liver and respiratory failure [7]. The streptococcal exotoxins are at the center of its pathogenesis, inducing the release of inflammatory cytokines that lead to tissue damage and increased capillary leak.
Treatment consists of fluid management, respiratory ventilation, vasopressor support and renal replacement therapy, along with antibiotics, drainage of the source of infection and adjunctive therapies such as intravenous immunoglobulins. Despite aggressive efforts, STSS is characterized by extremely high mortality rates.
Laboratory findings are nonspecific, and positive diagnosis relies on quick, thorough surgical examination correlated with the microbiological results from tissue or secretions and the histopathological examination of the necrotic debridement specimens [15]. Imaging approaches are also of little diagnostic value, as many findings are once again nonspecific [16]. The therapeutic pillars consist of aggressive surgical debridement until healthy tissue is reached, empiric broad-spectrum antibiotic therapy later tailored to culture results, and hemodynamic resuscitation. Multiple interventions might be needed to control the spread of infection, which may ultimately require limb amputation, a last-resort option that should not be overlooked. Antibiotic therapy should cover the spectrum of causative microorganisms: Streptococcus pyogenes, Staphylococcus aureus, Methicillin-resistant Staphylococcus aureus, Gram-negative aerobes and anaerobes [17]. Two adjuvant therapies currently used are hyperbaric oxygen therapy (HBOT) and intravenous immunoglobulins (IVIG). HBOT aims to improve tissue oxygenation, thus preventing an anaerobic environment, but has failed to prove a mortality benefit [18]. IVIGs are favored in NSTIs associated with toxic shock syndrome in order to neutralize bacterial exotoxins and limit systemic inflammation. If the patient survives, plastic surgery with skin grafts is needed in an attempt to restore functionality and esthetics.
Conclusion
We described a fulminant case of necrotizing myositis in a young but pharmacologically immunosuppressed patient, underlining the extreme aggressiveness of soft-tissue streptococcal infections. Integrating the clinical, paraclinical and autopsy findings, we conclude that this case was a GAS-induced necrotizing myositis complicated by STSS, which was responsible for the systemic manifestations. This entity should be considered by surgeons, dermatologists and emergency medicine professionals, especially in patients undergoing immunosuppressive treatment. Furthermore, double immunosuppression, as seen in this case, should be evaluated with care, as it may expose patients to additional risk. Clinicians must be aware of this entity and carefully screen potential patients for it to ensure the best outcomes for those affected. In these diseases, time is both flesh and life.
Elaboration and characterization of environmental properties of TiO2 plasma sprayed coatings
Titanium dioxide (TiO2) is an attractive material for numerous technological applications, such as photocatalysis. Under certain conditions these materials can purify air and water by decomposing and removing harmful substances such as volatile organic compounds (VOC), benzene compounds, NOx, SO2, etc. Our work focused on the elaboration and evaluation of the environmental properties of titanium dioxide coatings produced by plasma spray techniques. Plasma spraying consists of injecting the powder of the material to be sprayed into an enthalpic source (a plasma). The molten powder is transported and accelerated by the plasma-producing gas flow and flattened onto the target substrate, where the particles solidify at high speed, forming the coating. The advantages of thermal spraying are the stability, durability, adherence and cohesion of the coating. For this study, the initial powder material was anatase TiO2. The photocatalyst coatings were produced by several thermal spray methods: gas flame, APS (atmospheric plasma spraying), VPS (vacuum plasma spraying) and HVOF (high velocity oxygen fuel). The microstructures of the deposits, as a function of the coating process, were analysed by optical microscopy, scanning electron microscopy and X-ray diffraction. To validate these surfaces for their environmental functionality, we used a control test process for the photocatalytic effectiveness with respect to nitrogen oxides, for which an original test chamber was developed. Ultraviolet rays irradiated the coating specimens and the efficiency of NOx elimination was monitored using a gas analyser. We studied the photocatalytic properties of the different coatings as a function of various parameters (porosity, thickness, anatase/rutile ratio).
INTRODUCTION
Environmental pollution and destruction on a global scale have drawn attention to the need for new, safe and clean chemical technologies and processes. The reduction of pollutants in our environment is one of the most ambitious challenges for the scientific world. Among the strong environmentally friendly contenders are photocatalysts [1,2], which can operate at room temperature in a cleaner and safer manner. The most important photocatalyst is titanium dioxide (TiO2) [3-5]. It is biologically and chemically inert, photostable, non-toxic and inexpensive, and allows the decomposition of toxic organic compounds in water and of harmful gases in the atmosphere.
Our work focused on the elaboration and the evaluation of the environmental properties of titanium dioxide coatings by thermal spraying technique.
MATERIALS AND EXPERIMENTAL TECHNIQUES
2.1. TiO2 powder. TiO2 presents three crystalline phases: rutile, the most stable phase; anatase, which transforms into the rutile structure on annealing above 625 K; and the brookite phase. Both the anatase and rutile phases take part in the photocatalytic reaction, but anatase provides better photocatalytic properties. Several results in the literature [6] show that TiO2 may contain both anatase and rutile, but the relationship between the two phases and their photocatalytic effects is not yet well defined.
Our laboratory allows the elaboration of powders by spray-drying processes. One of the main interests of the spray-drying process is the possibility to change the architectural features of the powder. The anatase TiO2 powder, elaborated in the framework of N. Keller's thesis, presents spherical particles and characteristics that permit its use in the thermal spraying process. The size distribution is +10-44 µm. Figure 1 shows the shapes of the anatase powder observed by scanning electron microscopy (SEM). Figure 2 presents the X-ray diffraction pattern of the powder, showing the single anatase crystalline structure. In plasma spraying (Figure 3), an inert gas (generally argon or nitrogen) enters a direct-current arc between a tungsten cathode and a copper anode that makes up the nozzle and becomes a thermal plasma (by ionisation and heating). The temperature of the plasma just outside the nozzle exit is about 10000 K. The powder, suspended in a carrier gas, is injected into the plasma, where it is melted and accelerated to a velocity that can reach 300 m/s. A small amount of a secondary gas such as hydrogen or helium is mixed with the primary plasma gas to increase the thermal energy or the conductivity of the plasma.
In flame spraying (Figure 4), a combustion reaction between air or oxygen and a fuel (e.g. acetylene, propane, hydrogen) melts and accelerates the particles. The flame temperature, around 3000 K, is lower than the plasma temperature, and the particle velocity reaches 100 m/s. The deposit is built up by the successive piling-up of individual flattened particles, or splats, resulting in a lamellar structure (Figure 5). The sprayed coatings show some surface particularities such as porosity, interlamellar joints, cracks, oxides and unmelted particles. The advantages of thermal spraying are the cleanliness, stability, durability, adherence and cohesion of the sprayed coatings.
Experimental parameters.
In APS, the spray powder is injected by an argon stream into different plasma gas mixtures based on the Ar-H2-He system. Table 1 presents the different experimental parameters used.
Using hydrogen in the plasma jets yields very energetic plasmas; the thermal exchange between the plasma plume and the particles is very high, so the particles can easily be melted and a higher spraying efficiency is obtained. When helium is added to the plasma gas, the particle velocity is higher but the thermal exchange is reduced. During spraying, the specimens were cooled with compressed air and CO2. In flame spraying, the combustion reaction was performed between O2 (50 NL/min), air (60 NL/min) and acetylene (20 NL/min). The spray distance was fixed at 150 mm and the coating was cooled with compressed air.
In all cases, stainless steel plates (70 × 25 × 2 mm) were used as the substrate material.
Coating characterization.
The morphology of the sprayed coatings was observed by optical microscopy. Figure 6 shows the microstructure of the coatings for conditions B and C.
The density of the titanium dioxide coatings exhibits a strong dependence on the spraying conditions. Coating C (APS Ar: 40, H2: 3, He: 40) shows a higher porosity (62%), while for the other coatings the porosity is around 15%.
The crystalline structure of the coatings was determined by X-ray diffraction (XRD). The main XRD patterns (Figure 7) show that all coatings consist of a mixture of rutile and anatase.
The anatase concentration depends on the experimental parameters. For the APS process, the anatase content is about 20-25%. The highest anatase concentration is obtained for coating C, where a low hydrogen flow rate and a high helium flow rate were used. The anatase content of the flame-sprayed coating is less than that of the plasma-sprayed coatings.
Photocatalytical test for NOx removal.
In the framework of L. Toma's thesis, we developed a test chamber to verify the efficiency of the coatings for NOx removal. The experimental setup can be divided into three parts: reaction chamber, environmental chamber and instrumentation (Figure 8).
In the reaction chamber, NOx (NO and NO2) are produced by the chemical reaction between copper powder (size distribution 30-55 µm) and a dilute solution of nitric acid. The NOx are sent by a peristaltic pump into the environmental chamber, which allows, at constant temperature and pressure, a NOx volume concentration between 1 and 2 ppm to be obtained (1 ppm NO ≈ 1.24 mg/m3, 1 ppm NO2 ≈ 2 mg/m3). A homogenization fan ensures an even distribution of NOx in the environmental chamber. Inside the chamber is placed the photocatalytic reactor (a polycarbonate box, 10 × 10 × 5 cm), on which a Plexiglas window is fixed to let through the light of a daylight lamp with a UV fraction of 30% UVA and 4% UVB in its spectrum. After passing through the photoreactor, the NOx are sent to a NOx chemiluminescence analyser.
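As a side note, the ppm-to-mass conversions quoted above follow from ideal-gas behaviour. The short Python sketch below (the helper name is our own, not from the paper) reproduces them assuming 25 °C and 1 atm, where one mole of gas occupies about 24.45 L:

```python
# Hedged sketch of the ppm -> mg/m^3 conversion implied in the text
# (1 ppm NO ~ 1.24 mg/m^3, 1 ppm NO2 ~ 2 mg/m^3), assuming ideal-gas
# behaviour at 25 degC and 1 atm (molar volume ~ 24.45 L/mol).

MOLAR_MASS = {"NO": 30.01, "NO2": 46.01}  # g/mol
MOLAR_VOLUME_25C = 24.45                  # L/mol at 25 degC, 1 atm

def ppm_to_mg_per_m3(species: str, ppm: float) -> float:
    """Convert a volume mixing ratio (ppm) to a mass concentration (mg/m^3)."""
    return ppm * MOLAR_MASS[species] / MOLAR_VOLUME_25C

print(round(ppm_to_mg_per_m3("NO", 1.0), 2))   # ~1.23, matching the quoted ~1.24 mg/m3
print(round(ppm_to_mg_per_m3("NO2", 1.0), 2))  # ~1.88, close to the quoted ~2 mg/m3
```

The small differences from the quoted values come only from rounding and from the reference temperature chosen for the molar volume.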
In the environmental chamber, the NOx concentration decreases according to an observed kinetics. When the concentration is stable, the photoreactor containing the TiO2 photocatalyst (powder or coating) is placed in the chamber and crossed by the NOx flow (flow rate 1.8 NL/min); the gas is then analysed by the NOx chemiluminescence analyser. The concentration is followed continuously, and after 10 to 20 minutes the lamp is turned on. Photocatalysis begins immediately and the variation of the NOx concentration is measured.
3.2. Photocatalytic properties - preliminary results. We present only preliminary results concerning the photocatalytic properties of the thermally sprayed coatings. Some experimental observations can be noted.
When we turned on the lamp, we observed that the NO concentration decreased rapidly for a few minutes; then the NO decrease slowed over time (Figure 9). When we switched the lamp off, the NO concentration increased. After the photocatalysis exposure (with the lamp turned off), the NO concentration was lower than the concentration given by the kinetic decrease of NOx without photocatalysis. This observation supports the activity of TiO2 as a photocatalyst. During photocatalysis, a weak increase in the NO2 concentration was observed, which could be explained by the oxidation of NO to NO2 on the TiO2 surface.
The photocatalytic tests were performed on the anatase powder and on the deposited coatings (Figure 10). For the test, 0.4 g of anatase powder with a specific surface of around 10 m2/g was used. The anatase powder shows better photocatalytic activity than the sprayed coatings. This can be explained by the coatings presenting a smaller reactive surface and a lower anatase content than the powder. Moreover, the coating with the higher porosity is better for NOx removal.
CONCLUSIONS AND FUTURE WORK
According to the preliminary results we have obtained, some conclusions can be drawn. The crystalline structure of the coatings is an important parameter in photocatalysis. By varying the thermal spray conditions, we can obtain anatase-rich TiO2 coatings for environmental applications. The porosity of the coating is also considered a key parameter in photocatalytic decomposition.
One task remains: to optimise the deposition technique to obtain better structural characteristics (a higher anatase content and porosity) in order to increase the reactive surfaces and the photocatalytic efficiency.
Modifications of the TiO2 matrix will be realized by doping with different oxides, such as Fe2O3, in order to shift the photocatalytic sensitivity closer towards visible light.
Once the deposits are optimised for NOx removal, we will try to evaluate their effectiveness for other pollutants, in particular the decomposition of volatile organic compounds (VOC).
Figure 5. Lamellar structure of the sprayed coating.
Figure 10. NO reduction on anatase and coating.
Table 1. Parameters used for APS spraying of the TiO2 anatase powder.
Figure 2. XRD of the TiO2 anatase powder.
Disjoint connectivity of wireless sensor network
Current research on wireless sensor networks (WSNs) is not restricted to routing, protocol construction, mobile-node dynamics and network infrastructure; it has also extended to the geometric level, drawing on computational and dynamic geometry as well as graph theory. In this paper we present recognition features of a network aimed at solving the problem of reconstructing a disconnected network by connecting its disconnected chains. We consider the geometric properties of nodes randomly deviated from a unit grid. The number of chains and the number of nodes in each chain are calculated, together with the average number of connections per node for the total network and for the longest chain. Histograms of the number of chains and of the number of nodes in each chain are used to show the fragmentation of the network. The algorithm includes a method to translate the adjacency matrix into a chain matrix and vice versa, to check the agreement of the initial case with the results. The chain diversity and the average connections per node, for the longest connected chain and for the total network, are drawn as bar charts and interpreted.
INTRODUCTION
WSNs are one of the active and operational research subjects. Beyond the extended attention to their applications [1,2], many levels of study are considered, such as infrastructure technology, electronics, microprocessors, digital signal processors (DSP), communications, battery production technology and other power sources [3-6]. Additional subjects involved are network management, routing and the types of protocols applied. Estimating the capacity of transmitted data, network lifetime, maintenance and fault tolerance are topics with an important role in studying and implementing WSNs [7-9].
The network infrastructure includes the method by which nodes are distributed over the region of interest (RoI) and, in some cases, access to the exact coordinates of each node (in two or three dimensions). Knowing the exact node coordinates plays an important role, especially in applications such as detecting a fire in a forest or a leak in a petroleum pipeline. The mechanism of allocating node positions is called localization, while applying statistical information about node positions and the number of connections between nodes can give approximate information about the efficiency and capacity of the network. The way nodes are deployed across the RoI is an important factor for the overall performance of the network and affects connectivity, coverage, routing and packet traffic, as well as the capacity of the node system. The geometric point of view on WSNs is as important as other lines of study; most scientists and engineers working on WSNs give network geometry little consideration, either reducing the whole routing problem to topology and tree connections between nodes or relying on statistical information, leaving geometry as a loose end even when node position is relevant to the application [10-12]. Network geometry is connected to graph theory and computational geometry; these branches of mathematics lead to dynamic geometry, which will bring network geometry into new fields and let it interact with more general aspects and concepts in both directions [13,14]. Localization is a keystone for solving network problems. Ubiquitous software is applied for WSN simulation, such as OPNET, C++ and MATLAB programs, working at different levels of complexity. Estimating packet-transfer capacity under different protocol structures and a variety of node-connection methods is one such level.
Simulating the details of the electronic block-diagram circuit defining each WSN node, reading communication patterns and transmission capability, and then the time response and synchronization of all of these, is another level [15,16].
Prime topics in WSNs are connectivity and coverage. Connectivity concerns determining the connections between nodes (homogeneous or heterogeneous) and finding disjoint chains or isolated nodes. Calculating area coverage is a complementary topic to connectivity [6,17] and a way to assess the efficiency of the WSN; calculating area coverage requires discriminating the chains and isolated nodes, which is the topic of this paper, and then finding the area coverage of each.
In this paper we estimate the number of chains (nodes connected to each other but disjoint from the rest of the network), the number of nodes in each chain and the average connections in the longest chain, and compare the latter with the average connections over all nodes in the network. This is done for different numbers of nodes, different densities and a variety of maximum transmission distances. In many years of research we did not find any independent attempt to determine the geometric properties of a WSN. One study involving chains in a network is an algorithm used to predict efficient routing; this algorithm recognizes chains with a predefined number of nodes, selects a suitable node within each chain as a prime node, and uses it as the node connected to a main node or to a sink node, simplifying routing techniques for the network [18,19]. The number of chains and the length of each chain give important parameters for evaluating the properties of a network, and are a step toward algorithms that connect these chains in a proper way; estimating the average connections per node in a chain and in the total graph also indicates properties of the network [20]. If many chains are detected and the longest chain contains only a small fraction of the total nodes (diversity close to one), the network has poor efficiency and cannot be used for communication tasks, while a diversity near zero indicates that more nodes are connected to the main chain, so the transmission rate will be better. If the network is disconnected, a suitable technique may be used, such as network control, increasing the transmission radius, or repositioning mobile nodes to establish connections to fragmented or isolated parts; the latter two methods are actually used [21].
Proposed algorithm
To find the longest chain in the connectivity graph of a simulated wireless sensor network, an algorithm is written that simulates a random deployment of nodes and finds the connectivity graph within the maximum transmission radius; a new method then finds the connected chains of nodes and estimates the longest one. The algorithm is also used to obtain statistical results for the connected nodes in a randomly deployed WSN.
The details of the algorithm are explained in the following steps:
Step 1: Estimate an arbitrary coordinate matrix XY0: (a) set uniform-grid x and y coordinates with a predefined grid step; (b) add normally distributed random deviations to the x and y coordinates, with zero mean and a deviation of 1 or 2 grid steps.
Step 2: Find the connected matrix by calculating the distance between nodes and connecting them (a_ij = 1) if the distance d_ij is less than the maximum transmission radius R0 [22].
Step 3: Find the connected-chain matrix by constructing, for each node, the sequence of nodes connected to it; these groups of nodes may be connected to each other and form a chain. Part of a MATLAB m-file generates the chain matrix from the connected matrix [23].
Step 4: Optimize the connected-group matrix by checking each sequence against the previous ones; if any node is common, combine the two sequences without repeating nodes.
Step 5: Change the maximum transmission radius R0 and repeat from Step 2.
Step 8: Calculate:
diversity factor = 1 - (no. of nodes in longest chain) / (no. of nodes in total graph)   (1)
average connections in graph = (no. of connections in network) / (no. of nodes in total network)   (2)
average connections in longest chain = (no. of connections in longest chain) / (no. of nodes in longest chain)   (3)
Step 9: Use Microsoft Excel to collect the diversity-factor and average-connection data for the total network and for the longest chain, and draw bar charts for each collected case with different maximum transmission radii and deviations.
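The steps above can be sketched in code. The following Python fragment (the paper's own implementation is in MATLAB m-files; the function names, grid size, seed and deviation value here are illustrative assumptions) realizes Steps 1-4 as a connected-components search on the adjacency matrix and evaluates Eq. (1):

```python
# Hedged Python sketch of Steps 1-4 and Eq. (1): nodes are placed on a unit
# grid with Gaussian jitter, two nodes are linked when their distance is below
# the maximum transmission radius R0, chains are the connected components of
# the resulting graph, and the diversity factor compares the longest chain to
# the whole network. All parameter values below are illustrative.
import math
import random
from collections import deque

def deploy(nx, ny, sigma, seed=0):
    """Unit-grid node coordinates with normal deviation sigma (Step 1)."""
    rng = random.Random(seed)
    return [(i + rng.gauss(0, sigma), j + rng.gauss(0, sigma))
            for i in range(nx) for j in range(ny)]

def adjacency(coords, r0):
    """Connected matrix: a[i][j] = 1 iff dist(i, j) < R0 (Step 2)."""
    n = len(coords)
    a = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(coords[i], coords[j]) < r0:
                a[i][j] = a[j][i] = 1
    return a

def chains(a):
    """Chains = connected components of the adjacency matrix (Steps 3-4)."""
    n, seen, comps = len(a), set(), []
    for s in range(n):
        if s in seen:
            continue
        comp, queue = [], deque([s])
        seen.add(s)
        while queue:
            u = queue.popleft()
            comp.append(u)
            for v in range(n):
                if a[u][v] and v not in seen:
                    seen.add(v)
                    queue.append(v)
        comps.append(comp)
    return comps

def diversity_factor(a):
    """Eq. (1): 1 - |longest chain| / |all nodes|."""
    return 1 - max(len(c) for c in chains(a)) / len(a)

coords = deploy(5, 5, sigma=0.2, seed=1)
print(diversity_factor(adjacency(coords, r0=0.5)))  # small R0: fragmented, larger factor
print(diversity_factor(adjacency(coords, r0=1.5)))  # large R0: well connected, factor near 0
```

Averages (2) and (3) follow directly by counting the ones in the full matrix and in the rows restricted to the longest component; a MATLAB version would replace the breadth-first search with the chain-matrix translation the paper describes.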
Applied algorithm
The algorithm created above is implemented using MATLAB m-files and executed with:
RESULTS OF THE LONGEST-CHAIN FINDING ALGORITHM
First: The average connections of the nodes in the longest chain are always greater than (or equal to, only when all nodes in the network are connected to the longest chain) the average connections of the network; see Table 1. This is self-evident, since in the network many nodes are single or have few connections, in contrast to a connected chain.
Second: The diversity of chains in a network is large when the maximum transmission radius is less than the grid step. A dramatic change occurs when the maximum transmission radius exceeds three fourths of the grid step, R0 > 0.75*DG, as shown in Fig. 2a, b, c and d.
Third: Histograms of the four cases mentioned above emphasize clearly what the graph plots of the connected nodes show. Many short chains with different numbers of nodes occur when the maximum transmission radius is small compared to the grid step, along with many single nodes, as shown in Fig. 3a and b. When the maximum transmission radius is greater than three fourths of the grid step, a clear long chain of nodes appears, as shown in Fig. 3c and d.
Fourth: The diversity factor (based on the ratio of the number of connected nodes in the longest chain to the total number of nodes in the area, Eq. (1)) and the average number of connections per node, for the longest chain and for all nodes, are estimated according to Eqs (1) and (2) for different maximum transmission radii. A bar chart created from the Excel table shows that a large dispersion of chains occurs when a small maximum transmission radius is used, while for a transmission radius larger than three fourths of the grid step the dispersion decreases rapidly and the longest chain may cover all nodes, especially when the maximum transmission radius approaches a full grid step. When the magnitude of the deviation about the grid crossings changes, the behavior of the connections remains the same across diversities, with only small differences. At large deviations, more nodes on the edges of the area may be deviated far from the edge and left unconnected from their neighbors, as shown in the first series in Fig. 4 for a deviation of two units; although the maximum transmission radius is large, the deviation still affects the connections.
Any longer transmission radius will cause more power dissipation and shorter battery and network life, along with more and more average connections in the longest chain, so the average number of connections in the longest chain is also a measure of the quality of service (QoS) of the network. The average number of connections per connected node in the longest chain was recorded and drawn as an Excel bar chart. Figure 5 shows that when the deployment deviation equals one unit, the results are intermediate compared with smaller or larger deviations. Connections are also greater when the node deviations are smaller and the maximum transmission radius is at three fourths of the grid step; that is, local factors act between nodes. For larger maximum transmission radii this effect disappears, and adjacent nodes at different grid crossings have the dominant effect, as shown in Fig. 6.
Fifth: For dense node deployments, a smaller maximum transmission radius is needed. Since the connections between nodes depend on the mean distance between them, the expected maximum transmission radius is reduced as the square root of a uniform increase in deployment density, especially when the area under consideration is square. So, to keep comparisons between radii of deployments of different densities fair, the radius is scaled as

R_max = R_0 / sqrt(Dn)   (7)

where R_0 is the maximum transmission radius for deployment density Dn = 1 and R_max is the equivalent maximum transmission radius for larger densities. The basic R_max and the related radii for different densities obeying relation (7) are shown in Table 1.
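The density scaling of relation (7) can be illustrated in a few lines. In the Python snippet below, the densities 1, 4 and 9 nodes per unit square are those used in the paper, while the base radius R0 = 1.0 is an illustrative assumption:

```python
# Hedged sketch of Eq. (7): to keep comparisons between deployments fair, the
# maximum transmission radius is shrunk by the square root of the node density.
import math

def scaled_radius(r0, density):
    """Equivalent maximum transmission radius R_max = R0 / sqrt(Dn)."""
    return r0 / math.sqrt(density)

for dn in (1, 4, 9):
    print(dn, scaled_radius(1.0, dn))  # 1 -> 1.0, 4 -> 0.5, 9 -> ~0.333
```

This matches the observation that connections dominate once the radius exceeds the grid spacing divided by the square root of the node density.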
A network with a density of 4 nodes per unit square and different maximum transmission radii is plotted in Fig. 7.
A network with a density of 9 nodes per unit square and different maximum transmission radii is plotted in Fig. 8.
The longest connected chain at both densities shows the same behavior as at the lower node density: connections dominate for a maximum transmission radius greater than the grid spacing divided by the square root of the node density, as in Eq. (7). The scaling of node densities is thus confirmed for the same deployment area.
No significant anomalies are found, and the minor differences in the graphs due to deviations may also be scaled. The numbers of connected nodes scale accordingly, while the average connections per node remain about the same, as shown in Figs 9 and 10.
CONCLUSIONS AND FUTURE WORKS
Wireless sensor networks are still a hot topic, especially mobile networks, because they are part of more sophisticated subjects dealing with Internet of Things (IoT), fog and cloud systems and technologies: multiple objects must be connected with each other, and these relations must involve geometry, routing and smart connections. Studying the theoretical and geometric properties of wireless sensor networks will therefore always support manufacturing and applying WSNs in the real world. Estimating the longest chain in a connected network, as a simple geometric configuration, is one of many ways within this trend to study the behavior and efficiency of a network and hence to control and optimize it. Knowledge of node coordinates under random deployment is a partly solved problem and needs further research in the future. The project simulates the network as a simple disk model, while almost all WSNs use clustering configurations, so it is important to upgrade the model to deal with clustering by using different maximum transmission radii for different nodes. Different network models, possibly including elliptical or multi-hop transmission patterns, can be used in future work.
No earlier efforts were found on finding the longest connected chain of nodes as a target method to compare against; this line of research needs consideration, especially for data analysis and pattern recognition, and this paper tries to give the chain properties of nodes in a network more attention.
The random distribution method used here deviates the X and Y coordinates from a uniform grid with a predefined step length; using different deployment methods could be helpful for more realistic representations.
Finding solutions to connect the separated chains, by inserting a few mobile nodes or by increasing the maximum transmission range of specific nearest nodes between disconnected chains, may be the topic of future projects.
Understanding opioid use within a Southwestern American Indian Reservation Community: A qualitative study
Abstract Purpose Morbidity and mortality due to nonprescription use of opioids has been well documented following the significant increase in the availability of prescription opioids in the early 2000s. The aim of this paper is to explore community beliefs about correlates of opioid risk, protective factors, and behavioral functions of opioid misuse among American Indian youth and young adults living on or near a reservation. Methods Qualitative in‐depth interviews were conducted with N = 18 youth and young adults who were enrolled in a parent research trial focused on American Indian youth suicide prevention. Participants were eligible if they endorsed the use of opioids themselves or by close friends or family members at any point during their trial participation. Findings Major themes discussed include: (1) description of opioid use and those who use opioids; (2) acquisition; (3) initiation; (4) motivation to continue using; (5) consequences; and (6) possibilities for intervention. Family played an important role in the initiation of use, but was also highlighted as an important factor in treatment and recovery. A need for upstream prevention methods, including increased employment and after‐school activities, was described. Conclusions The insights gained through this work could help to inform treatment and prevention programs in the community. This work is timely due to the pressing urgency of the opioid epidemic nationally, and community capacity to address opioid use locally.
INTRODUCTION
Morbidity and mortality due to nonprescription use of opioids have been well documented following the significant increase in the availability of prescription opioids in the early 2000s. 1 While alcohol and drug use among AI/ANs have been studied extensively, less is known about opioid use among AI/AN populations. In less than 20 years, AI/AN communities have seen a 5-fold increase in deaths by opioid and nonopioid drug overdose, with AI/AN youth estimated to use nonprescription opioids at a rate twice as high as their white peers. 1,3 Addressing these disparities and preventing prescription drug and opioid misuse among AI/AN communities was named as a priority area in the National Tribal Behavioral Health Agenda. 4 Although many studies draw conclusions that generalize about AI/AN peoples and opioid use, these conclusions inherently neglect important historical, social, political, and cultural differences between the nearly 600 federally recognized tribes and the several hundred other state-recognized and nonrecognized tribes in the United States.
Studies that examine differences in opioid and other substance use among AI/AN populations have found significant heterogeneity by regions and tribe. One such study, utilizing national opioid overdose and death data from the Centers for Disease Control and Prevention, found regional differences in opioid-related mortality rates among AI/ANs; Arizona and New Mexico had lower opioid-related mortality rates compared with higher rates in Nevada, Utah, and the Great Lakes region. 5 Another study comparing differences in drug use and drug use disorder between a Northern Plains and a Southwest reservation community found significant differences between tribes, between age groups within tribes, and between genders. For instance, lifetime drug use among the sample of Northern Plains women was significantly higher than that of Southwest women. Drug use in the past year was also higher among the Northern Plains sample than the Southwest sample; both of which were higher than comparisons to a national sample. 6 These studies demonstrate the need for region-, state-, and tribe-specific data. Such data would inform prevention efforts that are uniquely tailored to individual tribes and urban AI/AN communities.
There are numerous innovative examples from Indian Country of efforts to reduce morbidity and mortality due to opioid misuse. In January 2015, the Indian Health Service (IHS) was the first federal agency to mandate pain management training for all its prescribing physicians. 7 IHS has also invested heavily in disseminating best practices in opioid overdose prevention through its Substance Abuse and Suicide Prevention initiative (SASP; formerly MSPI), including programs that incorporate ceremony and cultural practices. There are also many individual tribe- and community-driven interventions that incorporate best practice recommendations and locally relevant knowledge and solutions. 7 For example, Lummi Nation's Healing Spirit Clinic Model incorporates medically assisted treatment, and was the first in Indian Country to use suboxone, while also offering cultural therapists who teach about tribal traditional and spiritual practices to aid in the healing process. 8 To inform local opioid prevention and treatment needs, qualitative methodology can be particularly useful for examining the drivers of issues in a local context, as well as for providing contextual background to epidemiological studies. Qualitative data can help shape how questions are asked on quantitative surveys, as well as provide further explanation of quantitative findings. By taking a qualitative approach, researchers show their respect for the values and beliefs of the community with which they are working. 9,10 This approach can also yield particularly informative findings around the context of a problem beyond, or in the absence of, rigorous epidemiologic data, which is sparse for small, individual communities. Qualitative methods have been used previously to understand prescription drug use in AI/AN communities.
One study conducted with a rural Midwestern reservation community utilized the Indigenous practice of talking circles (a tradition that in practice is similar to a focus group) with youth, adults, and elders and found that exposure to drug use was initiated by family members or friends and boredom was cited as a motivator for prescription drug use. 11 A similar study, focusing specifically on OxyContin, found that increases in misuse were leading to growing problems for individuals, families, and the tribe, located on a rural Midwestern reservation. 12 This research gives context to epidemiologic studies showing the rapid rise in opioid misuse. It elevates the voices of community members and elaborates on the reasons for use, in addition to beginning to illustrate what interventions are needed.
The Southwestern reservation community where this study was conducted has a history of innovative studies to examine, prevent, and treat substance use generally, and had a current and specific interest in understanding opioid use and misuse. Further, opioid use carries the highest mortality risk among commonly used substances; opioid use disorder increases the likelihood of suicide by 13.5 times, much higher than that of alcohol use. 13 The current study builds on past qualitative research by examining the context of opioid misuse among American Indian youth and young adults from a Southwestern Tribe.
This study was embedded in an ongoing research study to understand the effectiveness of 2 brief interventions for suicide prevention among youth. Details of the parent study have been described elsewhere (see Ref. 15). The aim of this paper is to explore community beliefs about correlates of opioid risk, protective factors, and behavioral functions of opioid misuse among American Indian youth and young adults living on or near a Southwestern reservation through in-depth interviews. The insights gained through this work could help to inform treatment and prevention programs in the community.
Community-based participatory research approach
This project builds on a long-standing 40+ year partnership between the tribe and the university focused on public health research and programming. In addition to this long-standing partnership, community-based participatory research principles of focusing on community-driven inquiry, colearning among all research partners, and balancing the needs of research and action to mutually benefit science and community are central to this study and all research conducted jointly by these partners. 14 This project was born out of a desire by tribal partners to better understand opioid use and misuse locally. The project was directed and overseen by the community research director, an enrolled member of the tribe where this study took place; all interviews were conducted by a tribal member. As part of this project, training and capacity-building opportunities were provided to the local research team. Tribal research team members and partners provided input in all aspects of the research process, including identifying the research question, designing the interview guide, reviewing analysis results, and providing input on this manuscript. The local Tribal Council and Health Advisory Board approved this study and this manuscript. This study was also approved by the University Institutional Review Board.
Sample
This research was undertaken as part of the Southwest Hub for American Indian Youth Suicide Prevention (SW Hub), 1 of 3 National Institute of Mental Health U19 Hub grants to address youth suicide across American Indian/Alaska Native communities (3U19MH113136-02S2).
One component of the SW Hub aims to test, through a sequential, multirandomized trial, 2 brief culturally adapted interventions for youth who have experienced recent suicide ideation, binge substance use with suicide ideation, or a suicide attempt. 15 Study participants were recruited from the ongoing SW Hub study. Eligibility criteria for the parent trial include: identify as American Indian, reside on or near the reservation, be between 10 and 24 years old, and report suicide ideation, a suicide attempt, or binge substance use (with ideation) within the 30 days prior to enrollment. Participants were eligible for this qualitative substudy if at any point during their 6-month enrollment in the SW Hub, they endorsed opioid use. Given the lack of understanding of the prevalence of opioid use among this population, eligibility criteria were also opened to individuals who indicated they had close friends or family members who used opioids, and who could also provide insight on the issue.

The interview guide included probes per question about specific influences on opioids across different socioecological levels. 16 We asked about: (1) personal experiences with opioids (ie, first use, where they obtained opioids, how they were exposed, types of opioids found in the community, and relational aspects of using opioids); (2) opioid use in the community (ie, which group of people used opioids the most, why certain groups used opioids, and consequences of opioid use in the community); (3) prevention and treatment of opioids in the community; and (4) topics perceived to be important but not yet mentioned during the interview. The interview guide has been added as a Supplementary File.
Data management and quality assurance
Audio recordings were uploaded to a protected server and labeled with unique participant IDs after each interview. Once confirmation was received that interviews were securely uploaded, original audio recordings were deleted from the recording device. A secure transcription service (rev.com) was used to create verbatim transcripts of all recordings, which were uploaded to the same server. After approximately every 5 interviews, the supporting faculty would meet with the interviewer to discuss the interviews, the interview guide, review notes of past interviews, and iteratively adjust the interview guide as needed.
Data analysis
Data analysis was conducted in multiple phases using ATLAS.ti for a thorough and dynamic examination of the data. Broad topics from the interview guide were first identified as deductive codes. This draft framework was used as a guide by the first and second authors to review a selected transcript. The purpose of this review was to understand the utility of the deductive codes, to understand in more depth the types and range of topics addressed by participants, and to identify any needed inductive codes. This process was repeated with an additional 3 transcripts to further refine the codebook and ensure consistency of the coders. This process allowed for iterative adjustments to the codebook; the majority of codes in the final codebook were deductive in nature. After a codebook was established, all interviews were coded by the same 2 authors. Data were queried and analyzed separately for those who used opioids themselves (personal use group) and those who knew others who used (use by others group).
RESULTS
Characteristics of the study sample are presented in Table 1. Half the participants were under 15, two-thirds were females, and just over half reported recent opioid use. Major themes discussed include: (1) description of opioid use and those who use opioids; (2) acquisition; (3) initiation; (4) motivation to continue using; (5) consequences; and (6) possibilities for intervention. Among the personal use group, most participants did not express a preferred type of opioid; "oxys," "methadone," "hydro," and "morphine" were the only specific preferred opioids mentioned. Notably, there was no description of opioid overdoses by participants in either the personal use or use by others groups. Some participants described overdosing, but all episodes were specified as related to a nonopioid substance, primarily over-the-counter drugs, including Unisom, Mucinex, and Benadryl.
However, overdose was discussed as a common and the most serious consequence pertaining to opioid use.
Acquisition
In the use by others group, participants had little knowledge of how their friends or family members acquired opioids. In the personal use group, the most mentioned method of acquiring opioids was from family members, including siblings, parents, grandparents, and extended relatives with a prescription. Acquisition included both knowing distribution of opioids to family members - "My mom gives it to me when I'm extremely weak or when I'm extremely in pain." (15 yr old male) - and unknowing, sometimes referred to as "stealing" or "taking": "So they would prescribe him [my brother] different types of painkillers throughout the years and everything. So I would steal those from him and just play with them." (23 yr old female)
All participants were also asked directly about what they know about selling opioids. One older participant in the personal use group brought up the ability to make a profit from selling them, but few other participants knew much about selling (or buying) opioids.
Initiation
In the personal use group, participants' age at initiation ranged from 11 to 18 years. In the use by others group, participants thought people started using from middle school age to their early 20s. Three participants from the personal use group had initially been prescribed opioids for a medical purpose. Two of these participants said it led to misuse, and they were among the very few to describe selling opioids. In the personal use group, several participants described first learning about opioids from family members and/or introducing other family members to opioids. Family members, in turn, were influenced by the negative behavior that relatives were modeling: "Probably just having the children having to see their parents actually take a medicine, they're going to see them pills. They're going to see them actually put them in their mouth and everything and. . . because they're too young to be knowing about those things, to be doing those things too." (17 yr old female)
Consequences
Consequences of opioid use were asked about generally, to avoid stigmatizing those who use or making participants uncomfortable discussing sensitive topics. Data were queried separately for the personal use and use by others groups; there were no discernible differences between these groups. The consequences of opioid use that were identified fell into 3 categories: physical, school, and family. Although legal consequences were specifically asked about, only 1 participant identified any during the interviews. Death, overdose, and more general "health problems" were commonly mentioned physical consequences. Participants were split regarding how opioid use would affect their schooling; some felt that use did not affect their school performance, whereas others felt that opioid use kept them (or their friends) from concentrating or doing schoolwork at all.
"I think it would affect school because you never know, they might be good at basketball or sports or something, and if they get introduced to this drug and they get addicted to it, it can make them lose interest in what they're doing. And it can make them lose interest in school (13 year old female)" Another consequence of opioid use was "families breaking apart" (15 year old male). Participants described opioid use leading to family problems because when using, they were not around to care for children or were not there to offer support for siblings, cousins, or other relatives.
Motivation
Among the personal use group, the most commonly mentioned reason for using opioids was to "get high." For example, 1 participant stated: "Just anytime of the day, like I would just do it [take opioids] when I want to get high. It wasn't really because I was depressed. It was just because I wanted to get high." (22 yr old female)
Other motivations included: depression, anxiety, stress, and because one can pass an employer's drug test while using opioids.
DISCUSSION
This study is one of only a few to use qualitative methods to better understand opioid misuse among reservation-based American Indian youth. Distinguishing between the personal use and use by others groups proved challenging, limiting our ability to understand distinctions between these groups. Data collection was stopped sooner than anticipated due to the COVID-19 pandemic; we were not able to purposively sample within these groups to better understand any differences that may exist. It is possible that we did not reach full saturation among all of our themes, although we began to see redundancy in responses indicating that we were at least approaching saturation. However, there are several notable strengths, including obtaining the perspectives of youth on opioids and utilizing rigorous qualitative methodology to understand the unique context of 1 Southwest reservation-based tribal community, an approach that could be replicated by other tribal nations and urban AI/AN communities interested in understanding how opioids are impacting their youth.
CONCLUSIONS
The insights gained through this work could help to inform treatment and prevention programs in this community. American Indians/Alaska Natives are not often well represented in larger studies of opioid use, and even when they are, these studies do not capture the heterogeneity of different tribal groups and geographic regions. One way to capture this heterogeneity is to conduct in-depth qualitative work in different AI/AN communities, as was done here in 1 tribal community. This study contributes context-specific, local qualitative information, as well as future research directions and an approach that can inform work in this and other Indigenous communities.
ACKNOWLEDGMENTS
We would like to respectfully acknowledge the study participants who gave their knowledge, insights, and time to this project. We would like to thank all study team members who contributed to the success of this project. Lastly, we would like to thank the tribal leaders and other community stakeholders, without whom this project could not have been completed.
FUNDING
This work was supported by the National Institute of Mental Health under grant 3U19MH113136-02S2.

Diagnosing post-transplant diabetes – The need for capillary glucose monitoring
Sir,
We read with interest the study by Kumar et al. [1] on the preoperative risk factors for the development of post-transplant diabetes mellitus (PTDM). The authors have reported an incidence of 24% of PTDM in a follow-up period of 12 months. In addition to the known risk factors of prediabetes, family history of diabetes, and age, the authors have looked at markers of insulin sensitivity and insulin resistance which have predictably correlated with the development of PTDM. However, the study does not specify the methods, that is, 75 g oral glucose tolerance test, fasting or post-meal glucose or capillary glucose, that were used to diagnose PTDM in the follow-up period. Fasting glucose levels carry low sensitivity in the diagnosis of PTDM because a majority of them develop post-meal hyperglycemia with peak glucose levels after lunch and after dinner. This pattern of steroid aggravated hyperglycemia has a clinical bearing on the tools used to diagnose PTDM. [2] They have also not mentioned the details of patients who developed transient hyperglycemia in the immediate postoperative period which has a bearing on the development of PTDM. [3] We carried out a similar prospective observational cohort study in nondiabetic patients undergoing renal transplant at our center and currently have data up to 6 months post-transplant. The mean age of our patients was 36 years with 80% males and an average dialysis vintage of 9 months. We used capillary glucose measurements (CBG) post breakfast, post lunch, and post dinner which patients checked thrice a week in the first month and once a week in the next 5 months as part of clinical care monitoring. More than two values of post-meal CBG ≥200 were diagnosed as PTDM. In addition, 75 g glucose tolerance testing was done monthly. The incidence of PTDM in our study was 37% with a cumulative incidence of 20% at the end of the first month, 30% at the third month, and 37% at the sixth month of follow-up after transplant. 
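As a minimal illustration (not part of the letter itself), the diagnostic rule described above, more than two post-meal capillary glucose values of 200 or above, could be sketched in Python. The function name and parameters are hypothetical, chosen only to make the rule explicit:

```python
def has_ptdm(post_meal_cbg_readings, threshold=200, min_count=3):
    """Classify PTDM per the letter's rule: more than two post-meal
    capillary blood glucose (CBG) values >= 200 are taken as diagnostic.
    `min_count=3` encodes "more than two"; both names are illustrative."""
    high = sum(1 for value in post_meal_cbg_readings if value >= threshold)
    return high >= min_count
```

Note that this sketch does not model the study's actual monitoring schedule (thrice weekly in the first month, then weekly) or the monthly confirmatory 75 g glucose tolerance testing.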
A diagnosis of PTDM in two-thirds of our patients was made by home-monitored CBG. Post-lunch and post-dinner CBG values clearly separated out early in the follow up and were significantly higher in the PTDM group compared with the non-PTDM group (P = 0.04) [ Figure 1a and b]. The presence of post-transplant transient hyperglycemia was found to have significant association with future development of PTDM. Our observation did not demonstrate an association between pre-transplant body mass index, HCV-positive status, hypomagnesemia, cumulative dose of immunosuppressive medication, and rejection episodes with the occurrence of PTDM.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.

Bilateral Medullary Nephrocalcinosis Secondary to Vitamin D Toxicity: A 14-year Follow-up Report

Vitamin D toxicity (VDT) is being increasingly reported from our country because of overzealous correction of vitamin D deficiency with mega-doses of vitamin D by general healthcare providers. [2][3][4] Nephrocalcinosis (NC) is a well-known but rare complication of VDT, which is usually irreversible.

We previously reported an infant with acute VDT and NC. [5] Briefly, the child presented to a local physician at the age of 8 months with symptoms of hypocalcemia and was prescribed one dose of intramuscular vitamin D containing cholecalciferol 600,000 units and oral calcium carbonate 500 mg daily. However, the parents continued the intramuscular injection for 3 consecutive weeks, and a week following the final injection, the child was brought to us with features of hypercalcemia. The child was diagnosed to have parathyroid hormone (PTH)-independent hypercalcemia due to VDT (serum total calcium 11.5 mg/dL, 25-hydroxyvitamin D >100 ng/mL, and undetectable intact PTH) and was treated with intravenous normal saline and subcutaneous calcitonin injections, with improvement in the hypercalcemic state. During the initial presentation, he was also found to have hypercalciuria (24-h urine calcium >4 mg/kg and an elevated urine calcium:creatinine ratio of 0.83) and bilateral medullary NC. In this report, we present the long-term follow-up data of this child.

The child was followed up annually with serum total calcium value, urine calcium:creatinine ratio, and ultrasonography of bilateral kidneys. On serial follow-up visits, serum calcium level, urine calcium:creatinine ratio, and estimated glomerular filtration rate (eGFR) remained normal, and there was no reduction in NC. On a recent follow-up visit (14 years after the initial presentation), the child was growing normally with good scholastic performance. His total serum calcium value, urine calcium:creatinine ratio, and eGFR were normal at 9.4 mg/dL, 0.015, and 138 mL/min/1.73 m², respectively. Ultrasonography and computerized tomography of the kidneys revealed persistent medullary NC with minimal reduction in size [Figure 1].
NC is defined as generalized deposition of calcium salts (calcium oxalate or phosphate) in the kidney, predominantly in the interstitium. It usually involves the renal medulla (>97% of cases), and less commonly the cortex. [6] The common causes of medullary NC include primary hyperparathyroidism (PHPT), distal renal tubular acidosis (dRTA), primary hyperoxaluria, Bartter's syndrome, hereditary hypophosphatemic rickets with hypercalciuria, Dent's disease, idiopathic hypercalciuria, medullary sponge kidney, Williams-Beuren syndrome, VDT, and treatment with active vitamin D for hereditary hypophosphatemic rickets.
In a pediatric series of 40 patients from North India with NC (median age at presentation 72 months), dRTA (50%), idiopathic hypercalciuria (7.5%), primary hyperoxaluria (7.5%), and VDT (5%) were reported as the most common causes. [7] At a median follow-up of 35 months, no patient showed resolution of NC while GFR declined significantly from 82 to 73 mL/min/1.73 m². In another study from the Netherlands, NC in preterm neonates was found to be associated with long-term adverse effects on glomerular and tubular function. [8] In a series of 41 patients from Italy (median age at presentation 15 months), renal tubulopathies (41%) and VDT (10%) were reported as the most common causes of NC. [9] The authors also reported follow-up data (median 53 months) for 26 patients with NC. The degree of NC worsened in 16 (62%), remained stable in 8 (31%), and improved in 2 (8%) patients. The two children with improvement in NC on follow-up had VDT and an unknown cause, respectively. The authors also concluded that progression of NC was not related to glomerular function, because GFR remained stable in 14

Marina crystal minerals (MCM) activate human dendritic cells to induce CD4+ and CD8+ T cell responses in vitro
Marina crystal minerals (MCM) are a mixture that contains crystallized minerals along with trace elements extracted from seawater. It is a nutritional supplement that is capable of enhancing natural killer (NK) cell activity and increasing T and B cell proliferation in humans post ingestion. However, its effect on dendritic cells (DCs), the cells that bridge innate and adaptive immunity, is not yet known. In this study, we examine the stimulatory effects of MCM on DCs’ maturation and function in vitro. Human monocyte–derived DCs were treated with MCM at two different concentrations (10 and 20 µg/mL) for 24 h. Results showed that MCM treatment activated DCs in a dose-dependent fashion. It caused the upregulation of costimulatory molecules CD80, CD86, and HLA-DR, and prompted the production of DC cytokines, including interleukin (IL)-6, IL-10, tumor necrosis factor (TNF)-α, and IL-1β, and chemokines (monocyte chemotactic protein-1 (MCP-1)) and interferon-gamma-inducible protein-10 (IP-10). In addition, activated DCs primed CD4+ T cells to secrete significant amounts of interferon gamma (IFN-γ), and they also stimulated CD8+ T cells to express higher amounts of CD107a. These results indicate that MCM is a potentially powerful adjuvant, from natural materials, that activates human DCs in vitro and therefore may suggest its possible use in immune-based therapies against cancer and viral infections.
Introduction
Marina crystal minerals (MCM) are a natural mixture containing crystallized minerals along with trace elements from the Oharai Sea in Japan. It is processed by condensing and reducing pure seawater to a powder through a sterilizing sequence of heating, freezing, and drying. The product contains 27 minerals and trace elements; no harmful trace elements have been detected in it. 1 Our earlier studies showed the immunomodulatory effect of MCM to activate natural killer (NK) cells and increase T and B cell proliferation in humans post ingestion. 1 However, its effect on dendritic cells (DCs) has not yet been discovered.
DCs, the professional antigen-presenting cells (APCs), activate adaptive immunity through their capacity to capture, process, and then present antigens to T cells. 2,3 DCs are usually localized in nonlymphoid tissues under healthy conditions and reside in an immature state. Immature DCs are highly phagocytic for peptide uptake and processing, and they respond to signals via different receptors including scavenger receptors, nucleotide oligomerization domain (NOD)-like receptors, and toll-like receptors (TLRs). Immature DCs also respond to inflammatory mediators, chemokines, and cytokines. 4 The conversion of immature DCs to mature DCs is associated with both phenotypic and functional changes. Maturation is characterized by the increased expression of costimulatory molecules, redistribution of HLA-DR molecules, and increased presentation of antigen and secretion of cytokines, such as interleukin (IL)-12, IL-15, and type I interferons (IFNs I). 5,6 Mature DCs prime Th cell responses, 7 induce the differentiation of CD8+ T cells into effector cytotoxic T lymphocyte (CTL), and have the ability to activate NK cells' cytotoxicity. 8,9 This study examines the ability of MCM to activate DCs with respect to phenotypic changes, including the type of cytokines secreted, and to examine the role of MCM-stimulated DCs on the activation of CD4+ T cells and CD8+ T cells, as well as the underlying mechanisms of its effect. Our results indicate that MCM is a potentially powerful adjuvant, made from natural materials, that is capable of activating DCs and therefore may be beneficial for provoking an effective immunological response against cancer and infections.
MCM
MCM is a mixture that contains crystallized minerals along with trace elements and other active ingredients, extracted from seawater and originally separated from sodium chloride. MCM was prepared for use by dissolving in complete medium (CM), resulting in a range of concentrations (10 and 20 μg/mL). We received MCM for this study from the Foundation for Basic Research Institute of Oncology, Japan (MCM was provided by Kaiyo Kagaku Co., Ltd, 3-11-5 Minami Azabu, Minato-ku, Tokyo 106-0047, Japan).
Isolation and culture of human monocytederived dendritic cells
We prepared monocyte-derived dendritic cells (moDCs) for this study as described previously. 10 In summary, peripheral blood mononuclear cells (PBMCs) from heparinized blood, obtained from normal, healthy donors (approved by the Institutional Review Board (IRB), Charles Drew University), were separated using Ficoll-Hypaque density gradient centrifugation. We then allowed the cells to attach to culture plates for 2 h. Any cells not adhering to plates were removed. Monocytes adhering to plates were then cultured for 6 days in a humidified atmosphere containing 5% CO2 at 37°C in RPMI 1640 supplemented with 10% FBS, 1 mM glutamine, 100 U/mL penicillin, 100 μg/mL streptomycin, human granulocyte-macrophage colony stimulating factor (GM-CSF) at 50 ng/mL (PeproTech, Rocky Hill, NJ, USA), and 10 ng/mL recombinant human IL-4 (PeproTech). Every 2 days, we discarded half of the culture medium and replaced it with fresh medium. After 6 days, DCs were collected, and we measured the purity of the obtained DCs to be >95%. We then pulsed DCs with either 1 μg/mL E. coli LPS, used as a positive control, or MCM (10 and 20 μg/mL) for 24 h.
DC phenotyping
We determined the expression of cell surface markers by flow cytometry, performed using a FACSCalibur (Becton-Dickinson, San Jose, CA, USA) and analyzed using FlowJo software (Tree Star, Ashland, OR, USA). In summary, we analyzed gated CD11c+ HLA-DR+ DCs for the expression of CD80, CD86, and HLA-DR. We obtained the appropriate antibodies from BD Pharmingen (San Diego, CA, USA). Viability of DCs was tested by trypan blue; more than 95% of cells were live.
DC-CD4+ T cells
We purified allogenic CD4+ T cells by negative selection using a magnetic bead-based kit from Stem Cell Technologies (Vancouver, BC, Canada). We then cultured the allogenic CD4+ T cells with DCs that had been stimulated with MCM (10 and 20 μg/mL) for 24 h as described above. We co-cultured the DC-CD4+ T cells for a total of 5 days in a U-bottom 96-well plate at a DC:CD4+ T cell ratio of 1:5 (2 × 10⁴:1 × 10⁵). After 5 days, the supernatants were collected and kept at −70°C. We subsequently detected the cytokines IFN-γ, IL-10, and TNF-α using specific ELISA kits (BD Pharmingen), and IL-22 using a kit from R&D Systems (Minneapolis, MN, USA). Viability of cells was tested by trypan blue; more than 95% of cells were live.
DC + T cells
We enriched allogenic T cells by negative selection by employing a magnetic bead-based kit, which we acquired from Stem Cell Technologies. We cultured MCM-stimulated DCs with T cells in 96-well plates, with the ratio of DCs to T cells of 1:5. After 5 days, supernatants were collected and cells were stained for the surface markers CD4, CD8, CD107a, and CD25. Viability of cells was tested by trypan blue; more than 95% cells were live.
Statistics
In this study, we repeated all of the experiments with samples from 5-7 individual subjects. Differences between the mean values of two experimental groups were tested with the two-tailed t-test for paired samples, with the level of significance set at P < 0.05. Statistical analysis for bar graphs was performed using GraphPad Prism software.
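As an illustration only, the paired comparison described above can be sketched in Python with scipy (the study itself used GraphPad Prism); the cytokine values below are made-up placeholders, not data from this study.

```python
from scipy import stats

# Hypothetical paired measurements (pg/mL) from the same 6 donors:
# untreated moDCs vs. MCM-treated moDCs. Values are illustrative only.
untreated = [120, 95, 110, 130, 105, 98]
mcm_treated = [310, 260, 295, 340, 280, 255]

# Two-tailed t-test for paired samples, as used in the study.
t_stat, p_value = stats.ttest_rel(untreated, mcm_treated)

print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
if p_value < 0.05:  # significance threshold used in the study
    print("Difference is significant at P < 0.05")
```

The paired (rather than independent) test matches the design, in which each donor's treated and untreated cells come from the same sample.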
MCM activates DCs and upregulates costimulatory molecules
MoDCs (1 × 10^6/mL) were cultured with MCM for 24 h. Flow cytometry was used to measure the expression and density of maturation markers. Figure 1 shows the mean fluorescent intensity (MFI) of CD80, CD86, and HLA-DR in DCs. MCM treatment caused a dose-dependent increase in the expression of the DC surface costimulatory and maturation markers CD80, CD86, and HLA-DR. This increase was detected at a concentration of 10 µg/mL and increased further at 20 μg/mL. In comparison with untreated DCs, treatment with MCM significantly upregulated the expression of CD80, CD86, and HLA-DR.
MCM induces cytokine production by moDCs
MCM at concentrations of 10 and 20 μg/mL appears to be non-toxic to normal cells (human moDCs). Data in Figure 2(a) show that MCM can activate DCs to induce the production of cytokines such as IL-6, IL-10, TNF-α, and IL-1β. The levels of cytokine secretion after treatment with MCM were compared with those of moDCs alone. IL-6 production was increased by about threefold at the concentrations of 10 and 20 μg/mL. MCM at the low concentration of 10 µg/mL also induced a significant increase in IL-10 production; the level of activation did not increase further at 20 μg/mL. In addition, MCM activated TNF-α production in a dose-dependent manner.
Furthermore, IL-1β production was significantly increased by sixfold at the concentrations of 10 and 20 µg/mL.
MCM induces chemokine secretion by DCs
Chemotactic proteins MCP-1 and IP-10 are known to help DCs migrate to the lymph nodes. The levels of MCP-1 and IP-10 were examined after treatment of DCs with MCM. Results in Figure 2(b) show that treatment with MCM caused a twofold increase in the level of IP-10 at the low concentration (10 μg/mL), which increased further at the higher dose of MCM (20 μg/mL). MCM also activated MCP-1 in a dose-dependent manner.
MCM enhances IL-10 secretion on LPS stimulation
Data in Figure 2(c) compare DCs stimulated with LPS alone and with LPS + MCM. A significantly higher secretion of IL-10 was observed with LPS + MCM; however, there was no change in the levels of TNF-α, IL-6, IP-10, and MCP-1 (data not shown).
MCM-stimulated DCs activate CD4+ and CD8+ T cells, upregulating CD25 and CD107a
Data in Figure 3(b) show that MCM-treated DCs activate CD4+/CD8+ T cells and upregulate CD25 expression, a marker of activation. These CD25+ cells also display upregulated expression of CD107a, a marker of degranulation expressed in cytotoxic T cells.
Discussion
DC maturation and activation is an essential step for DCs to mount effective immune responses against infections and for cancer immunotherapy. In this study, MCM, a crystallized mixture of 27 minerals and trace elements from seawater, was shown to be a potent activator of human DC maturation and function. MCM-activated DCs induced CD4+ and CD8+ T cell responses in vitro, as manifested by CD4+ T cell production of the cytokines IFN-γ and IL-22 and higher CD107a expression in both types of T cells. In addition, MCM-activated DCs showed a dose-dependent upregulation of costimulatory and maturation markers on the DC surface, including CD80, CD86, and HLA-DR. MCM at 10 μg/mL markedly increased DCs' cytokine secretion (IL-6, IL-10, TNF-α, and IL-1β), chemokine secretion (MCP-1 and IP-10), and IL-10 secretion by LPS-activated DCs. While the mechanisms underlying MCM's activation of DCs are not fully understood, they might reflect the ability of MCM to bind to receptors on the DC surface, subsequently triggering the signaling pathways involved in DC activation. Alternatively, cell activation pathways could be engaged through possible binding of MCM to intracellular receptors such as the NLRP3 inflammasome, given the observed secretion of IL-1β. Any of MCM's several minerals and trace elements might contribute to the activation of DCs. Given the range of literature linking zinc, magnesium, copper, and iron with immunological responses, we tentatively favor the presence of these elements in MCM as primary contributors to MCM's induction of phenotypic and functional changes in DCs.
Previously, MCM has been shown to exert an apoptotic effect against human LNCaP prostate cancer cells in vitro 11 and to activate NK cells in humans post ingestion. 1 In our study, MCM enhanced the cytotoxic effect of DC-CD8+ T cells and stimulated DCs to prime CD4+ T cells and secrete significant amounts of IFN-γ, all of which are known to exert antitumor activity. 12 Taken together, these results suggest that MCM exerts its anti-cancer activity by mounting different arms of the immune system.
This study indicates that MCM is a potent natural dietary adjuvant that effectively activates human DCs and suggests MCM's potential use against cancer and viral infections via DC-based vaccine strategies in multiple clinical trials.
Eight-year Review of a Clubfoot Treatment Program in Pakistan With Assessment of Outcomes Using the Ponseti Technique: A Retrospective Study of 988 Patients (1,458 Clubfeet) Aged 0 to 5 Years at Enrollment
Objective: To conduct an 8-year retrospective review of a clubfoot treatment program using the Ponseti technique with close monitoring of outcomes. Methods: Between October 2011 and August 2019, 988 children with 1,458 idiopathic clubfeet were enrolled, with ages ranging from newborn up to 5 years. Ponseti treatment was used, and progress was monitored by comparing mean Pirani scores at enrollment (P1), initiation of bracing (P2), and end of treatment (P3) or most recent visit (P4) for children under treatment. Results: A statistically significant reduction in Pirani scores was noted (P < 0.001) for all feet. For 320 feet completing treatment (213 children), the mean Pirani score was reduced from P1: 3.8 (±1.1) to P2: 1.1 (±0.6) and finally to P3: 0.6 (±0.3). Four hundred sixteen children are currently undergoing bracing. Higher education of the head of household and male sex of the child were markedly associated with improved outcomes in foot correction status. Correction was obtained with a mean of 5.8 casts per foot, the tenotomy rate was 68.2%, and the mean duration of bracing in children completing treatment was 3.6 years (±0.9). No surgical correction, other than tenotomy, was required. Relapse was noted in 12.1% of the total enrolled feet, and 32.0% of children were lost to follow-up from the entire cohort of 988 children. Conclusion: Clubfoot treatment requires long-term follow-up. A dedicated clubfoot program is effective in maintaining continuity of care by encouraging adherence to treatment.
common congenital birth defect. 7 Clubfoot may be associated with neuromuscular disease, syndromes, and chromosomal or congenital abnormalities; however, isolated clubfoot deformity in an otherwise normal child is labeled idiopathic and has four components: equinus, varus, adductus, and cavus.
Treatment of idiopathic clubfoot using the Ponseti method is considered "gold standard" across the world. 2,[8][9][10] Dr Ignacio Ponseti, an orthopaedic surgeon, pioneered this nonsurgical method as a simple, inexpensive, easily adopted, and applied outpatient technique for correction of clubfoot. 11 Popularity of this method is steadily increasing, and its judicious use has yielded excellent results, with surgical correction rarely required. 12 Orthopaedic societies in more than 45 countries across the globe have endorsed Ponseti treatment, 8 and 113 of the 193 United Nations member countries report some evidence of Ponseti activity. 13 It is important to note that low-and middle-income countries (LMICs) are host to more than 90% of clubfoot cases. A study in 2015 documented that only 15% of clubfoot cases accessed Ponseti treatment 14 ; therefore, a majority of untreated or undertreated cases lead to neglected clubfoot and consequent life-long disabilities contributing to the health burden in developing countries. In Pakistan, approximately 7,500 children are born with clubfoot every year; this approximation is based on calculations using World Bank figures for population and a crude birth rate. 14 The poorly structured health delivery system translates into a high percentage of these children being left untreated; this has notable social and economic consequences in a predominantly agricultural, developing country where physical labor is the main form of employment for the majority.
Stand-alone clubfoot programs managed by nonsurgeon clinical officers have been implemented; however, lack of supplies and reliable referral service for surgery in complicated cases were identified as main issues. 15,16 Better outcomes were observed when clubfoot was treated in dedicated Ponseti clinics embedded within orthopaedic services, with lower recurrence rates and prompt on-site referrals when required 17 ; this is the model that we adopted for our clubfoot program, headquartered in the country's largest city. Healthcare access is a major barrier in low-resource settings 15,18 ; thus, it is vital to use the opportunity and provide the required comprehensive care when children with abnormalities present to a tertiary care facility. The College of Physicians and Surgeons Pakistan recognizes 56 orthopaedic residency training programs based at public and private tertiary hospitals across the country 19 ; this provides the opportunity to establish clubfoot clinics within facility-based orthopaedic services with the added advantage of training upcoming orthopaedic surgeons in this method. Establishment of integrated programs contributes to strengthening of health systems as compared with vertical programs that tend to work in silos. 15,16 This study aims to determine the effectiveness of the Ponseti technique in children with idiopathic clubfoot enrolled in the program by comparing the mean Pirani scores of feet over the course of treatment. It also intends to examine the association of factors with foot correction status along with presenting the clinical outcomes of those enrolled children who have completed the entire course of treatment. In addition, the integral program processes for the purpose of monitoring and evaluation have been reviewed and presented.
Program Description
The "Pehla Qadam-PQ (First Step) Program" was initiated in August 2011 with a preliminary preparation phase followed by commencement of a clubfoot treatment clinic in October 2011, at The Indus Hospital (TIH), Karachi, the flagship campus of the Indus Hospital & Health Network (IHHN). When the program was in its nascent phase, only children up to the age of 1 year were enrolled for treatment. With passage of time, our referrals increased by word of mouth and older children also presented to our clinic with untreated clubfoot. To save them from future disability, we gradually increased our age limit for inclusion to 5 years.
IHHN provides all treatment free of cost; for children in the PQ program, braces are also provided and a transport allowance is offered to families to encourage adherence during the long course of clubfoot treatment. Overall management of clubfoot patients using the Ponseti method includes case selection after accurate diagnosis, sequential cast applications, Achilles tenotomy (if required), maintenance bracing, periodic assessment to prevent relapse, managing associated complications, and monitoring of outcomes. Pirani scoring is used to determine the severity of clubfoot and to aid in clinical decision making. 20 Ultimate objective of the treatment is to help the child attain good clinical outcomes allowing him to lead a normal life. After treatment completion, patients are followed up annually for 5 years.
Personnel and Training
The program team includes a coordinator responsible for developing program tools, training health workers, analyzing data, and managing the program. Health workers are responsible for counseling families, obtaining informed consent from caregivers, enrolling children, maintaining photographic record, data collection, and entry. The clinical team comprises orthopaedic surgeons and residents from the Department of Orthopaedics at TIH who have been trained in the Ponseti method; they screen children at the initial visit, conduct Pirani scoring of their feet at each subsequent visit, apply casts, conduct tenotomy, and assign braces; a plaster technician assists the doctors during plaster application.
Program Tools
Brochures include information about clubfoot and the PQ program, plaster care and removal, and handling of braces. Standardized proformas are used to document patient details and outcomes.
Clinic Routine

The PQ outpatient clinic is conducted twice a week, where new patients are registered and returning patients are assessed for progress.
Data Documentation
Demographic data are collected from each patient at the time of enrollment. At each clinic visit, photographs are obtained and Pirani scores recorded, which range from 0 to 6 in half-point intervals, where 0 is a normal foot and 6 is the most severe deformity. 5,15,21 Data are entered on the hospital database and in the International Clubfoot Registry. Monthly reports are generated and analyzed.
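The Pirani scale described above (0 to 6 in half-point intervals) lends itself to a simple data-entry check. The following is an illustrative sketch only, not part of the program's actual registry tooling; the function name is hypothetical.

```python
def is_valid_pirani(score: float) -> bool:
    """Check a Pirani score against the scale used in the program:
    0 (normal foot) to 6 (most severe deformity), in half-point steps."""
    in_range = 0.0 <= score <= 6.0
    half_point = (score * 2) == int(score * 2)  # only .0 or .5 values allowed
    return in_range and half_point

# Illustrative checks: 4.5 is a valid half-point score; 3.25 and 6.5 are not.
assert is_valid_pirani(4.5)
assert not is_valid_pirani(3.25)
assert not is_valid_pirani(6.5)
```

Validating scores at entry time would guard the registry against transcription errors before the data reach the monthly reports.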
Program Processes
Casting

Management consists of an initial "treatment phase" involving gentle manipulation of the child's foot at each visit to stretch the ligaments and tendons, followed by plaster cast application in the new stretched position; this routine is carried out weekly for 6 to 8 weeks. Plaster of Paris casts are used in children younger than 3 years at the time of presentation, whereas synthetic casts are applied in older children, for whom the Ponseti technique has been modified by increasing the number of casts required to achieve the desired correction (Pirani score 0 to 1) and by keeping each cast on for 2 weeks. 20,22 Families are counseled to remove the Plaster of Paris cast 2 hours before the next appointment. 23 This allows clinic time to be used more efficiently. Synthetic casts are removed in the clinic by the plaster technician.
Tenotomy
Based on the surgeon's assessment, percutaneous tenotomy to release the Achilles tendon may be conducted as an outpatient procedure under local anesthesia. After tenotomy, the affected foot is placed in a cast for 3 weeks in the fully corrected position before initiating bracing.
Bracing
In the "maintenance phase," children are provided with braces, worn 23 hours a day for 2 to 3 months, followed by night-time bracing until the age of 4 years. Children are followed up in the clinic after 1 month of initiation of braces and then every 3 months for 4 years. More frequent visits may be scheduled for severe or poorly compliant cases. The decision to discontinue bracing is based on clinical judgment; in some cases, the duration may be extended by a few months. 21,24 Children older than 1 year at enrollment are advised bracing for a duration of at least 2 years. 8,16 Daily exercises are taught to the parents to help reduce foot rigidity.
Relapse
Appearance of cavus, adductus, varus, or equinus, depicted by worsening Pirani scores, is considered a relapse. 4,25 In such cases, manipulation and casting are reinitiated, followed by tenotomy if needed. 20

Loss to Follow-up

Families are counseled regularly to encourage treatment adherence. If a patient misses an appointment, the health worker calls the family on the same day to reschedule the appointment. Failure to visit the clinic after three consecutive rescheduled appointments is labeled "lost to follow-up," which also includes families refusing to continue treatment at any stage. Reasons for refusal are documented, and if geographical distance is the determining barrier, efforts are made to direct the family to a treatment facility closer to their residence. Noncontactable families are also considered "lost to follow-up."
Completion of Treatment
All children undergoing the specified duration of treatment resulting in notable improvement in clinical outcomes are considered to have completed their treatment. These specific clinical parameters include cosmetic appearance, flexibility, pain, position of the foot, squatting, walking without a limp, running, wearing normal shoes, and carrying out daily activities with ease.
Study Description
The duration of this retrospective study is from October 2011 to August 2019.
Patient Selection

Inclusion Criteria
(1) All children completing Ponseti treatment.
(2) Children completing casting and progressing to braces (under treatment).
Exclusion Criteria
Children older than 5 years at enrollment, those with syndromic associations, those still in the casting phase at the study cutoff date, or those enrolled in our program during the bracing phase of treatment are excluded from this study (Figure 1).
Ethical Approval
Approval was granted by the Institutional Review Board of TIH.
Informed Consent
Informed consent was obtained from parents of all children included in this study.
Outcome Assessment
For the purpose of this study, Pirani scores at three points during treatment were noted to assess the efficacy of the Ponseti technique: (P1) at enrollment, (P2) at initiation of bracing, and (P3) at the end of treatment, or (P4) at the most recent clinic visit for children under treatment. Patients completing full treatment were assessed to see if their duration of treatment was within the acceptable range described by Ponseti, that is, "timely correction" or "delayed correction," and factors associated with foot correction status were examined. Posttreatment clinical outcomes were also recorded.
Data Analysis
Data were analyzed using Stata version 14. Normality was assessed for all the quantitative variables. Mean (SD) and median (interquartile range) were reported for quantitative variables as appropriate. All the categorical variables were presented using frequencies and percentages. Age categories were studied with mean cast numbers using one-way analysis of variance. Repeated measures analysis of variance was used to assess the effectiveness of the Ponseti method by comparing the means of Pirani scores at the three specified time points. Foot correction status of children with completed treatment was categorized as "timely correction" and "delayed correction." The chi square test was used to study the association of different factors with correction status. All tests were two-sided, and P values <0.05 were considered significant.
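The chi-square test of association described above (the study used Stata version 14) can be sketched in Python as an illustration; the 2×2 contingency table below is hypothetical, not the study's counts.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: education of head of household (literate/illiterate)
# vs. foot correction status (timely/delayed). Counts are illustrative only.
table = [
    [120, 15],  # literate:   timely, delayed
    [57, 21],   # illiterate: timely, delayed
]

# chi2_contingency returns the statistic, P value, degrees of freedom,
# and the table of expected counts under independence.
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p_value:.4f}")
# Two-sided tests with P < 0.05 were considered significant in the study.
```

For a 2×2 table the test has one degree of freedom, and scipy applies Yates' continuity correction by default in that case.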
Results
During the study period, 2,787 children were brought to the PQ clinic with suspected clubfoot. On initial screening (Figure 1)
Demographic Data
Most of the children were male (n = 760, 76.9%) and the age at enrollment ranged from 1 day to 54 months (median = 3.5 months, interquartile range = 1.48 to 9.88 months); 44.5% of the enrolled children belonged to the 0 to 3 months age category (Table 1). Approximately half of the enrolled children had unilateral foot involvement (n = 518) with the right foot affected in 295 cases and left in 223 cases. A family history of clubfoot was elicited for 222 children, with first-degree and second-degree or third-degree relatives affected in 32.0% and 68.0%, respectively.
Pirani Scores
Of the 1,458 feet, more than half presented with a Pirani score (P1) between 3.5 and 5.0 (Figure 2). Mean Pirani scores for feet completing treatment and those progressing to braces are summarized in Table 2. Among the feet completing treatment, a significant reduction (P < 0.001) in mean Pirani scores was noted from P1: 3.8 (±1.1), to P2: 1.1 (±0.6) and finally to P3: 0.6 (±0.3). Furthermore, a statistically significant reduction in mean Pirani scores was also noted for feet still under treatment.
Casting, Tenotomies, and Bracing
Correction was obtained with a mean of 5.8 casts per foot. No surgical correction, other than tenotomy, was required in our cohort. The mean duration of bracing for 213 children who completed treatment was 3.6 years (±0.9). Table 3 summarizes the mean treatment duration of different age categories in the treated cohort; it can be noted that the mean duration of treatment is comparatively less (3.3, ±1.0) for older children. No association was seen between the number of casts and the brace duration (P = 0.31).
Relapse
Relapse during the bracing phase was noted in 177 feet (12.1%), and in these cases, casting was reinitiated. Of these, multiple relapses were seen in 27 feet (twice in 26 feet and thrice in one foot).
Loss to Follow-up
Initially, 528 children (53.4%) were assessed to fall in the category of "lost to follow-up," of which 212 children were retrieved through active contact and counseling by PQ health workers, whereas final loss to follow-up was documented in 32.0% of children (n = 316). Of the 316 children lost to follow-up, most (63.9%) occurred during the first year of bracing, followed by 23.7%, 10.1%, and 2.2% in the second, third, and fourth or fifth years, respectively (Figure 3). The commonest reason given was relocation of the family to another city (n = 111, 35.1%), followed by refusal to continue treatment (n = 51, 16.1%) and domestic issues (n = 8, 2.5%). The remaining (n = 146, 46.2%) were not contactable, so the reason remains unknown. A mortality rate of 2.1% (n = 21) was documented, with reasons reported by parents in 20 children: respiratory causes (n = 7), febrile convulsions (n = 6), acute diarrhea (n = 4), and drowning, head injury, and hydrocephalus (1 each).
Factors Affecting Correction Status in Children Completing Ponseti Treatment
Of 213 children completing treatment, 177 (83.1%) achieved timely correction, whereas correction was delayed in the remaining 36 (16.9%). "Education of the head of household" and "sex of the child" were found to be significantly associated with the foot correction status (Table 4).
Clinical Outcomes in Treated Children
All children who completed the Ponseti treatment demonstrated notable improvement in all the clinical parameters mentioned earlier.
Discussion
We present the largest, single-facility series of clubfoot patients systematically followed up in an LMIC setting where data were collected with strict compliance to program guidelines. This provides us with a unique opportunity to analyze and present findings from a resource-limited setting with rigorous standards of data compilation.
The age at initiation of casting is a key factor in the success of the treatment; although casting can be initiated as early as the age of 1 week, good results are generally also seen when started before 2 years. 2,18,26,27 However, the Ponseti method has also been used to manage older children, even those with neglected or rigid clubfoot, with good outcomes by increasing the number and duration of casts. 6,22,28,29 Ayana et al 20 demonstrated successful clubfoot correction in children from the age of 2 to 10 years, reporting a correlation between age and an increasing number of casts. We report similar findings from our cohort; although initial successful correction was achieved for all affected feet with a mean of 5.8 casts per foot, a markedly higher mean number of casts were required in children older than 1 year. This decreases the need for surgical correction, as reflected by the fact that none of our treated children required any surgeries, apart from tenotomy and has also been reported by others. 20,29 An upper age limit for using the Ponseti method remains to be determined. 5,6 Achilles tenotomy rates range widely from 37% to 97%. 4,15,[29][30][31] In our study, tenotomy was required in 68.2% feet; a small number of feet required redo tenotomies, mainly to address relapse. Failure to adequately divide the tendon may explain the need for redo tenotomy. Although complications have historically been reported, 4,31,32 none occurred in our considerably large patient population.
The real challenge remains ensuring adherence to treatment during the long duration of bracing, which allows the correction to be maintained. Clubfoot has a stubborn tendency to recur if bracing is inadequate, even after achieving perfect initial correction through casting with or without tenotomy. 11,24 Because correction of the deformity is visible to the parents, they tend to think that their battle is over. Noncompliance to braces remains the commonest cause of relapse, 6,24,33 with rates ranging from 13.7% to 28%. 25,34,35 Our relapse rate of 11.7% was attributable to either poor compliance or rigid feet. Effective counseling and guidance of parents, regular contact with the family to encourage timely follow-up, uninterrupted supply of treatment materials, and encouraging interaction between parent groups were critical factors in promoting adherence to treatment and early identification of relapse. 18 Among the age categories of children completing Ponseti treatment, the oldest age group had the shortest mean treatment duration, which could be explained by the fact that children enrolled at an older age required shorter brace duration, a fact supported by the literature. 8 Regular follow-up over a period of several years is difficult, especially in low-resource settings where health systems are weak. Therefore, many studies are unable to report long-term outcomes conclusively, 1,5,21,36 with reported loss to follow-up rates ranging between 0% and 32% in cohorts ranging from 17 to 307 patients. 15,28,37 In our series of almost a thousand children, we initially encountered a very high loss to follow-up rate of more than 50%; however, with stringent programmatic guidelines of following up patients through phone calls and subsequent counseling, we were able to retrieve 40.2% of them. This highlights the importance of active involvement of the program team with parents, which influenced their decision to continue treatment. 
However, 32.0% of the enrolled children were still lost to follow-up, with most of them dropping out of the program in the first year of bracing. Therefore, increased focus on parental engagement in the initial few months of treatment may help improve adherence. Nearly half of the children lost to follow-up were not contactable, possibly because of change in mobile numbers. Therefore, even a simple strategy such as documenting two contact numbers instead of one may help mitigate this issue; this change was made in our program strategy toward the latter half of the study period. Relocation during the treatment period was cited as the reason for discontinuation of treatment, which could be addressed by a coordinated referral system, allowing children to receive treatment as close to home as possible. In this context, we have already established PQ clinics at other IHHN sites across Pakistan and hope that this will have a positive effect on treatment adherence.
Timely correction was achieved in more than four-fifths of the children completing Ponseti treatment. This success can be attributed to the rigorous program processes being followed efficiently by the program team, offering support and prompting early intervention and guidance to the parents in case of default. Conversely, interruption in treatment was identified as the major cause for delayed correction. Education of fathers was found to be markedly associated with the foot correction status, showing a positive relationship between the number of children achieving timely correction and the literacy level of the head of household. Literate parents have a better understanding of the consequences of treatment failure, and their awareness leads to better compliance. In addition, male sex was also found to have a positive association with foot correction; however, this could simply be attributed to boys enrolled in the program outnumbering girls by threefold. It reflects the male sex prevalence of clubfoot globally, and we do not think that this represents a gender bias in seeking treatment. 1 The Ponseti method is described to have a success rate of 90% to 95% in both short and long terms 6,24,37,38 ; Morcuende et al 29 reported a 98% success rate in clubfoot correction, whereas in our series, successful initial correction was achieved in 100% of the enrolled feet.
Strengths of this study include its large sample size, low relapse rate, strategies to recover patients from those lost to follow-up, absence of need for surgical correction, and excellent clinical outcomes in all treated children.
Threats to internal validity in this study include interobserver and intraobserver reliability because of the subjective nature of Pirani scoring. 39,40 Because our program is donor-funded, it may not be generalizable or replicable in other resource-limited settings. In particular, good quality braces have been difficult to procure in many LMIC settings. Finally, attrition was notable in our study for reasons already described and needs to be addressed further.
Clubfoot treatment requires long-term follow-up to achieve good outcomes. A dedicated clubfoot program embedded within an orthopaedic service is effective in maintaining continuity of care and adherence to treatment.
Health Information System in Developing Countries: A Review on the Challenges and Causes of Success and Failure
Background: A review of the health information systems (HISs) of each country should not be limited only to data collected and reported normally by the service providers. In this regard, the first step for development in any national project is exploring the experiences of other countries worldwide, especially those with economic, political, cultural, and regional partnerships, and then using their resources and documents to have a broader attitude and a better profitability in planning the development strategy. This study was conducted to review the studies conducted on the causes of HIS success and failure, and the challenges faced by developing countries in using these systems. Methods: The present study was a narrative review; to meet the aim of the study, studies published in English in the PubMed, Web of Science, Science Direct, and Scopus databases between 2000 and 2020 were investigated. Primary keywords used to extract content in these databases were as follows: "health information system", "challenges", "success", "failure", "developing country", and "low and middle income country". Results: After searching the above-mentioned databases, 455 studies were retrieved; finally, 24 articles were used. The causes of success and failure of HISs were divided into 4 categories: human, organizational, financial, and technical factors. A total of 30 subfactors were extracted for these factors. Moreover, the findings indicated that many of the challenges that developing countries face in using HISs are influenced by the social, cultural, economic, geographical, and political conditions of these countries. The results showed that organizational and human factors play a critical role in the success or failure of HISs in developing countries. Conclusion: There is a demand to come up with flexible standards for designing and deploying HISs to address these complexities.
Several solutions can be found to address the obstacles and problems facing HISs in developing countries, including formulating strategic plans and policies necessary for the development of national HISs.
Introduction
Since past years, the World Health Organization (WHO) has declared the health information system (HIS) a key pillar in achieving the goal of "health for all." The WHO report in 2010 identified improved management as being related to an improved information system (1,2). In addition, from a technical point of view, a system can be defined as a set of components associated with the collection, maintenance, and processing of data and the publication of information to assist in decision-making and in monitoring the organization. Moreover, support for decision-making, coordination, and monitoring can help managers and employees in analyzing issues, uncovering complicated matters, and creating new products (3).

Corresponding author: Dr Mohammad Sattari, msattari@mng.mui.ac.ir
In the health care system, HISs can be defined as the collected components and structured processes for producing information that improves the decision-making process at all levels of the health system's management. The ultimate goal of the HIS is not only to obtain information but also to improve health system performance (4,5). Accordingly, in the last decade, there have been extensive activities and innovations for the development of HISs due to the development of new technologies. Many organizations in both the private and governmental sectors in developing and developed countries have resorted to HISs to meet the increasing demand for improving the efficiency and effectiveness of health services (6).
Most health service providers in developing countries provide their information systems with registration forms that include the patients' name and address and information related to their disease, which are completed weekly or monthly and then delivered without adequate feedback (7). In addition, the received data are often not useful for making decisions on management issues because they are incomplete, inaccurate, useless, and irrelevant to the priorities of the functions and task lists of health staff. In other words, information systems in these countries are data-based rather than performance-based. Therefore, providing HISs in these countries is considered a barrier in management, instead of being a tool (8).
A review of the HISs of a country should not be limited only to data collected and reported routinely by service providers. Rather, the performance of the HIS should be considered in terms of the quality dimensions of the produced data, and the data should be used to improve the performance and status of the health system. To achieve this goal, all components of the HIS, the causes of success and failure, and the challenges of these systems, programs, and strategies should be noted (9). In this regard, the first step for development in any national project is exploring the experiences of other countries worldwide, especially those with economic, political, cultural, and regional partnerships, and then using their resources and documents to gain a broader perspective and better profitability in planning the development strategy. Therefore, the aim of this study is to explore the experiences of developing countries regarding the causes of success and failure of HISs, and the challenges they face in the use of HISs.
Methods
The present study was a narrative review. This research was performed in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement (10).
Search Strategy
To investigate the problems and challenges of developing countries in the use of HISs and the causes of their success and failure in these countries, related studies published in the PubMed, Web of Science, Science Direct, and Scopus databases were investigated. The primary keywords used to extract content from these databases were as follows: "health information system," "challenges," "success," "failure," "developing country," and "low and middle income country." After extracting the primary keywords, synonym terms were extracted (Table 1). The following step was then performed to combine these words and obtain the main keywords for searching material related to HISs in developing countries. In addition, we utilized MeSH terms as well as truncation, wildcard, and proximity operators to strengthen the search.
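To illustrate how the primary keywords and their synonyms would typically be combined with Boolean operators, truncation, and phrase searching, a query along the following lines could be used. This is an illustrative reconstruction based on the keywords listed above, not the authors' exact search string:

```
("health information system*" OR "hospital information system*")
AND (challenge* OR barrier* OR success* OR failure*)
AND ("developing countr*" OR "low and middle income countr*")
```

Here the asterisk is the truncation operator (e.g., `countr*` matches "country" and "countries"), and quotation marks force phrase matching.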
Inclusion Criteria
The inclusion criteria were the articles published in English language from January 1, 2000 to June 1, 2020 that reported a problem or a challenge of HISs in developing countries as well as the causes of success and failure of HISs.
Exclusion Criteria
Studies without available full text were excluded from this study.
Study Selection
After removing duplicates, the researchers screened the titles and abstracts of the studies according to the inclusion/exclusion criteria. The same researchers then evaluated the full texts of potentially related articles (Fig. 1).
Results
After searching the above-mentioned databases, 455 studies were retrieved. Afterward, the duplicates were deleted and 338 articles remained. Based on their titles and abstracts, 131 articles were included, and after reading their full texts, 24 studies were approved (Fig. 1). Of these, 5 studies (21%) were conducted in 2015. Also, 12 studies (50%) were conducted in Asian countries and 6 studies (25%) in African countries. Notably, 6 studies (25%) were review articles, and 7 studies (29%) focused on the HIS (Table 2).
Challenges and Causes of Success and Failure of HISs in Developing Countries
Various studies have presented numerous challenges and different causes for the success and failure of HISs in developing countries. In this regard, Sidek and Martinez (24) in their study identified the lack of confidence of clinical professionals in the new system as the main challenge to a successful implementation of the electronic health record at a dental center. Paying attention to the principles of change management and the commitment of the center's senior managers to making changes were considered the key factors in the success of this system. Moreover, in a study by Maystson et al (34), the challenges and opportunities for a successful implementation of an information system for chronic mental health care were divided into 3 categories as follows: behavioral, organizational, and technical factors. In addition, poor data quality in decision-making was one of the main causes of the failure of the system. Accordingly, they also cited the high adaptation of the collected data to the needs of stakeholders as another important cause of the success of the HIS. Ebne Hosseini et al (31) in their study mentioned 3 factors, namely usefulness, system quality, and net profit, as essential for the success of the hospital information system. In a systematic review on the success factors of HISs, deRiel et al (27) identified the following 5 categories of success factors: functional, organizational, political, technical, and educational. Moreover, they emphasized that the important factors for long-term success in developing complex HISs are as follows: adjusting investment in hardware and software, user infrastructure, and data quality control.
In another study by ChePa et al (28), a total of 36 challenges were identified in implementing hospital information system projects, which fall into 4 fundamental categories, including human, technology and infrastructure, software constraints, and support. Moreover, 14 challenges were found to be related to human problems, such as workload, readiness, priority, skill, mentality, desire, outlook, feeling, initiative, perception, commitment, awareness, personal interest, and user dependence. Also, there were 6 challenges for support and technology, as well as 12 challenges for software constraint problems. In a study by Alipour et al (25), functional, ethical, and cultural factors were identified as success factors, and behavioral, organizational, and educational factors were recognized as weaknesses of the system. In their study, Afrizal et al (32) reported human factors, infrastructure, organizational support, and process as effective factors in adopting the primary health care information system. Furthermore, in a study conducted by Kpobi et al (29), the increase in staff workload and rework in data recording were noted as causes of the failure of the mental health information system, and the lack of infrastructure was found to be a challenge in implementing the system. In a review article, Mohamadali and Aziz (26) also stated the lack of system integration as the main obstacle to the implementation of hospital information systems in hospitals. In this regard, the quality of information and system quality were introduced as other factors affecting the success of the system. In another study, Abbas and Singh (33) considered the lack of financial support as the most important obstacle to the successful implementation of information technology in health care from the perspectives of customers and sellers.
The lack of general knowledge of information technology is another challenge that can disrupt the implementation of PACS systems as well, along with the lack of management and changes in management that create challenges during the implementation and training phases of the project. Additionally, in a study performed on the Health Management Information System, Asangansi (13) cited the lack of access to quality data, ambiguous data and system ownership, and instability and lack of trust in servers as challenges in the way of implementing this system. His study mentioned the lack of sufficient time to provide services to patients following the use of the hospital information system, while high safety and information security were among the factors in the success of this system. Khalifa (17) in his study aimed to reduce the obstacles to HISs and divided them into 6 categories as follows: human, financial, legal, organizational, technical, and professional. The high cost of setting up and maintaining the system, misconceptions and beliefs about the use of these systems, and the fear of losing information were among the important obstacles raised in this study. In a review study on the factors influencing the success and failure of hospital information systems, Sadoughi et al (18) identified several factors in developing and developed countries.
Correspondingly, these factors were then classified into 12 areas, namely functional, organizational, behavioral, cultural, managerial, technical, strategic, economic, educational, legal, ethical, and political factors. In their study, Aziz et al (19) examined the role of human factors, or the role of users of the hospital information system, in the success or failure of these systems. The results showed that physicians play the most important role in the acceptance of hospital information systems in hospitals. Also, Verbeke et al (23) in their study listed 14 failure factors as well as 15 success factors for hospital information systems in sub-Saharan Africa. Some of these failure factors were as follows: unclear goals, poor management, inadequate skills, and inadequate training.
Also, some of the success factors were transparent communication, real timing, and managing progressive changes.
Solutions to the Barriers and Challenges of HISs in Developing Countries
Seven studies (14,24,26-30) have provided some solutions to address the challenges raised in the previous sections, which are presented in Table 3.

Table 3. Solutions extracted from selected studies to overcome HISs challenges
- Continuous and appropriate training of health personnel; development of local and national technologies; establishment and operation of an organization coordinating activities related to the health information system at the national level; use of standards; process reengineering; developing a vision and action plan; strengthening organizations and human resources in terms of awareness, skills, and leadership; strengthening information and communication technology; applying data-related rules and standards; private sector financing and cooperation with the public sector (14)
- Expanded communication and horizontal collaboration amongst stakeholders; a fundamental shift from pushing healthcare technology to using EHRs for developing the working practices of clinical staff; managing the change process by agreeing on system aims and functionalities through wider consensual debate and shared supporting strategies realized through common commitment (28)
- Improving computer self-efficacy; monitoring adverse effects on patient safety; improving system design to be more user friendly; improving system response time; making the system flexible to alter and integrate with current systems; requiring adequate and effective IT strategies; looking for new methods to reduce workload and enhance information quality; enabling the system to create complete records, reports, and support for related activities; creating a secure intranet for information exchange (30)
- Continuing involvement of policymakers, medical staff, technicians, and managers in the development process; involving all stakeholders to ensure better compliance (33)
- Adopting hybrid rather than fully Internet-dependent systems; renewing what already exists through integration and rationalization (34)
- Proper situational and needs assessment; government and private sector support; training; proper change management processes; regular maintenance and evaluation (37)
- Building strong multisectoral partnerships; locally meaningful mental health indicators aligned to services; use of appropriate technology for data collection and management (38)
Classification of HISs Causes of Success and Failure
After reviewing the success and failure factors raised in various studies, the researchers compared the different classifications related to these factors, and the differences and similarities of these classifications were extracted. Finally, the causes of success and failure were divided into 4 categories: human, organizational, financial, and technical factors. Human factors were divided into 6 subfactors, organizational factors into 13, financial factors into 5, and technical factors into 6 (Table 4). If the factors listed in Table 4 are taken into account, the information system will be successful; if they are ignored, they cause the HIS to fail.
Discussion
This study attempted to answer these questions: What are the challenges and barriers to health information systems in developing countries? And what factors contribute to the success and failure of these systems in these countries? In this regard, the findings indicated that many of the challenges that developing countries face in using HISs are influenced by the social, cultural, economic, geographical, and political conditions of these countries. High population, low literacy levels, inequality in the access to and use of information technology services, and low information technology literacy in many developing countries are key factors affecting the use of information technology and information systems in the health care systems of these countries (35). Some of the challenges are more specific to developing countries, and developed countries have been less affected by them in applying HISs; these challenges include socioeconomic constraints (1,15,20-24,29,32); issues related to technical and operational infrastructure (17,22,23,27,30,32-36); incomplete business development and lack of adequate business space; lack of private sector participation in health information; non-standard equipment and facilities used in this area; lack of a clear vision in this area (22,23,27,30,37-40); and poor integration between similar HISs at the health facility and health management levels (11,12,18,20-22,24,39). In addition to these factors, the increase in workload due to the change of HISs from manual to electronic systems has also caused users to resist using these systems (40). Considering the experiences of developed countries, it seems that one of the root causes of the failure of HISs is the existence of cultural factors and the lack of cultural capabilities required to accept and use these systems.
The other main factor that the researcher emphasizes is the lack of information and communication technology (ICT) infrastructures in these countries. ICT plays a significant role in developing information systems in organizations. On the other hand, one of the major challenges of using these systems in these countries is organizational complexity in the health sector, which is a major challenge to the governmental and private sectors in implementing development strategy. There is a need to deal with these complications by applying flexible standards regarding the design and deployment of HISs (41).
According to the research findings, one of the most critical factors influencing the success or failure of HISs in developing countries was poor data quality and integrity. There are many reasons why data quality is low in different sources. In many developing countries, few experts have the technical and professional skills needed to work with and communicate information. Another reason is the lack of motivation among health workers. Also, the lack of feedback mechanisms is another reason for the poor quality of data (42,43). One of the reasons for the failure of information systems in developing countries was the irrelevance of the collected data to professional activities. Most of the data recorded and reported by the health sector employees of these countries have no practical application in helping managers make decisions and control delivery processes. On the other hand, the useful data collected often support goals related to disease control and rarely support managerial goals. Another weakness in these countries is poor consensus about information needs between providers and users at different levels of healthcare (44).
There can be different solutions to overcome obstacles and problems on the way to HISs in developing countries, including the formulation of strategic plans and policies required for the development of information systems in the national and private sectors, formulation of laws for reporting communicable diseases from governmental and private sectors, formulation of confidentiality and retrieval policies, developing knowledge management capacity and the use of health information capacity by users and service providers, creating networks for knowledge sharing, establishing coordination and integration between information gathering systems, strengthening disease surveillance and reporting systems, periodic assessment of information needs and protection of health information system components, and enhancing the use of information and communication technologies in the transmission, access, and sharing of health information (14,26,29,30,33,34,45). Many of the solutions proposed to remove barriers to the success of HISs in developing countries have focused on strengthening policy in this area. One of the most important limitations of the researchers in conducting this study was the scattering of success and failure factors of HISs across different studies, so that in some cases only one challenge or success and failure factor was presented. The researchers tried to select studies that presented a set of factors by expanding the range of keywords to increase the comprehensiveness of the search process.
Conclusion
Various sources have cited several reasons for the failure of information systems, including a lack of specialized and experienced human and financial resources, cultural factors, and the lack of infrastructure. There is a need to create flexible standards for designing and deploying HISs to address these complexities. Finally, there are several solutions to address the obstacles and problems facing health information systems in developing countries, including formulating strategic plans and policies necessary for the development of national HISs, developing laws on reporting diseases and collecting data from public and private sectors, determining information privacy and disclosure policies, formulating maintenance and retention policies, enhancing the capacity of knowledge management and health information capacity utilization by users and providers of health services, creating a network for knowledge exchange, creating harmony and integration between information collection systems, strengthening disease surveillance and reporting systems, conducting periodic information needs assessment and support in health information system components, strengthening the use of information and communication in the health sector, using appropriate ICT technologies, and strengthening the use of information technology in the transmission and sharing of health information. | 2022-10-11T16:02:35.536Z | 2022-06-30T00:00:00.000 | {
"year": 2022,
"sha1": "a7b6547c0004d70c2bc17d9e28f89b058cc25327",
"oa_license": "CCBYNCSA",
"oa_url": "http://mjiri.iums.ac.ir/files/site1/user_files_e9487e/mngmui2026-A-10-7422-1-fd93e33.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8b8a80d464dd32784b71ef700ff736d8edfdb700",
"s2fieldsofstudy": [
"Medicine",
"Economics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234746321 | pes2o/s2orc | v3-fos-license | Relationship between Epidermal Growth Factor Receptor Mutations and Adverse Events in Non-Small Cell Lung Cancer Patients treated with Afatinib
Epidermal growth factor receptor (EGFR)-tyrosine kinase inhibitors such as afatinib are used for non-small cell lung cancer (NSCLC) and show varying efficacy depending on EGFR gene mutation. Few studies have examined the relationship between EGFR gene mutations and the adverse events of afatinib in NSCLC. This retrospective study included 32 Japanese patients with NSCLC with EGFR gene mutation who were treated with afatinib between May 2014 and August 2018 at Kagawa University Hospital. Among the 32 Japanese patients with NSCLC treated with afatinib, 19 patients were positive for exon 19 deletion mutation (Del 19) and 13 patients were negative for Del 19. The incidence of grade ≥ 2 skin rash was slightly higher in patients positive for Del 19 (42.1% vs. 7.7%, P = 0.050). No significant differences were detected in other adverse events between the two patient groups. Patients positive for Del 19 also showed significantly longer median progression-free survival (288 vs. 84 days, P = 0.049). Our study indicates a higher incidence of skin rash associated with afatinib treatment in Japanese patients with NSCLC positive for Del 19 compared with patients without Del 19. The Del 19 positive patient group also showed better progression-free survival. J. Med. Invest. 68 : 125-128, February, 2021
Although the effects of afatinib as a first-line treatment may not be directly comparable with those of first-generation TKIs, a meta-analysis revealed that afatinib was more effective as a second-line treatment for advanced squamous cell carcinoma than erlotinib (4). However, afatinib treatment should be started at a low dose in Del 19 patients at risk of malnourishment, sarcopenia, and low body surface area because of the higher incidence of adverse events, such as skin rash, diarrhoea, and mucositis (5).
EGFR is widely expressed in normal skin tissues and cells, such as the epidermis, sebaceous glands, eccrine glands, and dendritic cells, and plays an important role in the normal development and physiology of the epidermis. The epidermis mainly originates from keratinocytes, and keratinocyte differentiation and migration to the skin surface are regulated by the EGFR signalling pathway. EGFR-TKIs have been associated with the development of numerous adverse events, such as skin rash, diarrhoea, and mucositis, through their inhibition of EGFR signal transduction (6).
Several studies have reported the relationships between the adverse events and therapeutic effects of anticancer drugs, such as skin rash due to erlotinib in NSCLC patients (7), hand-foot syndrome due to capecitabine in breast cancer patients (8), and hypertension and proteinuria due to bevacizumab in colorectal and breast cancer patients (9,10). However, few reports have been published on the relationship between EGFR gene mutations and the adverse events of EGFR-TKIs in NSCLC. We previously reported that Del 19 patients were less likely to develop skin rash than L858R patients, although no significant difference was found on comparison of each drug (11). In one study in Japan, the therapeutic effects of afatinib were more significant with skin rash of grade 2 or higher at 1 week of treatment, although no significant difference was found because it was a small-scale study (12). In this study, we retrospectively investigated the relationship between EGFR gene mutations and the incidence of adverse events in NSCLC patients receiving afatinib.
Data collection and assessment
We retrospectively analysed the electronic medical records of inpatients with NSCLC who started afatinib between May 2014 and August 2018. We excluded patients who received afatinib beyond the standard dose. We collected data on genetic mutation type, age, gender, body surface area (BSA), performance status (PS), liver and renal function before administration, number of EGFR-TKIs used as prior treatment, and maximum grade of adverse events (skin rash, diarrhoea, stomatitis, and liver dysfunction). Grades of adverse events were assessed according to the Common Terminology Criteria for Adverse Events version 4.0. Observation periods were up to 2 weeks from starting therapy, and patients were divided into those with adverse events of grade 0-1 and those with adverse events of grade ≥ 2.
Assessment of treatment effectiveness
The best antitumour responses during the treatment period were assessed with response evaluations (complete response : CR, partial response : PR, stable disease : SD, progressive disease : PD) by physicians according to the Response Evaluation Criteria in Solid Tumors (RECIST). Between-group PFS comparisons were also performed.
Statistical analysis
We used IBM ® SPSS ® Statistics 24.0 (IBM Corp., Armonk, NY, USA) for statistical analyses. Baseline patient characteristics were analysed using the Mann-Whitney U test and Fisher's exact test. We used Fisher's exact test for between-group adverse events and antitumour effect comparisons, with P < 0.05 indicating a significant difference. The Kaplan-Meier method was used in the PFS analysis and the log-rank test was employed for comparisons between groups, with P < 0.05 indicating a significant difference.
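As an illustration of the between-group comparison described above, a two-sided Fisher's exact test for a 2×2 table can be computed with only the Python standard library. This is a didactic sketch, not the SPSS procedure the study used; the counts in the example are those implied by the reported Results (8 of 19 Del 19 patients and 1 of 13 non-Del 19 patients with grade ≥ 2 skin rash, i.e., 42.1% vs. 7.7%).

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins whose probability does not exceed that of the observed table.
    """
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    denom = comb(n, c1)
    p_obs = comb(r1, a) * comb(r2, c1 - a) / denom
    p = 0.0
    # x ranges over all feasible counts for the top-left cell.
    for x in range(max(0, c1 - r2), min(r1, c1) + 1):
        p_x = comb(r1, x) * comb(r2, c1 - x) / denom
        if p_x <= p_obs * (1 + 1e-9):  # tolerance for floating-point ties
            p += p_x
    return min(p, 1.0)

# Counts implied by the Results: grade >= 2 skin rash in 8/19 Del 19
# patients vs. 1/13 non-Del 19 patients.
print(round(fisher_exact_2x2(8, 11, 1, 12), 3))  # -> 0.05
```

Reassuringly, the computed two-sided p-value for these counts rounds to 0.050, matching the value reported in the paper.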
Ethics statement
This study was approved by the Kagawa University Ethical Research Committee (2018-201) and was conducted in accordance with the Declaration of Helsinki and Ethical Guidelines for Medical and Health Research involving Human Subjects by the Ministry of Education, Culture, Sports, Science and Technology, and the Ministry of Health, Labour and Welfare of Japan. Japanese law does not require individual informed consent from participants in non-invasive observational trials such as this study. Therefore, we used our clinical research support centre website as an opt-out method rather than acquiring written or verbal informed consent from patients.
RESULTS
This study included 32 Japanese patients with NSCLC, including 19 patients positive for Del 19 and 13 patients negative for Del 19. All patients were treated with afatinib at the standard dose. The patient characteristics in the Del 19 positive and negative groups are listed in Table 1. No significant difference was found in age, gender, BSA, PS, liver and renal function before administration, and number of EGFR-TKIs used as prior treatment between groups.
In the comparison between the two groups for the incidence of grade ≥ 2 adverse events, skin rash was slightly higher in the Del 19 group than in the non-Del 19 group, but the difference was not significant (P = 0.050) (Table 2). No significant difference was observed for the other grade ≥ 2 adverse events.
Comparison of the objective response and disease control rates between the groups is shown in Table 3. The Del 19 group had a higher response rate (CR+PR) and disease control rate (CR+PR+SD), but the difference was not significant (P = 0.437 and 0.552, respectively). The comparison of PFS between the groups is shown in Figure 1. The median PFS for patients with and without Del 19 was 288 and 84 days, respectively. The median PFS was significantly longer in patients with Del 19 (P = 0.049).
DISCUSSION
This study compared the adverse events in NSCLC patients with and without Del 19 treated with afatinib. Our results suggested a higher incidence of skin rash due to afatinib treatment in patients with Del 19 compared with patients without Del 19. The most common EGFR gene mutations found in daily clinical practice in NSCLC patients are Del 19 and L858R. Of the 32 patients included in the study, 19 and 13 patients carried Del 19 and L858R, respectively. To examine the association of EGFR gene mutation with adverse events in response to afatinib, grade 2 or higher adverse events that prevented patients' daily activities were set as the cut-off value. As no difference was found in the patient background between the Del 19 positive and negative groups, the difference in the incidence of skin rash is unlikely to be due to a relative overdose, because there was no difference in BSA or renal and hepatic function between the two groups. However, no significant difference was noted in the other adverse events. This may be explained by the development of skin rash as early as 2 weeks into the observation period or by the involvement of different factors. The preventive use of moisturizers and skin care procedures for skin rash were performed similarly among all patients, as all patients received the same formulation and instructions at the first prescription. Patients with poor PS were also able to receive uniform skin care from nurses during hospitalization. Notably, the overall incidence of skin rash in our study was approximately 30% compared with 41.9% in the LUX-Lung 3 global phase III clinical study. However, this study cannot be directly compared with the LUX-Lung 3 study due to the observation period of only 2 weeks, age group, and racial differences. That is, of the 229 patients in the LUX-Lung 3 study, only 54 were Japanese. Therefore, different methods of skin care and racial differences may explain the inconsistent results.
Previous studies have reported relationships between gene mutations and therapeutic effects. One report showed that treatment outcomes, i.e., PFS and response rates, in response to afatinib were generally more satisfactory in Del 19 patients (3).
Our study was limited in that it was a small-scale single-site study. In addition, the reason underlying the relatively high incidence of diarrhoea regardless of gene mutation is not clear. To address this issue, a larger-scale study should be conducted.
CONCLUSIONS
Our study suggests that the incidence of skin rash and the therapeutic effects of afatinib in NSCLC patients vary according to gene mutations. This finding suggests that the risk of skin rash can be predicted before the start of treatment, which may be useful for patient management. Furthermore, skin care instructions will be more important for such patients because more significant therapeutic effects can be expected. Thus, our study should help reduce the number of patients who discontinue treatment because of adverse events.
CONFLICT OF INTEREST
None of the authors have any potential conflicts of interest associated with this research.
The Effects of WordWall Online Games (WOW) on English Language Vocabulary Learning Among Year 5 Pupils
In the effort to upgrade pupils' vocabulary learning experience, the potential of interactive educational games is increasingly explored for supplementary teaching and learning materials. While the eagerness to integrate mobile technology into English language education is noticeable, there is a lack of evidence on Malaysian English as a Second Language (ESL) learners' views of the feasibility of online games in vocabulary learning. This study aims to determine pupils' motivation levels towards the games and the degree of improvement in pupils' vocabulary performance. The quantitative data was analysed using descriptive and dependent t-test analysis. The cross-sectional survey was adapted from the ARCS-V model. The questionnaire was distributed to Year 5 pupils from a national primary school in Negeri Sembilan who are using the syllabus of the English Language Curriculum for Primary Schools (KSSR). The findings show a moderate level of Satisfaction, Attention, Relevance, Confidence and Volition. In addition, a paired-sample t-test indicates a significant improvement in the pupils' vocabulary scores after using WordWall (WOW) as supplementary vocabulary learning material. The effect size is also large by the conventions of the behavioural sciences. This study provides important insights as a guide for primary school English teachers in integrating online games as a learning tool for English language learning, especially in developing pupils' English vocabulary repertoire.
I. INTRODUCTION
The online game serves as a continuation of lessons in school, intending to strengthen and support memory with real-life applications. A feature in the online game records each pupil's vocabulary scores and achievements. This supports the Classroom-Based Assessment currently employed in the Malaysian education system, which takes into consideration pupils' progress in learning via various mediums to reduce exam-oriented assessments. Meanwhile, the declining proficiency in the English language among Malaysian pupils and their weak grasp of English vocabulary have been a matter of concern to Malaysian linguists, educationists and policymakers. Many undergraduates possess an insufficient vocabulary repertoire and fail to achieve the minimum word level (2,000 words) out of five word levels (Lateh, Shamsudin & Abdul Raof, 2018). The word levels consist of high-frequency vocabulary, which is also used to gauge pupils' capabilities in English communication. The undergraduates' lack of mastery in English writing skills is affected by their deficient vocabulary mastery and limited ability to use vocabulary effectively for communication purposes (Ashrafzadeh & Nimehchisalem, 2015).
In addition, there are cases of low vocabulary repertoire among Year 5 pupils (Wang & Yamat, 2019), which results in delays for pupils in comprehending reading materials efficiently (Sidek & Harun, 2015). The traditional methods of teaching vocabulary currently employed in most schools are less interesting, ineffective and less motivating (Mohamad, Sazali & Salleh, 2018), and often require pupils to memorize unfamiliar words with paired translations (Nejati et al., 2018). This leads to low vocabulary acquisition among Malaysian pupils. They only develop their listening and writing skills but not their thinking and questioning skills (Chen & Lee, 2018). Consequently, they become passive learners and are often quiet and uninterested in their learning. In line with the concerns highlighted, the World of Words (WOW) online game on the WordWall platform can assist and enrich pupils' experience in acquiring English language vocabulary through 200 vocabulary items. The game is accompanied by colourful pictures to help retain players' attention, associate words with images, strengthen the memory of spelling, and support the understanding of word meaning directly and indirectly. The design of WOW encourages the use of mobile and gamified learning in class as a teaching aid and serves as supplementary material to encourage fun and independent out-of-class learning.
On the ground that there is great anticipation for the use of mobile learning in education, it is important to take pupils' perceptions of the ease of use of online educational games into consideration to ensure the effectiveness and successful implementation of online games in education. After all, the game is designed for their language learning. Earlier studies indicate that pupils' acceptance, attitudes, and perceptions towards online games are influenced by several factors. Although Fagan (2019) states that enjoyment and performance expectations contribute to the difference in perceptions among pupils, thus far there are still limited studies investigating the relationship between these factors and pupils' perceptions of online games in vocabulary learning. Thus, the purpose of this study is to (1) investigate pupils' motivation levels towards using the WOW interactive online game in vocabulary learning and (2) examine the effects of the WOW interactive online game on pupils' vocabulary development.
II. LITERATURE REVIEW
Several studies have recorded teachers' concern over Malaysian pupils' poor recall of vocabulary. They are largely unable to remember the vocabulary they had learned from the previous day (Chong & Kee, 2019). For instance, if they learn four new words in an hour lesson for the day, generally they will only be able to recall one word during the lesson on the next day. There are also pupils who can recognize the written or oral form of the words but cannot determine the meaning of the words without guidance from the teacher. Afzal (2019) underscores the ineffective teaching practices adopted in vocabulary teaching and learning as one of the factors in Malaysian pupils' poor vocabulary mastery. Some teachers lack techno-literacy and traditional teaching methods are largely unappealing to learners to learn the subject.
The advancement of technological devices and the internet environment has uncovered a multitude of possibilities for pupils of all levels to learn, particularly for the new generation of technology-savvy learners. The emergence of Mobile-Assisted Language Learning (MALL) is an advancement of the language-learning experience through facilitation and improvisation using mobile devices (Gangaiamaran & Pasupathi, 2017; Klimova, 2019). In Malaysia and beyond, there is a growing demand to incorporate independent learning using online platforms in teaching and learning (Kessler, 2018; Nasir et al., 2018; Nasir, 2020). The concept of integrating mobile devices in 21st-century education has been of interest to many teachers seeking to improve English competency. The Malaysian government facilitates the greater adoption and diffusion of ICT through several initiatives to improve capacities in education, in line with the Malaysia Education Blueprint (MEB) 2013-2025. In the 10-year strategic plan of the MEB, the goal is to restructure education to be future-proof in line with Industrial Revolution 4.0 (Niko Sudibjo et al., 2019).
Nejati, Jahagiri and Salehi (2018) suggest that vocabulary is like the building blocks of a language. A limited number of them inevitably disrupts pupils' development in other language skills such as listening, speaking, reading and writing. Alqahtani (2015) states in his study that ESL learners majoring in English rely heavily on their vocabulary knowledge rather than their knowledge in other language areas such as grammar. Practitioners and researchers acknowledge that vocabulary acquisition is a challenging task, especially for English as a second language (ESL) learners. Afzal (2019) states that non-native speakers of English face problems relating to the meanings of new words, spelling, pronunciation, correct use of words, guessing meaning through context, and others. Without sufficient vocabulary, ESL learners are more likely to struggle to comprehend common reading materials and to understand and apply grammar rules when using the language (Nejati et al., 2018). Sidek and Ab. Rahim (2015) highlight that poor vocabulary contributes to difficulties in other language skills, as lexical knowledge is fundamental for effective communication.
Many scholars define online games in learning as the integration of game thinking and game mechanics (Takahashi, 2010; Bakhsh, 2016; Chapman & Rich, 2018). Mobile learning is known to optimize the potential of mobile devices as learning tools in language learning environments (Daud et al., 2015). The rapid global development of mobile technology speeds up the popularity and proliferation of mobile learning in Malaysia. Kung-Teck et al. (2020) reveal that the incorporation of mobile learning into heutagogical teaching instruction is highly advantageous, as it facilitates interactive, versatile, and multi-modal learning via Google Docs, e-Portfolio, Twitter, YouTube, Quizizz and MindMap. Thamilarasan and Ikram (2019) developed a mobile application tutor service called MyMUET as a supplementary learning aid for Malaysian learners sitting for MUET (Malaysian University English Test). Their survey findings note that learners agreed the mobile application is highly useful as supplementary material for out-of-class learning. To date, the application of mobile technology in education, or 'mobile learning', has been generating interest among academicians. A growing body of literature has recognized that mobile learning is becoming more popular and appreciated due to its adaptability in language learning, as the younger generation is generally more technology-savvy. Issham et al. (2016) deduce that one of the driving forces behind the utilization of mobile learning in education is the growing use of mobile devices by the current generation of learners. Perrin and Duggan (2015) also note that online games are favourably approved by young pupils, especially those aged 10 to 18 years old. Andreani and Ying (2019) found that an interactive online game succeeded in enhancing the language learning experience for low-proficiency EFL elementary learners.
There is an improvement in pupils' English vocabulary skills after the intervention of a mobile application in vocabulary learning. Similar studies were also conducted by Govindasamy et al. (2019) and Fazil and Said (2020). The online game integrates thinking and game mechanics to solve pupils' problems and engage them in interactive learning (Bakhsh, 2016; Chapman & Rich, 2018). The studies of Letchumanan et al. (2015) and Azli et al. (2018) indicate that pupils experience more gratification in learning English with mobile games. Game elements are the contributing factors in engaging, motivating and facilitating interest in ESL learners' vocabulary learning experience. Generally, there is no limit to subject areas in vocabulary learning. Pupils must be engaged in task-based activities whereby they are able to apply the vocabulary in context to learn and retain new words effectively. With functional game elements, pupils are more motivated, autonomous, and prone to develop problem-solving skills, as well as being intrinsically motivated. Azli et al. (2018) reinforce that online games facilitate the learning experience and that the use of online games in class is very beneficial for pupils.
Wordwall is the most suitable game platform for vocabulary practice. It provides a wide selection of game formats that are beneficial and appealing to the target audience; in this context, the primary school pupils. In contrast, it is crucial to select a game that is exciting while effectively meeting the learning goals as there are games with learning advantages but with little fun factor (Jantke & Hume 2015). Some games are lacking in educational purposes thus are not employable to the learning process with learning goals. A mindful selection of materials for mobile learning is necessary for the efficient incorporation of learning theory into mobile learning that can blend education and entertainment harmoniously and consequently, spur pupils' interest and boost motivation to learn.
Motivation acts as an essential factor that contributes to language learning efficiency (Tanaka, 2017; Dornyei, 2007). Online learning often requires learners to be intrinsically motivated, as the online environment is dependent on pupils' self-regulation, curiosity and interests. Other factors, such as family socioeconomic status, may also matter (Shereen & Tang. In contrast, some studies discovered that the acceptance of online games is evident among learners in Malaysia regardless of age and capabilities. Online games are also widely received by teachers, who utilize them due to their benefits in teaching and learning (Hasin & Nasir, 2021). Since the proliferation of mobile technology in learning, there has been a tremendous buzz around using online games to promote learning through their multimedia capabilities. Tertiary learners report a positive perception of and experience with mobile learning practice and its application to deliver courses in higher education (Karim et
III. METHODOLOGY
The research design for this study is quantitative, focusing on pupils' perceptions, with an experimental component. The sample consists of 121 Year 5 pupils from a national primary school in Negeri Sembilan who are using the current syllabus for primary school, the Language Curriculum for Malaysian National Primary Schools (KSSR). They were chosen through convenience sampling, as they were easily accessible to the researcher and generally possessed a similar level of English proficiency, which is low intermediate. They were selected from the three income groups, B40, M40 and T20. The term B40 represents the Bottom 40% of the country's population, who earn RM3,000; M40 is the Middle 40%, whose median household income is at least RM6,625, while T20 is the Top 20%, whose median household income is at least RM13,148. A questionnaire adapted from the ARCS-V model (Keller) was used as the instrument. It was distributed face-to-face to all the participants, who answered it within the time allocated. The questionnaire was distributed to the participants after they completed the six units of the online vocabulary games, known as WOW. A pilot test was carried out among a different group of Year 5 pupils with similar language proficiency. The researcher assessed four pupils involved in the pilot test to gauge pupils' comprehension of the format, content and terminology used in the questionnaire. The researchers identified and revised the items before distributing the questionnaire during its actual administration. The pilot study was also conducted to identify the duration needed for the pupils to complete the questionnaire. The Cronbach's alpha values for all the items were higher than 0.70, demonstrating an appropriate level of reliability; thus, all 30 items were included in the questionnaire.
The design of WOW caters to the need for follow-up activities after learning English vocabulary at school. It serves as supplementary material or a tool in pupils' vocabulary learning. Pupils accessed WOW after school hours. The words incorporated into WOW were categorized into themes according to the units. The version of WOW designed for this study introduces 200 vocabulary items to players. The selected vocabulary is adapted from the Curriculum and Assessment Standard Document (for Primary School) (DSKP) and the textbook for Year 5 pupils. The DSKP is a reliable source of reference for all national schools in Malaysia; thus, the words presented are must-master vocabulary for all pupils in Year 5. Descriptive analysis is used to analyse and concisely present large quantities of quantitative data. A dependent t-test is applied to the pre-tests and post-tests to calculate the difference in pupils' performance before and after the intervention.
IV. FINDINGS
Out of 121 participants, only 60 participated in the study, which yields a 49.59% return rate. The majority of the participants are Malay, at 75% (n=45). The second largest group is the Orang Asal or Indigenous people, which makes up 18.3% (n=11) of the study population, while 6.6% (n=4) are Indian. In terms of socioeconomic status, a significant 63.3% (n=38) of the participants are from the B40 group, 30% (n=18) are M40, and only 6.6% (n=4) are from the T20 group. The B40 group here refers to the group with a median household income of less than RM4,360. The final variable is the participants' level of familiarity with mobile devices. The survey data show that 38.3% of the participants (n=23) are highly familiar with mobile devices, 43.3% (n=26) are in the medium-familiarity group, and the low-familiarity group consists of 18.3% of the participants (n=11). The participants' level of familiarity with mobile devices is measured by the frequency of using mobile devices, their aptness in handling mobile devices without help, and the regularity of using mobile devices for learning and playing games. Based on the categories proposed by Jamil et al. (2019), the majority of the constructs were in the Good category, with a range from 3.5 to 4.49: Satisfaction = 3.64, Attention = 3.54 and Relevance = 3.51. The overall reliability of all the scales, on standardized Cronbach's alpha, was 0.775 (n=30), which suggests good reliability of the items, as acknowledged by Chang and Chen (2015) and Huang and Hew (2016).
Additionally, a series of pre- and post-tests were conducted on 40 participants to assess whether there were any improvements in pupils' vocabulary learning. The assumptions and conditions for the test were assessed. Participants were tested on the spelling of the new words introduced, their association with pictures, as well as applying the vocabulary in sentences. The topics for each unit are as follows: Unit 1: Family Day, Unit 2: Saving, Sharing and Spending, Unit 3: Superheroes, Unit 6: Self Protection, Unit 7: The King's Decision and Unit 11: Natural Disasters. A paired (correlated) sample t-test indicated that post-test scores for Units 1 to 7 and Unit 11 were, on average, significantly higher than the corresponding pre-test scores after using WOW as a vocabulary learning tool, t
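As a rough illustration of the dependent (paired) t-test used here, the statistic can be computed from the per-pupil differences between post- and pre-test scores. The snippet below is a sketch only; the scores and names are invented for demonstration, not the study's data.

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """t statistic for paired samples: mean difference over its standard error."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

pre = [4, 5, 3, 6, 5, 4, 5, 6]   # hypothetical pre-test scores
post = [7, 8, 6, 9, 7, 6, 8, 9]  # hypothetical post-test scores
t = paired_t(pre, post)  # a large positive t indicates post > pre
```

A library routine such as `scipy.stats.ttest_rel` would return the same statistic along with its p-value; the hand computation above just makes the definition explicit.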
V. DISCUSSION
The WOW interactive vocabulary game, as an external stimulus, contributes to the increase in scores between the series of pre- and post-tests. The study shows that mobile phone applications in learning increase pupils' comprehension and understanding of vocabulary, as cited in Govindasamy et al. (2019) and Paulus et al. (2017). The vocabulary is presented in the form of images, and the pronunciation of each word can be heard and seen in the form of audio or video, which is also in line with Alnatour and Hijazi (2018). The use of online games promotes engagement via repetition, which contributes to a deeper understanding of the vocabulary and the ability to recall spelling easily. This is similar to a past study that supported the positive and long-lasting effect of online game learning on pupils' motivation (Darmi & Albion, 2017).
This research highlights two significant challenges for pupils in incorporating WOW online games: the limited number of mobile devices to access WOW and the need for a stable internet connection. Pupils use their own devices to access WOW at home. However, it is a challenging experience for some pupils from poor families due to the inability to purchase mobile data and mobile phones. Acknowledging these issues, stakeholders such as schools and parent-teacher associations should take the necessary steps to improve the pupils' learning experience by providing sufficient devices and internet connection through the optimization of the school's facilities. The school in the study has a problem with low maintenance and support of the internet and computers, which prompted the researchers to conduct this study by integrating WOW as supplementary material to be used at home.
Year 5 pupils had high Satisfaction, Attention, and Relevance motivation. However, their Confidence and Volition were at a moderate level. The results could educate pupils on utilizing various tools, such as online educational games in mobile applications, to enhance learning and awareness. These findings could also inform the design of online educational games in line with the national syllabus. The WOW online games provide a positive learning experience, multiple game types, scores, and challenges. They were also found to amplify intrinsic motivation and persistence to achieve desired goals and ranks through the healthy competition promoted in games. Self-directed learning is one of the skills highlighted in Education 4.0, as it facilitates independent learning via technology-enhanced educational tools, as suggested by Min and Nasir (2020). Indirectly, this application can be used as supplementary or revision material for Year 5.
In essence, educators need to embrace the technology-immersed classroom to cater to the current generation's inclination towards technology-enhanced learning. Simultaneously, there is a great need to employ appropriate online game-based learning that adheres to the national syllabus and outlined standards to allow easy integration of such innovations and pedagogical focus in the education system (Purgina et al., 2016). The study is limited to the development of the English vocabulary of the Year 5 KSSR syllabus. In addition, this study only measured pupils' views of online vocabulary games on one platform; hence, pupils' perceptions cannot be generalised to other English vocabulary online games. Exposing learners to multiple online games would require better resources and a longer time for the researchers to produce games of similar standard, based on the national syllabus, on multiple virtual platforms.
Maslawati Mohamad (Ph.D) was born in Johor, Malaysia. Currently, she is a senior lecturer at Faculty of Education, Universiti Kebangsaan Malaysia. Her main research interests are innovations in teaching and learning in ESL context, Teaching Reading in ESL context and English for Specific Purposes. Currently she has published 101 journal articles including 30 Scopus articles, 55 proceedings, six book chapters and a book. She is also a reviewer for a few international journals and editor for a local journal. She graduated from Universiti Kebangsaan Malaysia and her area of specialization is Teaching English as a Second Language. She had also presented her research output locally and internationally in various seminars and conferences.
Md Yusoff Daud was born in Kota Bharu, Kelantan. He is currently working as a senior lecturer at the Center of Innovation in Teaching and Learning, Faculty of Education, UKM since 2000 until now. He graduated from Universiti Kebangsaan Malaysia (UKM) in Bachelor of Science (Honours) and Master of Education (ICT). His specialization was in the field of integration of ICT in teaching and learning. The job scope includes teaching, supervising and writing books and journal articles. Until now, he had actively produced many journals, articles and books related to ICT cross discipline taught in schools and also in the higher education level. Moreover, he had also presented his research output locally and internationally in various seminars and conferences.
Mohd Jasmy Abd Rahman was born in Mersing, Johor in 1969. He is a Senior Lecturer at the Faculty of Education, UKM since 1998 until now. He specializes in the field of multimedia in education and has produced many journals, articles and books related to instructional technology in many disciplines taught in schools or at the higher education level. He is also very active in the field of Co-Curriculum and has been appointed as the Director of the Kesatria-UKM, center to coordinate the activities of the Uniformed Force.
Detection of intra-family coronavirus genome sequences through graphical representation and artificial neural network
In this study, chaos game representation (CGR) is introduced for investigating the pattern of genome sequences. It is an image representation of the genome for overall visualization of the sequence. CGR is a mapping technique that assigns each sequence base to a respective position in the two-dimensional plane to portray the DNA sequence. Importantly, CGR provides one-to-one mapping of nucleotides as well as of the sequence. A coordinate of the CGR plane can tell the corresponding base and its location in the original genome. Therefore, the whole nucleotide sequence (up to the current nucleotide) can be restored from one point of the CGR. In this study, CGR coupled with an artificial neural network (ANN) is introduced as a new way to represent the genome and to classify intra-coronavirus sequences. A hierarchical clustering study was done to validate the approach and was found to be more than 90% accurate when comparing the result with the phylogenetic tree of the corresponding genomes. Interestingly, the method makes the genome sequence significantly shorter (more than 99% compressed), saving data space while preserving the genome features.
Introduction
Representations of genomic data in numerical, graphical or audio form have gained importance in bioinformatics research. In recent years, many studies have shown different ways of representing genomes to find various DNA features. A graphical representation of DNA is convenient for achieving a visual analysis of the distribution of the nucleotide bases A, C, G and T. A mapping model was introduced as the H-curve, a three-dimensional representation of the DNA sequence, by Hamori and Ruskin in 1983 (Hamori and Ruskin, 1983). Later on, researchers found many exciting ways of graphical representation using different methods (Bielińska-Wąż and Wąż, 2017; Mo et al., 2018; Randić et al., 2006; Hoang et al., 2016; Touati et al., 2021; poor and Yaghoobi, 2019; Sun et al., 2020). Chaos game representation (CGR) is one of the graphical techniques; it assigns each DNA base to a respective position in the two-dimensional plane to portray the DNA sequence (Hoang et al., 2016). This 2-D representation was introduced to depict the local and global patterns of DNA by using iterated function systems (IFS) based on chaotic dynamics (Jeffrey, 1990). The CGR technique was recently shown to be an efficient classifier of helitron families in Caenorhabditis elegans genomes (Touati et al., 2021). The CGR tool is also useful for making comparative studies among genome sequences (Hoang et al., 2016). A genome sequence can be characterized in both graphical and numerical form through chaos game representation. Importantly, CGR provides one-to-one mapping of nucleotides as well as sequence mapping (Jeffrey, 1990). From one coordinate of the CGR plane, the corresponding base and its location in the original genome can be determined. Therefore, the whole nucleotide sequence (up to the current nucleotide) can be restored just from one point location of the CGR.
Additionally, a CGR portrays whole-genome data or parts of the sequence in a single plane, making the CGR tool more practical for comparative study (Deschavanne et al., 1999). Recently, CGR was used for capturing recurrence features from SARS-CoV-2 datasets, and the results were proposed for clustering (Olyaee et al., 2020). Mostly, CGR plots are made square in shape, but the representation can also be presented as an n-vertex polygon CGR (Xiaohui et al., 2014).
The year 2020 started with the threat of a pandemic, as a virus severely disrupted human life. Gradually, the virus spread almost all around the world. The human community is presently suffering from a pandemic caused by a positive-strand RNA virus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) (Yan et al., 2020). Moreover, there are several viral diseases on the World Health Organization (WHO) pandemic list, such as the Middle East respiratory syndrome (MERS), Disease X, severe acute respiratory syndrome (SARS), Ebola and influenza. These viruses have shown their impact on humans over several years (WHO). Initially, WHO suspected the source of the present pandemic to be Disease X. However, a novel coronavirus (2019-nCoV) was found to cause the COVID-19 pandemic (Wu et al., 2020; Zhu et al., 2020). Sometimes, a new virus emerges from variants of an existing one or after mutations. For example, SARS-CoV-2 is 79.5% similar in nucleotide sequence to the previous SARS-CoV family and shows 96% similarity to another coronavirus, SL-CoV-RaTG13. Beyond these two examples, there are many more cases of similarity in nucleotide sequences, as discussed in the study.
Genome studies involving sequence analysis in the laboratory are always expensive and time-consuming. Therefore, computation using an artificial neural network (ANN) can be an efficient tool for studying sequence analysis in segmentation (Cheng et al., 2012) and computing of DNA sequences (Zhong et al., 2020). An ANN consists of an input layer, hidden layers and an output layer. The smallest processing units of an ANN are neurons, which connect the different layers in the network (Garro et al., 2016). Each neuron is responsible for carrying the input from the previous layer and computing the weighted sum of its inputs. The interconnections, nodes and layers of a neural network are inspired by a simplified model of the human brain to compute different recognition algorithms. ANNs are used when there is plenty of data but no direct algorithmic solution for it. These data are used to train the network during its learning process (Hoang et al., 2020). A multilayer perceptron (MLP) with one or more hidden layers is more accurate given sufficient training data (Hoang et al., 2020). A multilayer ANN is capable of finding correlations among the input data. This machine learning tool is efficient in pattern recognition and can be applied to gene detection, sequence classification, disease detection and many other aspects of bioinformatics. The ANN method was used in the detection of high-dimensional complex cancer datasets (Lancashire et al., 2009). The technique is one of the deep learning methods applied to solve data analysis and computer vision problems; it was efficient in classifying DNA damage based on comet assay images (Atila et al., 2020). A neural network tool can therefore be beneficial for organizing virus sequences into their families in this context. The virus sequences are positive RNA strands, mostly around thirty thousand bases in size.
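A minimal sketch of the layer structure just described (input, hidden and output layers; each neuron computing a weighted sum of its inputs followed by an activation) might look as follows. The weights, biases and feature values below are arbitrary placeholders for illustration, not trained values from the study.

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer: a weighted sum per neuron plus a sigmoid."""
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

x = [0.2, 0.7, 0.1]  # input layer (e.g. features derived from a CGR image)
hidden = layer(x, [[0.5, -0.3, 0.8], [0.1, 0.4, -0.6]], [0.0, 0.1])  # 2 neurons
output = layer(hidden, [[0.7, -0.2]], [0.05])  # single output neuron
```

Training would adjust the weights and biases from labelled data (e.g. by backpropagation); the forward pass above only shows how a prediction flows through the layers.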
A classification and clustering study of different virus sequences was previously carried out using a support vector machine (SVM) and Haar wavelets (Paul et al., 2021). The present study explores the combined performance of CGR and ANN in detecting virus sequences from the same family. For this purpose, members of the coronavirus family, including SARS-CoV, SARS-CoV-2, MERS and Alphacoronavirus (Alpha CoV), were taken for the analysis.
Material and methods
Chaos Game Representation (CGR)
The CGR method converts a one-dimensional sequence into a two-dimensional graphical form. First, the DNA nucleotides are assigned to the four vertices of a unit square in the Euclidean plane. The coordinates of the nucleotide vertices are A = (0, 0), C = (0, 1), T = (1, 0) and G = (1, 1). The mapping process plots a dot starting from the centre P_0 = (0.5, 0.5) of the plane. It reads the DNA string (S_n) and checks the first character to plot in the plane. This step places the point P_1 at the halfway position between the last dot (the centre) and the vertex matching the first nucleotide. The next point, P_2, lies exactly halfway between P_1 (which now acts as the initial point) and the vertex of the corresponding nucleotide. The process continues in the same way up to the Nth, or last, character of the DNA sequence.
A short sequence 'ACTTGAATG' (S_n, where the length N = 9) was taken as an illustration of the CGR process, shown in Fig. 1a. Conversely, from a given CGR coordinate, the original sequence can be traced back, as in Fig. 1b. The genomic position within the sequence governs the geometric coordinates in the CGR plane: each coordinate of the CGR plane acts as a memory of the source sequence up to the nucleotide pointed to by that coordinate (Jonas et al., 2001). For example, from the third nucleotide (from left to right, P_3 = T) in S_n, the other nucleotides of the subsequence up to the third position (first and second, P_1 = A, P_2 = C) can be recovered, as in Fig. 1b. First, P_3 is found in the fourth quadrant of the giant square of the CGR plane, where the corresponding nucleotide vertex is 'T'. Now the CGR plane needs to be zoomed only within this quadrant. In this square (green square), P_3 falls in the first quadrant, on the 'C' side. Similarly, in the next step, the point P_3 lies on the 'A' vertex side (yellow square). If the vertices of these squares are arranged in ascending order of square size, the controlling vertices 'ACT' can be retrieved as the recovered subsequence.
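The forward mapping and the quadrant-by-quadrant recovery described above can be sketched in a few lines of Python (the function names are ours, for illustration):

```python
VERTICES = {'A': (0.0, 0.0), 'C': (0.0, 1.0), 'T': (1.0, 0.0), 'G': (1.0, 1.0)}

def cgr_points(seq):
    """Map a DNA string to its CGR coordinates, starting from the centre."""
    x, y = 0.5, 0.5
    pts = []
    for base in seq:
        vx, vy = VERTICES[base]
        # each point is halfway between the previous point and the base's vertex
        x, y = (x + vx) / 2.0, (y + vy) / 2.0
        pts.append((x, y))
    return pts

def recover_prefix(point, n):
    """Recover the n nucleotides leading to a CGR point by repeatedly
    identifying which quadrant (i.e. which vertex) the point lies in."""
    x, y = point
    bases = []
    for _ in range(n):
        vx = 1.0 if x >= 0.5 else 0.0
        vy = 1.0 if y >= 0.5 else 0.0
        bases.append(next(b for b, v in VERTICES.items() if v == (vx, vy)))
        x, y = 2 * x - vx, 2 * y - vy   # undo one midpoint step
    return ''.join(reversed(bases))
```

For the example in the text, the third CGR point of 'ACTTGAATG' recovers the subsequence 'ACT', matching the zoom-in procedure of Fig. 1b.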
Frequency Chaos Game representation (FCGR)
The Frequency Chaos Game Representation (FCGR) is a method derived from CGR. It converts a DNA sequence into a 2-D plot based on the k-mer occurrences of the DNA. In the FCGR method, the CGR unit square is divided into 2^k × 2^k sub-squares (Lichtblau, 2019). Each sub-square captures the k-mers according to the geometric position of the corresponding nucleotides of the DNA string. For k = 1, or first-order FCGR (FCGR_1), the CGR square is divided into four sub-squares, each holding one of the four single nucleotides, i.e. A, C, T and G (Poor and Yaghoobi, 2019). Fig. 2a shows the FCGR_1 plot of the sequence S_n; the distribution of monomers is A = 3, C = 1, G = 2 and T = 3. Similarly, there are sixteen sub-squares with dinucleotide features for FCGR_2, shown in Fig. 2b. Here, the dimer distribution is AA = 1, AC = 1, AT = 1, CT = 1, GA = 1, TG = 2 and TT = 1, with the rest zero. In the same way, FCGR_3, FCGR_4 or, in general, FCGR_k can be determined.
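A minimal sketch of the FCGR construction: count the k-mers and drop each count into the sub-square that the CGR walk of that k-mer lands in (function names are illustrative):

```python
from collections import Counter

VERTICES = {'A': (0.0, 0.0), 'C': (0.0, 1.0), 'T': (1.0, 0.0), 'G': (1.0, 1.0)}

def fcgr(seq, k):
    """Count the k-mers of `seq` into a 2**k x 2**k grid laid out by CGR geometry."""
    n = 2 ** k
    grid = [[0] * n for _ in range(n)]
    kmers = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    for kmer, count in kmers.items():
        x, y = 0.5, 0.5
        for base in kmer:          # the CGR walk locates the k-mer's sub-square
            vx, vy = VERTICES[base]
            x, y = (x + vx) / 2.0, (y + vy) / 2.0
        grid[int(y * n)][int(x * n)] += count
    return grid
```

For k = 1 this reproduces the monomer counts of the example sequence; a length-N sequence contributes N − k + 1 k-mers in total.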
FCGR to numerical coding
Each k-mer occurrence can be read off from the FCGR_k process, and the frequency matrix of the FCGR can be restructured according to the occurrences of the k-mers. The dimers (k = 2), trimers (k = 3) or general k-mers can be determined from the nucleotide sequences; the total number of possible dimers is 4^2 = 16, of trimers 4^3 = 64, and so on (4^k for k-mers). Here, FCGR_3 (k = 3) is shown as an example; any FCGR_k can be found in the same way. The smallest unit squares of the FCGR_3 plane were mapped based on the occurrences of the trimers. As an example, the SARS-CoV sequence (accession number: AY278741) of length 29,727 bases was represented in the form of FCGR_3 (Drosten et al., 2003). The 2-D DNA image of the corresponding sequence is shown in Fig. 3. The colour indicates the presence of codons in the genome sequence: a darker colour represents fewer occurrences, and a lighter colour the largest numbers of occurrences of the trinucleotides.
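For the numerical coding itself, the FCGR_k counts can equivalently be flattened into a fixed-order vector of length 4^k, which is the form fed to the classifier later; a sketch (the lexicographic k-mer ordering here is our choice, for illustration):

```python
from collections import Counter
from itertools import product

def kmer_vector(seq, k):
    """Flatten the k-mer occurrences of `seq` into a fixed-order
    numeric vector of length 4**k (one entry per possible k-mer)."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return [counts[''.join(p)] for p in product('ACGT', repeat=k)]
```

For k = 3 this gives the 64-element feature vector used as the network input, regardless of the genome's length.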
Classification approach by neural network
T. Paul et al., Expert Systems With Applications 194 (2022) 116559
Here, a feed-forward MLP model was designed based on supervised learning algorithms. At the beginning, the weights of the model are initialised with random values during the training phase. Then, with each feeding of the training data, or epoch, the model follows a specific learning protocol, adjusting the weights to minimise the error between the desired and actual output. Here, the learning process used the scaled conjugate gradient method. The output y_k(x) is defined by Eq. (2):

y_k(x) = f( Σ_i ω_i x_i + b ),   (2)

where the x_i are the input neurons, the ω_i are the propagated weights, f is the activation function, and b in (2) stands for the bias.
The FCGR_3 matrix elements were taken as the input of the neural network, as shown in Fig. 4. There are four virus family members to detect as outputs of the ANN. The ANN was designed with 64 input neurons for the 64 features of the FCGR_3 matrix and 4 outputs for the classification results for the four types of virus. Ten neurons were used in the hidden layer of the ANN model, built with the neural network pattern recognition application in Matlab 2019b.
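The study trains the 64-10-4 network with scaled conjugate gradient in Matlab's pattern recognition tool. Purely to illustrate the forward computation of Eq. (2) for this architecture (the weights, tanh activation and softmax output below are our assumptions, not the trained model), a pure-Python forward pass might look like:

```python
import math

def mlp_forward(x, W1, b1, W2, b2):
    """One forward pass of a 64-10-4 multilayer perceptron.
    W1: 10x64 hidden weights, W2: 4x10 output weights (illustrative)."""
    # hidden layer: weighted sum plus bias, tanh activation
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    # output layer: softmax turns the four scores into class probabilities
    z = [sum(w * hi for w, hi in zip(row, h)) + b for row, b in zip(W2, b2)]
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]
```

The returned four probabilities sum to one, one per virus class (MERS, SARS-CoV-2, Alpha CoV, SARS-CoV).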
Datasets
The different coronavirus sequences are available from the National Center for Biotechnology Information (NCBI) (NCBI-Database, 2020). The data for the present study were downloaded together with their GenBank accession numbers, locations and publication dates. A total of 1787 sequences were taken; the FCGR_2 and FCGR_3 representations of these sequences are provided as a supplementary file (S1).
Results and discussion
The CGR method is capable of translating a genome sequence into graphical and numeric data at the same time. Here, all the sequences were mapped into the FCGR_2 and FCGR_3 formats and then classified with the ANN. Of the 1787 sequences, randomly selected subsets of fifteen percent each were used for validation and for testing; the remaining seventy percent of the sequences were used as training data. The four different coronavirus types were classified, and confusion matrices were plotted for the training, validation and test data, as shown in Fig. 5a and Fig. 5b for the FCGR_2 and FCGR_3 formats, respectively. The target classes were marked 1, 2, 3 and 4 for MERS, SARS-CoV-2, Alpha CoV and SARS-CoV, respectively.
The size of the FCGR_2 dataset is sixteen features per sequence, and for the FCGR_3 format it is sixty-four. Here, the virus genome sequences (approximately 30,000 bases) were mapped into these CGR formats (of lengths 16 and 64). Fig. 5a and Fig. 5b show that the FCGR_3 data perform better than the FCGR_2 dataset for classification: FCGR_3 achieved 99.8% overall accuracy, while FCGR_2 was 90.4% accurate on the classification target. The mapped datasets (FCGR_2 and FCGR_3) achieved the desired outcomes. Therefore, other FCGR_k representations (k other than 2 and 3) were not prepared, as the resulting feature vectors would have been either very short or very long.
A dendrogram is a tree representing a hierarchical clustering. Dendrograms were built from the CGR data of eleven genome sequences as leaf nodes, in Fig. 6a and Fig. 6b. A random set of eleven sequences was taken as representative for easy readability of the plot; however, the program is adaptable to all 1787 sequences in one tree (some examples are given in the supplementary files). Randomly, 2-4 genomes were taken from each species of the 1787-sequence coronavirus dataset. A hierarchical binary clustering method was applied to form four distinct clusters for the groups (SARS-CoV, SARS-CoV-2, MERS and Alpha CoV) of the virus family; the branch colours indicate the different groups. To verify the result, a phylogenetic tree (Fig. 7) was plotted with the same genome sequences used in Fig. 6a and b. The dendrograms from the CGR data show that the genome features are preserved even after transforming the long genome sequences into the short FCGR_3 and FCGR_2 representations. The dendrograms (Fig. 6a and Fig. 6b) and the phylogenetic tree (Fig. 7) were made from the same genome sequences, selected randomly from the dataset (given in the supplementary files). The input sequence length for the phylogenetic tree is nearly thirty thousand bases (the coronavirus sequence length); in contrast, datasets of only 16 and 64 features represented each virus genome in the dendrograms. The phylogenetic tree shows the desired result using the full-length sequences, without applying any mapping or transformation of the data (Deng et al., 2006). Strikingly, the dendrograms, i.e. the hierarchical cluster trees from the CGR data, were nested into the same series of subsets, namely SARS-CoV, SARS-CoV-2, MERS and Alpha CoV. The dendrogram and the phylogenetic tree are not identical (Fig. 6 and Fig. 7), but importantly, the subsets of both trees have the same group members.
The only disagreement, caused by the Alpha CoV subset, accounts for the difference in the evolutionary distances between the two trees.
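The hierarchical binary clustering of the FCGR vectors can be sketched with a naive agglomerative scheme; the exact linkage used in the study is not specified here, so the single-linkage version below (with Euclidean distance and illustrative function names) is only a stand-in:

```python
import math

def single_linkage(vectors, n_clusters):
    """Naive agglomerative clustering: repeatedly merge the two clusters
    whose closest members are nearest (single linkage, Euclidean distance)."""
    clusters = [[i] for i in range(len(vectors))]

    def linkage(a, b):
        return min(math.dist(vectors[i], vectors[j]) for i in a for j in b)

    while len(clusters) > n_clusters:
        # find and merge the two closest clusters
        _, i, j = min((linkage(clusters[i], clusters[j]), i, j)
                      for i in range(len(clusters))
                      for j in range(i + 1, len(clusters)))
        clusters[i] += clusters.pop(j)
    return clusters
```

With four target clusters, applying this to the FCGR feature vectors groups sequences of the same virus type together, which is what the dendrograms of Fig. 6 display.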
The CGR process is a one-to-one sequential mapping in the numerical representation of genome data. As a result, the nature of the genome is preserved after the transformation. A great advantage of the transformation is the possibility of reconstructing the original genome data up to a given point of the CGR sequence. Moreover, the CGR behaves as a genome signature in a unique two-dimensional plot (Hoang et al., 2016; Deschavanne et al., 1999). The CGR pattern of the genome sequences is unique to each species, and each species shows only slight variation along the whole genome. The variation in CGR pattern among species depends primarily on a few factors, i.e. the base composition, unusual repetitions and stretches of bases in the genome (Deschavanne et al., 1999). Here, for the coronaviruses, there are only very small sequence differences between strains of the same virus at different locations. Therefore, the 2-D CGR plane shows only a mild pattern (colour) difference in some of the smallest unit squares (Fig. 3b). However, there are differences in the protein structure between different viruses, even from the same family, which gives a more diverse 2-D CGR plane than for variants of the same genome. An ANN is an efficient categorisation tool, which can classify the 2-D CGR matrix data using learning algorithms.
Conclusion
In this study, a virus classification method was introduced in a novel way. First, the virus genome sequences were translated into the CGR plane. There are sixty-four codons and twenty amino acids responsible for making proteins; coincidentally, the FCGR_3 plane consists of sixty-four smallest unit squares, so each codon maps to a specific smallest box on the CGR plane. All the genome sequences were plotted in the CGR plane, giving a 2^k × 2^k matrix for each sequence, so that protein-level features are mapped into the 2-D CGR plane. Visualisation of the CGR plot gives a first idea about the sequences. Here, matrices were created from various genomes of different coronaviruses to show that the CGR plane is efficient for classifying groups of viruses. From the results, it can be concluded that the clustering was achieved with a high accuracy rate. The method should therefore be even more efficient for diversified virus sequences in inter-family clustering. The classification method was designed based on the CGR method, and the ANN tool was used to obtain high accuracy. Besides, the technique was also used for distance analysis: virus classification using CGR was compared against phylogenetic analysis of the genome sequences and found to be comparable.
The work resulted in an artificial intelligence-based algorithm to detect virus sequences. The main benefit of the method is the encoding of the genome sequence (big data) into an organised, well-represented small dataset: the FCGR_2 and FCGR_3 representations are 0.053% and 0.213% of the size of their actual genome sequence, respectively. Two more dendrograms were plotted and given as supplementary files (S2 and S3), with the sequences taken randomly from the dataset (S1). There is a small clustering error (1 out of 36 samples across Fig. 6a, S2 and S3) in plot S3, where a SARS-CoV-2 sequence was grouped with the Alphacoronavirus family. Irrespective of this small clustering error, these trees give an idea of the virus segmentation for the majority (35 out of 36) of the studied samples. This error is plausible, as the number of CGR sequences used per hierarchical clustering tree is very small (11 to 13 sequences). When the system was instead trained with a large number of samples (1787 samples), it performed with an efficiency of 99.8% for FCGR_3 and 90.4% for FCGR_2, which implies good efficiency even when working from small feature sets. Consequently, using this method will reduce the processing time and memory occupancy of a genome database. This will improve the efficiency of virus-detection tools, as training can be done with a larger number of genome samples than in earlier methods. The virus sequences are converted into image-like representations, which are efficient for categorising the dataset into virus families.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Network models of disease spread play an important role in elucidating the impact of long-lasting infectious contacts on the dynamics of epidemics. Moment-closure approximation is a common method of generating low-dimensional deterministic models of epidemics on networks, which has found particular success for diseases with susceptible-infected-recovered (SIR) dynamics. However, the effect of network structure is arguably more important for sexually transmitted infections, where epidemiologically relevant contacts are comparatively rare and longstanding, and which are in general modelled via the susceptible-infected-susceptible (SIS)-paradigm. In this paper, we introduce an improvement to the standard pairwise approximation for network models with SIS-dynamics for two different network structures: the isolated open triple (three connected individuals in a line) and the k-regular network. This improvement is achieved by tracking the rate of change of errors between triple values and their standard pairwise approximation. For the isolated open triple, this improved pairwise model is exact, while for k-regular networks a closure is made at the level of triples to obtain a closed set of equations. This improved pairwise approximation provides an insight into the errors introduced by the standard pairwise approximation, and more closely matches both higher-order moment-closure approximations and explicit stochastic simulations with only a modest increase in dimensionality to the standard pairwise approximation.
Introduction
The spread of any epidemic can be conceptualised as a process on a network, where individuals are represented as nodes and epidemiologically relevant contacts as edges between nodes. An abundance of different network-based approaches to disease spread have been developed over the years, varying in scope, application, and sophistication. These range from, at one extreme, Markovian state-based models, where the probability of a system being in a certain state is given exactly by its master equations (see Kiss et al., 2017 for an introduction to such methods), to explicit stochastic simulations of epidemics on networks (see Goodreau et al., 2017 and Whittles et al., 2019 for recent examples) at the other. Both approaches have limitations. The exponentially increasing state-space with network size for state-based models means these exact descriptions are computationally unfeasible for most networks of real-world interest; and while stochastic simulations can deal with networks of these sizes, such methods offer little or no analytical tractability, making sensitivity to network structure hard to quantify and the causal determinants of the resulting dynamics hard to identify.
One network approach that aims to bridge this gap is moment-closure approximation. In a population, the rate of change of the number of infected individuals will depend upon how many susceptible-infected pairs there are. The rate of change of these pairs, in turn, depends upon the number of triples, and so on up to the full size of the population. Moment-closure approximation methods obtain a closed set of ordinary differential equations (ODEs) for the disease dynamics by approximating the dynamics of higher-order moments (e.g. triples) in terms of lower-order moments (e.g. singletons and pairs). By doing so, one obtains a relatively simple ODE model that retains much of the tractability of mean-field approximation models (the standard approach to modelling the spread of infectious diseases) but that also explicitly accounts for some aspects of network structure. Hence, there has been much interest and research into such methods, and into the errors such approximations introduce into a model (Keeling et al., 2016; Pellis et al., 2015; Sharkey, 2011; Taylor et al., 2012).
There has been considerable progress in this moment-closure method for diseases that can be modelled via the susceptible-infected-recovered (SIR) paradigm: the determinants of errors in such methods are detailed by Sharkey (2011); the exactness of a closure at the level of triples for tree-like networks is proven by Sharkey et al. (2015); this framework is extended by Kiss et al. (2015) to more realistic network structures that include loops; Trapman (2007) defines a reproduction number for pairwise approximation; House (2015) provides an algebraic moment-closure for such diseases based on Lie algebraic methods; while Pellis et al. (2015) explore the exactness of closures when infective periods are of a constant duration.
By comparison, progress has been modest for diseases with susceptible-infected-susceptible (SIS) dynamics, equivalent to the network-based contact process (Liggett, 2013), where recovery from infection does not lead to immunity. Despite its lower dimensionality than the SIR model, the possibility of reinfection can cause correlations between indirectly connected individuals to accrue over time. Consequently, moment-closure approximations on networks with SIS-dynamics are in general not exact, and their analytical tractability is limited. Of the progress that has been made: important formal results on their derivability from exact state-based models have been achieved by Taylor et al. (2012) and Taylor and Kiss (2014); Keeling et al. (2016) compare three systematic moment-closure approximations against stochastic simulations; House et al. (2009) develop a motif-based approach that outperforms simpler methods for particular network topologies; and a compact pairwise approximation has been developed that agrees well with ODE models of a much higher dimensionality.
Capturing network structure is at its most important when edges between nodes are sparse but relatively long lasting. This, alongside the more well-defined nature of epidemiologically relevant contacts, means that moment-closure methods are potentially most valuable for understanding the spread of sexually transmitted infections (STIs). However, most STIs are modelled using the SIS-paradigm (though notably not HIV). Thus, both understanding the errors introduced by moment-closure approximations for diseases with SIS-dynamics and improving upon these approximations are vital for the successful application of such methods to public-health problems.
In this paper, we introduce improvements to the standard pairwise approximation for diseases with SIS-dynamics. In particular, we do this for the isolated open triple and for k-regular networks, by explicitly obtaining equations for the rates of change of the errors between triples and their standard pairwise approximation. By applying a closure to these equations, we obtain a closed set of equations that better approximate the true dynamics of infection, with only a modest increase in dimensionality. In the case of the isolated open triple, such a model is exact, while for k-regular networks, closures at the level of order-four structures have to be applied. Specifically, in Section 2 we discuss the isolated open triple, obtaining exact expressions for the appropriate errors and their rates of change, thus obtaining an exact set of equations describing the disease dynamics on this network topology. In Section 3, we use the results from the isolated open triple to inform our improved approximation on k-regular networks, i.e. networks with no loops and where each individual has k neighbours. We consider both higher-order moment-closure approximations and explicit stochastic simulations for this type of network, to act as benchmarks for our improved pairwise approximation. In Section 4, we compare this improved approximation to the standard pairwise approximation, to higher-order approximation models, and to stochastic simulations. In Section 5, we discuss some of the limitations to such an approach, and highlight some potential areas where we believe further research could be fruitful.
The isolated open triple
In this section, we consider the errors introduced by performing pairwise approximation on isolated open triples for a disease with SIS-dynamics. We define an isolated open triple as a central individual c connected to two neighbouring individuals x and y, where x and y remain unconnected, as illustrated in Fig. 1. By investigating this topology, the errors introduced by a pairwise approximation are not obfuscated by errors introduced from any external source, and exact results using the master equation approach (Kiss et al., 2017) can be generated.
We consider a disease with SIS-dynamics; that is, upon recovery from infection (I) an individual returns to the susceptible (S) class. We can describe this process on the 3-network in terms of its states, of which there are eight, corresponding to whether each individual belongs to the S or I class, so a particular state A ∈ {S, I}^3. We denote the probability of being in a certain state, P(x = X, c = C, y = Y), as [X_x C_c Y_y], where X, C, Y ∈ {S, I}. If we consider recovery from infection, at rate c, and transmission across partnerships, at rate s, to be Poisson processes, then the above situation is a continuous-time Markov process, and can be fully described by its master equations (see Kiss et al., 2017, Chapter 2 for an introduction to this approach).
We set the initial probabilities of each state by assuming random initial conditions, i.e. by taking I_0 ~ U(0, 1) and setting [I_x I_c I_y]_0 = I_0 × I_0 × I_0 and so on. Note that, under this assumption, we have the symmetries [S_x S_c I_y] = [I_x S_c S_y] and [S_x I_c I_y] = [I_x I_c S_y]. Thus, the dynamics of the isolated open triple are fully and exactly described by the following six ODEs:

d/dt [S_x S_c S_y] = c([S_x I_c S_y] + 2[S_x S_c I_y])   (1)
d/dt [S_x S_c I_y] = c([S_x I_c I_y] + [I_x S_c I_y]) − (c + s)[S_x S_c I_y]   (2)
d/dt [S_x I_c S_y] = 2c[S_x I_c I_y] − (c + 2s)[S_x I_c S_y]   (3)
d/dt [S_x I_c I_y] = s([S_x S_c I_y] + [S_x I_c S_y]) + c[I_x I_c I_y] − (2c + s)[S_x I_c I_y]   (4)
d/dt [I_x S_c I_y] = c([I_x I_c I_y] − 2[I_x S_c I_y]) − 2s[I_x S_c I_y]   (5)
d/dt [I_x I_c I_y] = −3c[I_x I_c I_y] + 2s([S_x I_c I_y] + [I_x S_c I_y])   (6)

Note that the disease-free state [S_x S_c S_y] is absorbing, and so, given long enough, this system will always evolve to this state. Hence, without an external source of infection, a disease cannot persist indefinitely within an isolated open triple (or indeed within any isolated graph of finite topology). If we wish to consider initial conditions that do not assume random mixing, e.g. pure initial conditions, eight equations are required. These are given in full in Appendix A.
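As a quick numerical check, the master equations of the open triple can be integrated with a forward Euler scheme; the sketch below (ours, for illustration) uses the symmetric state ordering (SSS, SSI, SIS, SII, ISI, III), where SSI and SII each stand for two symmetric states, so the total probability is SSS + 2·SSI + SIS + 2·SII + ISI + III:

```python
def triple_euler_step(p, c, s, dt):
    """One forward-Euler step of the open-triple SIS master equations.

    p = (SSS, SSI, SIS, SII, ISI, III): state probabilities with the
    x <-> y symmetry, so SSI also stands for ISS and SII for IIS."""
    SSS, SSI, SIS, SII, ISI, III = p
    dSSS = c * (SIS + 2 * SSI)                       # recoveries into the absorbing state
    dSSI = c * (SII + ISI) - (c + s) * SSI
    dSIS = 2 * c * SII - (c + 2 * s) * SIS
    dSII = s * (SSI + SIS) + c * III - (2 * c + s) * SII
    dISI = c * (III - 2 * ISI) - 2 * s * ISI
    dIII = -3 * c * III + 2 * s * (SII + ISI)
    return tuple(v + dt * d
                 for v, d in zip(p, (dSSS, dSSI, dSIS, dSII, dISI, dIII)))
```

With random initial conditions and I_0 = 0.5, each of the eight underlying states has probability 1/8; the rates conserve total probability exactly, and probability mass flows monotonically into the absorbing disease-free state.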
The pairwise approximation for the isolated open triple
We now introduce the pairwise approximation for the open triple. It is important to note that we are considering a local moment-closure approximation, i.e. we are tracking the dynamics and the errors introduced for a particular subgraph, as opposed to a global moment-closure approximation, where closures are applied at the population level.
We begin by considering equations for the probability of individuals (nodes of the open triple) being in a certain state A ∈ {S, I}, where we denote P(a = A) as [A_a]. ODEs describing the rate of change of these states can be obtained by summing the rates of change from the appropriate triples, e.g.

d/dt [S_x] = c[I_x] − s[S_x I_c].
We observe that the state of an individual depends on the probability of pairs of individuals being in certain states: we denote P(a = A, b = B) as [A_a B_b], and also obtain these by summing the appropriate triples. We arrive at the following equations:

d/dt [S_x S_c] = c([S_x I_c] + [I_x S_c]) − s[S_x S_c I_y],   (9)
d/dt [I_x S_c] = c([I_x I_c] − [I_x S_c]) − s[I_x S_c] − s[I_x S_c I_y],   (10)

where [I_a] = 1 − [S_a] and [I_a I_b] = [I_b] − [S_a I_b] = [I_a] − [I_a S_b]. Thus, we see that the rate of change of the probability of the infection status of individuals depends on the infection status of certain pairs, which themselves depend on the infection status of certain triples. This set of equations is unclosed, as we do not have expressions representing the time evolution of the disease status of these triples. Typically, studies have obtained a closed set of equations by assuming that the infection statuses of individuals x and y are conditionally independent given the infection status of individual c (Sharkey, 2008; Sharkey, 2011; Pellis et al., 2015). That is, we make the following assumption:

[X_x S_c Y_y] ≈ [X_x S_c][S_c Y_y]/[S_c].   (11)

Observing that [S_x S_c] = [S_x] − [S_x I_c], and that [I_x S_c] = [S_c] − [S_x S_c], we obtain a closed set of three equations, which we refer to as Model 2, the pairwise approximation for the isolated open triple.
Quantifying errors
We can now compare the pairwise approximation model to the exact model (Eqs. (1)-(6)). The approximate model captures the dynamics of the system at low values of the transmission rate s, but if s is sufficiently high, the approximate model behaves qualitatively differently to the exact model: there is no absorbing state, and there is a non-zero stationary probability of individuals being infected (Fig. 2). While in Model 1 [S_x S_c S_y] never decreases, in Model 2 its approximation [S_x S_c][S_c S_y]/[S_c] can decrease. This decrease occurs because of the recovery transition from [I_c] to [S_c]. In Model 1, this transition only affects [S_x S_c S_y] through the state [S_x I_c S_y], and so can only ever increase the probability of [S_x S_c S_y]. In Model 2, however, the decoupling of the two pairs and the single means that this transition, under certain within-pair correlations, can lead to a decrease in [S_x S_c][S_c S_y]/[S_c].
Comparing the exact value of triples with their approximation at any given time, we observe that this approximation underestimates the probability of the state [I_x S_c I_y] and overestimates the probability of the state [S_x S_c I_y]. Indeed, the underestimate of [I_x S_c I_y] is exactly the overestimate of [S_x S_c I_y] (Fig. 3).
To understand why, consider the quantities

α_[SxScIy] := [S_x S_c I_y][S_c] − [S_x S_c][S_c I_y] and α_[IxScIy] := [I_x S_c I_y][S_c] − [I_x S_c]^2,

borrowing notation from Sharkey et al. (2015), which quantify the difference between triples and their approximations. By expanding

[S_c] = [S_x S_c S_y] + 2[S_x S_c I_y] + [I_x S_c I_y], [S_x S_c] = [S_x S_c S_y] + [S_x S_c I_y], and [S_c I_y] = [S_x S_c I_y] + [I_x S_c I_y],

and cancelling the appropriate terms, we find that the two quantities are equal but opposite in sign, and thus we define a_S as

a_S = α_[IxScIy] = −α_[SxScIy] = [S_x S_c S_y][I_x S_c I_y] − [S_x S_c I_y]^2.

Noting further that α_[SxScSy] = a_S, while clearly α_[IxScSy] = α_[SxScIy] = −a_S, we observe that the difference between true and approximate triple values for all triples with a susceptible central individual depends upon the single quantity a_S. Similarly, the difference between true and approximate triple values for all triples with an infected central individual depends only on one quantity, which we denote a_I:

a_I = α_[IxIcIy] = [S_x I_c S_y][I_x I_c I_y] − [S_x I_c I_y]^2.
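The cancellation above is easy to confirm numerically: computing α_[IxScIy] from the state probabilities and comparing it with the determinant form [S_x S_c S_y][I_x S_c I_y] − [S_x S_c I_y]^2 gives the same number for any state vector. A small sketch (ours; state ordering (SSS, SSI, SIS, SII, ISI, III) with SSI standing for both symmetric states):

```python
def a_S_two_ways(p):
    """Compute a_S both as alpha_[IxScIy] and as the determinant form."""
    SSS, SSI, SIS, SII, ISI, III = p
    Sc = SSS + 2 * SSI + ISI      # P(centre susceptible)
    IxSc = SSI + ISI              # P(x infected, centre susceptible)
    via_alpha = ISI * Sc - IxSc ** 2
    via_determinant = SSS * ISI - SSI ** 2
    return via_alpha, via_determinant
```

Both expressions agree to floating-point precision for arbitrary state probabilities, confirming the algebraic identity.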
Improving the pairwise approximation
If instead of using the approximations from Eq. (11) we let [S_x S_c I_y] = ([S_x S_c][S_c I_y] − a_S)/[S_c] in Eq. (9), and let [I_x S_c I_y] = ([I_x S_c]^2 + a_S)/[S_c] in Eq. (10), we obtain the rates of change of pairs in terms of singletons, pairs, and a_S. To obtain a closed set of equations, we must then consider da_S/dt, where the rates of change of the triples involved can be obtained from the exact model.
This yields

da_S/dt = −2(c + s)a_S + c φ_S,

where

φ_S = [S_x S_c S_y][I_x I_c I_y] + [S_x I_c S_y][I_x S_c I_y] − 2[S_x S_c I_y][S_x I_c I_y].   (19)

Once the triples appearing in φ_S are replaced by their error-corrected approximations, e.g. [S_x S_c S_y] = ([S_x S_c]^2 + a_S)/[S_c] and [I_x I_c I_y] = ([I_x I_c]^2 + a_I)/[I_c], the rate of change of a_S depends in turn on a_I, whose rate of change is given by

da_I/dt = −4c a_I + 2s(φ_I − a_I),

where

φ_I = [S_x I_c S_y][I_x S_c I_y] − [S_x S_c I_y][S_x I_c I_y].   (24)

We insist that φ_S and φ_I are 0 if either [S_c] = 0 or [I_c] = 0. Using the above equations, we arrive at a closed set of equations that describes exactly the disease dynamics of the open triple, without any reference to the particular states of the triples themselves, by tracking the error terms a_S and a_I. Model 3, the improved pairwise model, consists of the singleton and pair equations with the substitutions above, together with (φ_S and φ_I as described above)

da_S/dt = −2(c + s)a_S + c φ_S, with φ_S as in Eq. (19),   (28)
da_I/dt = −4c a_I + 2s(φ_I − a_I), with φ_I as in Eq. (24).   (29)

By including a_S and a_I and their time evolution in Model 3, we obtain a system of ODEs that describes exactly the dynamics of the open triple. However, it is worth noting that this new model is of no lower dimensionality than Model 1. Despite this, we believe it is still a valuable model to have obtained explicitly, for two principal reasons: firstly, by creating a system in which the errors a_S and a_I are tracked explicitly, we can obtain results about, and gain an understanding of, the ways in which the standard pairwise approximation (which ignores the action of a_S and a_I) fails to capture the disease dynamics of the isolated open triple; and secondly, the derivation of this model informs our strategy for deriving an improved pairwise approximation for k-regular networks, where there is a significant reduction in dimensionality.
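The stated rate equation for a_I can be verified directly: differentiating a_I = [S_x I_c S_y][I_x I_c I_y] − [S_x I_c I_y]^2 with the product rule and substituting the master-equation rates reproduces −4c·a_I + 2s(φ_I − a_I) identically. A sketch of that check (ours; same symmetric state ordering (SSS, SSI, SIS, SII, ISI, III) as before):

```python
def a_I_rates(p, c, s):
    """Rate of a_I two ways: the product rule applied to the master
    equations versus the closed form -4c*a_I + 2s*(phi_I - a_I)."""
    SSS, SSI, SIS, SII, ISI, III = p
    # master-equation rates for the three states entering a_I
    dSIS = 2 * c * SII - (c + 2 * s) * SIS
    dSII = s * (SSI + SIS) + c * III - (2 * c + s) * SII
    dIII = -3 * c * III + 2 * s * (SII + ISI)
    a_I = SIS * III - SII ** 2
    phi_I = SIS * ISI - SSI * SII
    product_rule = dSIS * III + SIS * dIII - 2 * SII * dSII
    closed_form = -4 * c * a_I + 2 * s * (phi_I - a_I)
    return product_rule, closed_form
```

The two returned values coincide for any state vector and any positive rates, since the equality is an algebraic identity rather than an approximation.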
Upon numerical evaluation, interesting results about the error terms a_S and a_I arise. When considering the whole state space, both error terms can be either negative or positive (a_S, a_I ∈ [−1/4, 1/4]). However, this is not the case when starting from either random or pure initial conditions; in both scenarios, a_S ≥ 0 and a_I ≤ 0. This is numerically demonstrated in Appendix B. Consequently, assuming random or pure initial conditions, we arrive at the following bounds:

[I_x S_c I_y] ≥ [I_x S_c]^2/[S_c],  [S_x S_c I_y] ≤ [S_x S_c][S_c I_y]/[S_c],  [I_x I_c I_y] ≤ [I_x I_c]^2/[I_c],  [S_x I_c I_y] ≥ [S_x I_c][I_c I_y]/[I_c].

Of these, the bound [I_x I_c]^2/[I_c] ≥ [I_x I_c I_y] is of particular interest. In previous moment-closure studies, it has been suggested heuristically that moment-closure models underestimate the probability of [I_x I_c I_y] triples (Taylor et al., 2012). This does hold if the system is closed at the level of individuals, i.e. if we assume that the infection statuses of neighbours are independent. The above result demonstrates that the opposite is true if the system is closed at the level of pairs. For random initial conditions, a_S and a_I appear to be uniquely defined by the pairs [S_x S_c] and [S_c I_y]; in other words, a_S and a_I appear to be functions of [S_x S_c] and [S_c I_y]. In theory, given values of [S_x S_c] and [S_c I_y], one could determine the values of a_S and a_I exactly, consequently reducing the dimensionality of Model 3, as equations for their time evolution would no longer be necessary. As a_S and a_I appear to be functions of two variables, they can be represented visually as surfaces, with [S_x S_c] and [S_c I_y] as the x and y-axes and with a_S or a_I as the z-axis. Included in the supplementary material are animations of the evolution of the shape of these surfaces as s increases. These animations confirm the above bounds.
[Figure caption fragment: ... overestimates the probability of [S_x I_c S_y]. In both cases, the overestimate of one is equal to the underestimate of the other. In both plots we set s = 1, c = 1.]
k-regular networks
In Section 2, we considered the accuracy of the standard pairwise approximation on the isolated open triple, and derived a closed, exact set of equations describing the errors such an approximation makes. We could do so because we could compute exactly the probability of the states of the open triple (Model 1), and, working backwards, we could derive expressions for α̇_S and α̇_I solely in terms of [S_xS_c], [S_cI_y], α_S, and α_I, i.e. solely in terms of pairs and error terms. Informed by these results, we move on to consider pairwise approximations for k-regular networks. k-regular networks are defined as networks in which each individual has k neighbours. Here, we consider k-regular networks which are infinite and contain no loops¹. Being infinite, the disease dynamics on such a network cannot be described exactly by a closed set of ODEs, unless a closure at some level is exact, as in Sharkey et al. (2015) for diseases with SIR-dynamics. As stated previously, the possibility of reinfection induces correlations between distantly connected individuals, meaning the method used by Sharkey et al. (2015) is not successful for diseases with SIS-dynamics. However, one can close the system at a higher level than pairs, and by doing so we can obtain expressions for α̇_S and α̇_I solely in terms of pairs and error terms. While these are still approximations to the true disease dynamics on a k-regular network, doing so makes a considerable improvement on the standard pairwise approximation. This is the strategy we employ in this section. While these k-regular networks are clearly idealisations far removed from any real-world sexual network, we believe that they are a useful example to study for a number of reasons. The impact of a small number of contacts, and the resulting dynamical correlations between non-adjacent individuals, is still relatively poorly understood (Keeling et al., 2016).
In these idealised networks, the errors such correlations introduce into moment-closure approximations are at their most pronounced, and are not muddied by errors introduced from other sources, such as clustering or heterogeneity. While heterogeneity in the number of contacts individuals have is apparent in any real-world sexual network, and is important to capture when modelling STIs, the effect of heterogeneity has been studied extensively (Eames et al., 2002; Simon and Kiss, 2015), and can oftentimes be modelled by introducing multiple risk-groups into a mean-field approximation model (e.g. Edwards et al., 2010). Additionally, in the case of an infinite network, each individual has exactly the same properties, allowing us to bridge the gap from local to global moment-closure approximation.
In this section, we define global moment-closures for k-regular networks. That is, we define a closure in terms of population-level quantities rather than for the probabilities of particular individuals being in certain states. Accordingly, we use the notation [S] to represent the proportion of individuals who are susceptible, [SI] to represent the proportion of pairs where one individual is susceptible and one individual is infected, and so on. While it is standard within the moment-closure literature to refer to numbers of these quantities, we find that dealing with proportions avoids much of the combinatorial rigmarole involved, and has a more obvious correspondence with the methods described in Section 2. The following results hold true whether referring to proportions or numbers; in Appendix C, we provide a conversion table to transform the results from this section to numbers, and provide the model derived in this section in terms of numbers.
While the derivation of this moment-closure is independent of that of the previous section, and can be treated as a separate modelling exercise, we will observe that there are clear analogies between the two. This correspondence occurs because k-regular networks are isotropic: the number of partnerships, as well as the transmission and recovery rates, are homogeneous across the population. An alternative conceptualisation is that if we were to randomly sample one individual (or a higher-order motif) from a k-regular network, the probability of it being in a given state is directly equal to the proportion of the population in that state. Conversely, if we consider a population of infinitely many isolated open triples from Section 2, then the proportion in a given state is equal to the probability of one triple being in that state. Therefore, while Section 2 is formulated in terms of probabilities and Section 3 is formulated in terms of proportions, we are effectively modelling interchangeable quantities.
Mean-field and pairwise approximations for k-regular networks
The following equation describes the rate of change of [S] for any network (Simon et al., 2011):

d[S]/dt = γ[I] − λ[SI]   (30)

In the case of k-regular networks, λ = kτ. By assuming the disease statuses of constituent individuals in pairs are uncorrelated, i.e. [SI] ≈ [S][I], we arrive at the mean-field approximation for the k-regular network, which is equivalent to the standard SIS-model:

Model 4 - The mean-field approximation for k-regular networks

d[I]/dt = kτ[S][I] − γ[I], with [S] = 1 − [I]

If instead we want to close the system at a higher-order moment, we must consider the rate of change of [SI]:

d[SI]/dt = γ([II] − [SI]) − τ[SI] + (k − 1)τ[SSI] − (k − 1)τ[ISI]   (32)

To close this system of equations, we must approximate the proportions of triples [SSI] and [ISI]. We use the standard pairwise approximation of Rand (1999) and Keeling (1999), commonly attributed to Kirkwood (1935). Using straight-line brackets to denote numbers of individuals, etc., this is expressed as

|XCY| ≈ ((k − 1)/k) × |XC||CY|/|C|

When terms are expressed in terms of numbers this must be scaled by the factor (k − 1)/k; this scaling factor disappears for k-regular networks when expressed in terms of proportions, where the closure reads [XCY] ≈ [XC][CY]/[C]. This can be shown by converting either formulation of the approximation to the other using the conversion table provided in Appendix C. Using this approximation, we obtain:

Model 5 - The pairwise approximation for k-regular networks

d[SS]/dt = 2γ[SI] − 2(k − 1)τ[SS][SI]/[S]
d[SI]/dt = γ([II] − [SI]) − τ[SI] + (k − 1)τ([SS][SI] − [SI]²)/[S]

where [S] = [SS] + [SI], [I] = 1 − [S], [IS] = [SI], and [II] = 1 − [SS] − 2[SI].
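As a concrete illustration, Models 4 and 5 can be integrated numerically. The sketch below (Python, forward Euler) is our own illustration under stated assumptions: the function names are invented, and the [SS] equation of Model 5 is the standard pairwise formulation consistent with Eq. (32). With k = 3 and τ = γ = 1, the mean-field fixed point is 1 − γ/(kτ) = 2/3, while the pairwise model settles to a strictly lower prevalence:

```python
def endemic_prevalence(rhs, y0, dt=0.001, steps=100_000):
    """Integrate an ODE system with forward Euler until (approximate) equilibrium."""
    y = list(y0)
    for _ in range(steps):
        dy = rhs(y)
        y = [yi + dt * di for yi, di in zip(y, dy)]
    return y

def mean_field(y, k=3, tau=1.0, gamma=1.0):
    # Model 4: d[I]/dt = k*tau*[S][I] - gamma*[I], with [S] = 1 - [I]
    I, = y
    return [k * tau * (1 - I) * I - gamma * I]

def pairwise(y, k=3, tau=1.0, gamma=1.0):
    # Model 5 (sketch): close [SSI] ~ [SS][SI]/[S] and [ISI] ~ [SI]^2/[S]
    SS, SI = y
    S = SS + SI
    II = 1 - SS - 2 * SI
    dSS = 2 * gamma * SI - 2 * (k - 1) * tau * SS * SI / S
    dSI = (gamma * (II - SI) - tau * SI
           + (k - 1) * tau * (SS * SI - SI * SI) / S)
    return [dSS, dSI]

I_mf = endemic_prevalence(mean_field, [0.01])[0]
SS, SI = endemic_prevalence(pairwise, [0.98, 0.01])
I_pw = 1 - (SS + SI)
```

For these parameters the sketch converges to I* = 2/3 (mean-field) and I* = 0.6 (pairwise), illustrating how accounting for pair correlations lowers the predicted endemic prevalence.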
Improving pairwise approximations for k-regular networks
Once again, we can look to improve the pairwise approximation by considering the rate of change of triples. Reintroducing subscripts (the positions of individuals are illustrated in Fig. 4), the states of x–c–y triples depend upon topologies consisting of four connected individuals: line graphs of length 4, [A_aX_xC_cY_y] and [X_xC_cY_yB_b], capturing the external force of infection acting upon individuals on the periphery of the triple, and star graphs with three outer individuals, [X_xC_cY_yZ_z], capturing the external force of infection upon the central individual.

¹ Diseases with SIS-dynamics on k-regular networks have been studied before and are referred to in the theoretical literature as the contact process on the homogeneous tree T_{k−1} (Liggett, 2013).
The rates of change for the triples in an infinite k-regular network, derived from House et al. (2009), are given in Appendix D.
As before, we define α values as the differences between triple values and their standard pairwise approximations. Once again, the analogous relations hold: as for the isolated open triple, the differences between triple values and their pairwise approximations depend only upon two quantities, α_S and α_I, which are as defined in Eqs. (15) and (16). We can use the triple equations from Appendix D to obtain expressions for α̇_S and α̇_I for this type of network; the order-four structures enter through terms such as

−(k − 2)(2[S_xS_cI_yI_z][SII] − [I_xS_cI_yI_z][SIS] − [S_xS_cS_yI_z][III])

Despite being calculated for triples within a k-regular network, we find that Φ_S = φ_S and Φ_I = φ_I as previously defined for the isolated open triple in Eqs. (20) and (24), and so we use the φ_S and φ_I terms henceforth. We therefore obtain a closed set of equations by once again expressing triples in terms of pairs and the error terms α_S and α_I. But now we must also make some approximation for the order-four terms, which we do by making closures analogous to the pairwise closure above. Thus, we can again express α̇_S and α̇_I as (complicated) functions of [SS], [SI], α_S and α_I. Using this, we arrive at a system of four ODEs, which we call the improved pairwise approximation for k-regular networks:

Model 6 - The improved pairwise approximation for k-regular networks

d[SI]/dt = γ([II] − [SI]) − τ[SI] + (k − 1)τ([SS][SI] − α_S[S])/[S] − (k − 1)τ([SI]² + α_S[S])/[S]   (49)

together with the corresponding equations for [SS], α_S, and α_I, where φ_S and φ_I are defined as in Section 2.
Higher-order moment-closure approximations
To assess the accuracy gained by modelling the error terms α_S and α_I, we compare our model to higher-order moment-closures. The first of these we refer to as a neighbourhood closure, previously described by Lindquist et al. (2011) and Keeling et al. (2016), where we model a central individual and their number of infected neighbours explicitly. This system is described by 2 × (k + 1) ODEs. The second of these we refer to as an extended triple closure, where we explicitly model a central triple and every neighbour of this triple. This system is described by 2^{3k−1} equations (though its dimensionality can be reduced by accounting for symmetries). In both cases, we approximate the external force of infection on outer individuals by exploiting the symmetry of the topology of the k-regular network. While each model is still an approximation to the true dynamics of a k-regular network, by virtue of closing the system at a higher order, these models are expected to have greater accuracy. From these higher-order models, we can also obtain estimates of the terms α_S and α_I, with which we can compare the α terms obtained from the improved pairwise model for the k-regular network (Model 6).
The neighbourhood closure
For the neighbourhood closure, we model a central individual and their number of infected neighbours explicitly. Visually, then, we are modelling a star topology. The rate of change of the state of the 'star' will depend upon both the internal configurations and the immediate neighbours of the star. We show this visually in Fig. 5. To close this system of equations, we make the assumption that the configurations of two overlapping 'stars' are conditionally independent given the infection statuses of the two shared individuals of the combined configuration. As we only need to consider the effect of an external force of infection if the relevant neighbour is susceptible (S), there are only two quantities relevant to the external force of infection on that individual, depending on the infection status of the original central individual (S or I), which we denote λ_S and λ_I accordingly. These terms are constructed by summing all configurations of the external neighbours including infected individuals, multiplied by the number of infected external neighbours in that configuration, divided by the sum of all possible configurations of external neighbours. Denoting a central individual in state A ∈ {S, I} with i ∈ {0, 1, …, k} infected neighbours as [A_i], the neighbourhood model (Model 7 - the neighbourhood approximation model for k-regular networks) can thus be described by a set of equations in which the external forces of infection λ_S and λ_I enter as described above. To obtain estimates for α_S (and α_I) from this model, we must derive the proportions of triples implied by the assumptions of the neighbourhood model. This can be calculated as follows: for a given triple [XCY], we let l indicate whether X and Y are infected.
3.3.2. The extended triple closure

For the extended triple closure, we model a triple and each of its neighbours explicitly, requiring 2^{3k−1} equations. The state of this system will depend on the infection statuses of the neighbours of these neighbours, i.e. the rates of change of states in the extended triple depend upon order-(4k − 2) configurations (illustrated in Appendix E). We approximate these external forces on the extended triple by assuming that the states of these higher-order structures amount to overlapping extended-triple topologies, conditionally independent given the states of their shared individuals, of which there are 2k. A detailed explanation of the extended triple closure model is provided in Appendix E.
Stochastic simulations
We use explicit stochastic simulations as our final benchmark for the accuracy of our approximate models. It is not computationally possible to construct infinite loopless networks for simulations. Instead, large random graphs where each individual has k neighbours can be constructed using the Molloy–Reed algorithm (Molloy and Reed, 1995), which should behave similarly for very large network sizes. We use the methods outlined by Keeling et al. (2016) to remove short loops and to efficiently calculate the quasi-equilibrium prevalence of infection.

[Fig. 5 caption: Here we illustrate the external force of infection on a neighbourhood in the neighbourhood approximation for k-regular networks, for the example k = 3. Shaded blue is our triple of interest, shaded orange are any additional individuals that are modelled explicitly, while shaded white are individuals not explicitly modelled who exert a force of infection on the explicitly modelled neighbourhood. In this approximation, we model a central individual c and the number of infected neighbours of c (here shown by x, y, and z). The external force of infection on the explicitly modelled neighbourhood will depend upon order-six structures: [X_xC_cY_yZ_zX0_{x0}X1_{x1}], [X_xC_cY_yZ_zY0_{y0}Y1_{y1}], and [X_xC_cY_yZ_zZ0_{z0}Z1_{z1}]. To close the system, we make the approximation that, e.g., [X_xC_cY_yZ_zX0_{x0}X1_{x1}] ≈ ([X_xC_cY_yZ_z] × [X_xC_cX0_{x0}X1_{x1}])/[X_xC_c].]
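A minimal stochastic benchmark of this kind can be sketched as follows (Python, standard library only). The graph construction is a bare configuration model in the spirit of Molloy and Reed (1995), without the short-loop removal or quasi-equilibrium machinery of Keeling et al. (2016); all names and parameter choices are illustrative assumptions, not the authors' code:

```python
import random

def random_regular_graph(n, k, rng):
    """Configuration-model construction of a simple k-regular graph:
    pair up stubs at random and resample until no self-loops or multi-edges."""
    while True:
        stubs = [v for v in range(n) for _ in range(k)]
        rng.shuffle(stubs)
        edges, ok = set(), True
        for a, b in zip(stubs[::2], stubs[1::2]):
            if a == b or (a, b) in edges or (b, a) in edges:
                ok = False
                break
            edges.add((a, b))
        if ok:
            adj = {v: [] for v in range(n)}
            for a, b in edges:
                adj[a].append(b)
                adj[b].append(a)
            return adj

def gillespie_sis(adj, tau, gamma, t_max, rng, seed_frac=0.1):
    """Event-driven SIS simulation; returns the prevalence at t_max."""
    n = len(adj)
    infected = set(rng.sample(sorted(adj), max(1, int(seed_frac * n))))
    t = 0.0
    while t < t_max and infected:
        # susceptible neighbours of infected individuals (one entry per S-I link)
        si_edges = [(i, j) for i in infected for j in adj[i] if j not in infected]
        rate = gamma * len(infected) + tau * len(si_edges)
        t += rng.expovariate(rate)
        if t >= t_max:
            break
        if rng.random() < gamma * len(infected) / rate:
            infected.remove(rng.choice(sorted(infected)))  # a recovery event
        else:
            infected.add(rng.choice(si_edges)[1])          # a transmission event
    return len(infected) / n

rng = random.Random(1)
adj = random_regular_graph(100, 3, rng)
prev = gillespie_sis(adj, tau=1.0, gamma=1.0, t_max=10.0, rng=rng)
```

Averaging `prev` over many seeded runs, on much larger networks and after a longer burn-in, would give the simulated endemic prevalence used as the benchmark in the comparisons below.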
Comparing models
In this section, we compare the previously described k-regular network models; in order of dimensionality, these are: the mean-field approximation model (Model 4), the pairwise approximation model (Model 5), the improved pairwise approximation model (Model 6), the neighbourhood approximation model (Model 7), and the extended triple approximation model. As we are considering a disease with SIS-dynamics, the models evolve to an endemic prevalence of infection (given a sufficiently high transmission rate); we use this as the primary metric for model comparison. All of these models are approximations of the true system, where there are infinitely many individuals, but we expect that as we increase the dimensionality of the approximation we also increase the accuracy of the model. We compare all approximate models to explicit stochastic simulations on networks of 10,000 individuals.
In Fig. 6, we compare the endemic prevalence generated by the four models that do not explicitly model α to stochastic simulations; the improved pairwise approximation (which utilises the dynamics of α) is considered in Figs. 7 and 8. While we notice large differences between the mean-field and pairwise models, the difference in prevalence between models decreases as we increase the dimensionality of the model. For k = 3, there is little difference between the neighbourhood and extended triple approximation models, and there is excellent agreement between the extended triple model and stochastic simulation. For k = 4 and k = 10, the extended triple model is omitted, as the neighbourhood approximation models match closely to stochastic simulations. This indicates that including further complexity in a model may be unnecessary, or may not be worth the increasing complexity or computational expense. For k = 2, there is still a significant difference between simulation and the extended triple model. However, this is unsurprising, as previous research (Keeling et al., 2016) has shown that errors persist even when much larger neighbourhoods are modelled explicitly. Fig. 6 also illustrates that as we increase k, models tend towards the mean-field approximation (which can be considered the k → ∞ limit). In Appendix F, we provide a proof of this for the pairwise model, and outline how this would be proved in the general case. We also see that as we increase k, the difference between the pairwise and neighbourhood approximation models decreases, although the pairwise model consistently predicts higher endemic prevalences. Now, we turn our attention to the improved pairwise approximation (Model 6), which tracks the errors α_S and α_I explicitly. Here we focus on the examples k = 2 and k = 3, though comparable results are found for all higher values of k. The error in our pairwise model depends on only one term: α_S.
This term captures the error between the 'true' values of triples and the standard pairwise approximation of their values. We can obtain estimates for α_S from each of our higher-order models, noting that the improved pairwise approximation (Model 6) is based on consideration of four connected nodes. Comparing α_S between models allows us to assess the extent to which the improved pairwise approximation is successful in capturing the errors introduced into the pairwise approximation by the dynamics of higher-order structures.
By plotting α_S as a function of the pairs [SS] and [SI] we obtain surfaces, whose shapes inform our intuition about the behaviour of α_S as we move through ([SS], [SI])-space. Firstly, we observe that the numerical result α_S ≥ 0 that was true for the isolated open triple also holds true for each of these models (numerically demonstrated in Appendix G). Hence, the bounds obtained for the triples [S_xS_cS_y], [I_xS_cI_y], and [S_xS_cI_y] in Section 2 for the isolated open triple also hold for k-regular networks. Secondly, we observe the similarity between the α_S surfaces obtained from the improved pairwise and neighbourhood approximation models. We do, however, see that these are smaller than α_S from the extended triple. In other words, models that include higher-order correlations, such as the extended triple, have higher values of α_S than are obtained from the improved pairwise model. Comparing the prevalence of infection obtained from these models, we observe only a minor difference between the improved pairwise and neighbourhood approximations. By including just two more equations (for α_S and α_I), we arrive at a model with an endemic prevalence much closer to results obtained from stochastic simulation, with only a marginal increase in dimensionality.
Unlike for the isolated open triple, α_I can be positive when k > 2 in each of the approximate models. However, this only occurs at very high transmission rates, typically when the endemic prevalence I* > 0.8 (Appendix G).
In an attempt to further improve the accuracy, and to reduce the dimensionality, of the model, we consider the effect of ignoring α_I on the shape of α_S in the improved approximation, noting that the values of α_S from the extended triple approximation are consistently larger than those from the other lower-order approximations (Fig. 7). We do this by setting α_I = 0, which is equivalent to using the standard pairwise approximation for triples with infected central individuals. This is in part justified by the fact that values of α_I are typically much smaller in magnitude than α_S (Appendix H). This assumption further reduces the dimensionality of the system, as we have one fewer variable. Moreover, as α_I is typically ≤ 0, ignoring it will increase α̇_S, meaning we will generate higher values of α_S. (Positive values of α_I can only occur at very high values of τ; at such values, the disease dynamics on the k-regular network are already well approximated by the standard pairwise approximation.) Indeed, comparing shapes of α_S (Fig. 7), we see this assumption provides a closer match to the values from the extended triple. In Fig. 8, we compare the endemic prevalence obtained using this α_I = 0 assumption against the extended triple approximation, as well as against the improved pairwise approximation where α_I is a dynamic variable. Ignoring α_I provides an estimate closer to the extended triple approximation than accounting for α_I explicitly, which in the case of k = 3 matches stochastic simulations closely. In Appendix I we consider the time evolution of these models.

[Fig. 7 caption: Exploring the shape of α_S for different approximate models. Here we compare the shape of the error term α_S as a function of [SS] and [SI] for improved pairwise models ((a) and (d)), and as a function of [S_xS_c] and [I_xS_c] for the neighbourhood and extended triple approximations ((b) and (c)), for the example k = 3. We observe that α_S in the improved pairwise (a) and the neighbourhood (b) approximation models are extremely similar, but that the improved pairwise approximation model underestimates this error compared to the extended triple approximation (c). By assuming α_I = 0 and α̇_I = 0 (d), the resulting α_S surface more closely resembles that of the extended triple model. In all plots, we set τ = 1, γ = 1.]
Discussion
Whenever detailed information on the underlying network structure is available, detailed stochastic simulation of an epidemic on a network is always the 'gold standard' for any real-world application. In the absence of such information, moment-closure approximation methods for the spread of infections promise relatively simple models that allow us to understand the effect of network structure on the dynamics of an epidemic. The success of such a method, however, depends upon understanding the errors introduced by moment-closure approximations, and upon refinements that minimise such errors. While this approach has been successfully applied to diseases with SIR-dynamics, the dynamic build-up of correlations between distant individuals for diseases with SIS-dynamics means success for infections with this natural history has been more limited. However, as the dynamics of most STIs can be well approximated by the SIS-paradigm, and given the importance of network structure in this case, further research into this area is paramount. Indeed, there is already a considerable body of literature concerning moment-closure approximations for SIS-dynamics (Taylor et al., 2012; Taylor and Kiss, 2014; Keeling et al., 2016; House et al., 2009; Simon and Kiss, 2015), as well as other network approaches to diseases of this type (Floyd et al., 2012; Lee et al., 2013; Wilkinson and Sharkey, 2013), demonstrating that this is an active research area.
This study improves upon the standard pairwise approximation by explicitly tracking the errors between the 'true' values of triples and their estimates from this approximation. We show that these errors are fully described by the quantity α_S for triples with susceptible central individuals, and by the quantity α_I for triples with infected central individuals. By tracking the time-evolution of these error terms, we improve upon the standard pairwise approximation by incorporating these terms into the modelling framework. For the isolated open triple (just three individuals connected in a line), both α̇_S and α̇_I are exactly described as functions of [S_xS_c], [I_xS_c], α_S and α_I; hence, in this case, the improved pairwise model is itself exact. For k-regular networks, α̇_S and α̇_I depend upon order-four structures. However, by approximating the prevalence of these structures via higher-order moment-closures, we obtain expressions for α̇_S and α̇_I solely in terms of pairs, α_S and α_I. While such a model is not exact, explicitly modelling the time-evolution of these errors markedly improves upon the standard pairwise approximation for k-regular networks, obtaining prevalence estimates comparable both to models closed at even higher orders and to explicit stochastic simulations.
The findings of this paper contribute towards understanding the shape and direction of the errors introduced by pairwise approximations. We show that the errors between triples and their standard approximations are quantified by just two values: α_S and α_I. Interestingly, we find numerically that α_S ≥ 0 and α_I ≤ 0, which informs us as to whether the standard pairwise approximation underestimates or overestimates the proportions of certain triples. While both bounds hold for the isolated open triple, only α_S ≥ 0 holds in general for k-regular networks. This result also appears to hold for the constituent triples of all other investigated topologies (line graphs up to length 10, star graphs with up to 10 neighbouring individuals, the extended triple with no external force of infection), while the result α_I ≤ 0 only appears to apply when the central individuals in a triple have no other connections outside of the triple. We hence believe that an analytical exploration of such bounds could be fruitful, and it would make an important contribution to this research area if such bounds could be proven generally. A deeper understanding of the shape, direction, and magnitude of such error terms is not only of interest to those concerned with using the improved pairwise approximation model described in this paper, but to any researcher interested in applying the standard pairwise approximation to a network model of a disease where recovery from infection does not lead to immunity.
In this paper, we compare approximations to the dynamics of k-regular networks closed at increasingly higher levels of complexity: from individual, to pair, to neighbourhood, to an extended neighbourhood. As we increase the dimensionality of a model, we expect to obtain more accurate results. On the other hand, models of high dimensionality are difficult to understand intuitively and are much more computationally expensive. Whether including such complexity is worthwhile depends on the task at hand. We believe that our improved pairwise approximation provides a reasonable compromise between intuition and complexity: this model is still described by a small number of ODEs, and has dynamics closely resembling those of the model closed at the level of neighbourhoods, more closely matching prevalence estimates obtained from stochastic simulations. An unexpected result is that by ignoring α_I, i.e. using the standard pairwise approximation for triples with infected central individuals, one appears to obtain a better approximation to the true dynamics. It is important to establish whether such a result holds generally, and if so why, or whether this result is a spurious convenience of k-regular networks.

[Fig. 8 caption: Comparing improved pairwise approximations against higher-order approximations for k = 2 and k = 3-regular networks. We compare the endemic prevalence I* obtained from the improved pairwise model (full model in orange; α_I = 0 in purple) against the neighbourhood (blue) and extended triple (green) approximations, as well as against explicit stochastic simulations (points), as we vary λ = τk, for (a) k = 2 and (b) k = 3-regular networks. In both (a) and (b), I* obtained from the improved pairwise approximation is very similar to I* obtained from the neighbourhood approximation. By assuming α_I = 0 and α̇_I = 0, the dynamics of the improved pairwise approximation are closer to those of the extended triple approximation, and match I* from stochastic simulations well for k = 3. For all models we set γ = 1. For stochastic simulations, each I* point is calculated as the average of 150 runs, and error bars indicate 95% confidence intervals.]
The results here consider the two most idealised networks: the isolated open triple is the simplest possible network topology including three individuals, while in k-regular networks each individual has exactly k neighbours and there are no closed loops within the network. We consider these idealisations because it is in these networks that network structure is most dominant and the errors introduced by moment-closure approximations are most pronounced. But this means there is fertile ground for further exploration on both local and global scales. On a local scale, a taxonomy of the errors that occur for a variety of different small topologies, as has been done by Pellis et al. (2015) for diseases with SIR-dynamics, would be a useful contribution to understanding the impact of local moment-closures for diseases with SIS-dynamics. On a global scale, understanding whether tracking the dynamics of error terms explicitly would be worthwhile in heterogeneous networks (building upon the work of Simon and Kiss (2015)), and assessing whether the same techniques can be applied in the presence of clustering, are important next steps.
This paper makes three assumptions common to the literature on the mathematics of epidemics on networks: first, that epidemiologically relevant contacts (the edges between nodes) are fixed throughout the epidemic and not dynamic; second, that these contacts are identical in kind, such that the probability of infection for an individual from any one of their partners is equal to that from any other partner; third, that individuals have exponentially distributed periods of infection (the Markovian assumption). Each of these is in some sense unrealistic: people's sexual partnerships change over time (it is a question of theoretical importance to what extent the dynamics of epidemics on dynamic networks can be approximated by the dynamics of epidemics on static networks, which has begun to be explored (Volz and Meyers, 2007; Bansal et al., 2010)); for individuals in more than one partnership, the frequency of sexual contact will differ between partnerships, hence the probability of transmission across partnerships will also differ; whilst periods of infection may be better modelled as having a constant duration. For SIR-dynamics, a variety of dynamic network models incorporating moment-closure approximations, or other low-dimensional ODE models, have been developed (Ball and Neal, 2008; Volz, 2008). So too are there a variety of dynamic network models for SIS-dynamics (e.g. Bauch and Rand, 2000; Leng and Keeling, 2018). Incorporating improved moment-closure approximations into such models, and exploring how the introduction of partnership formation and dissolution affects the errors introduced, are important next steps. While the contributions of steady and casual partnerships to the spread of STIs have been explored (Xiridou et al., 2003; Hansson et al., 2019), heterogeneity in edge type is an underexplored topic for moment-closure approximations, even for diseases with SIR-dynamics.
Assuming constant periods of infection, instead of making a Markovian assumption, can make closures exact for different network topologies in the case of SIR-dynamics (Pellis et al., 2015).
Exploring this alternative assumption and its effect on the errors α_S and α_I may prove an interesting avenue of research.
With regards to modelling the spread of STIs, it is clear that research should continue to develop more realistic and more sophisticated stochastic simulations. However, we believe that approximate methods have an important role to play, in both developing an intuitive understanding of the effect of network structure on the fate of the spread of STIs, and as a benchmark to compare such simulations against. It is in this context that improving the accuracy of such approximate methods is paramount, and it is in this context that we believe we make a valuable contribution to the literature.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgments
We gratefully acknowledge the Engineering and Physical Sciences Research Council and the Medical Research Council for their funding through the MathSys CDT (grant EP/L015374/1). We also thank the anonymous reviewers of this paper for their careful and considered comments that have led to a much improved manuscript.
A.1. Appendix A -An exact model for the open triple
Not assuming random mixing at initial conditions, eight ODEs are required to describe the dynamics of the open triple exactly; these are given below.
$\dot{[S_xS_cI_y]} = c\left([S_xI_cI_y] + [I_xS_cI_y] - [S_xS_cI_y]\right) - s[S_xS_cI_y]$   (59)

$\dot{[I_xS_cS_y]} = c\left([I_xI_cS_y] + [I_xS_cI_y] - [I_xS_cS_y]\right) - s[I_xS_cS_y]$   (60)

$\dot{[S_xI_cS_y]} = c\left([S_xI_cI_y] + [I_xI_cS_y] - [S_xI_cS_y]\right) - 2s[S_xI_cS_y]$   (61)

$\dot{[S_xI_cI_y]} = c\left([I_xI_cI_y] - 2[S_xI_cI_y]\right) + s\left([S_xS_cI_y] + [S_xI_cS_y] - [S_xI_cI_y]\right)$   (62)

$\dot{[I_xI_cS_y]} = c\left([I_xI_cI_y] - 2[I_xI_cS_y]\right) + s\left([I_xS_cS_y] + [S_xI_cS_y] - [I_xI_cS_y]\right)$   (63)

$\dot{[I_xS_cI_y]} = c\left([I_xI_cI_y] - 2[I_xS_cI_y]\right) - 2s[I_xS_cI_y]$   (64)

$\dot{[I_xI_cI_y]} = -3c[I_xI_cI_y] + s\left([S_xI_cI_y] + [I_xI_cS_y] + 2[I_xS_cI_y]\right)$
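As a sanity check, the open-triple system above can be integrated numerically. The following sketch (pure Python with a hand-rolled RK4 step; the variable names, the time step, and the no-transmission recovery test are ours, not the paper's) encodes the seven listed ODEs for states of the open triple x-c-y and recovers $[S_xS_cS_y]$ from conservation of probability:

```python
# Numerical sketch of Eqs. (59)-(65): exact SIS dynamics on the open triple
# x - c - y. States are written 'XYZ' for (x, c, y); probabilities sum to 1.

def open_triple_rhs(p, s, c):
    SSI, ISS, SIS = p['SSI'], p['ISS'], p['SIS']
    SII, IIS, ISI, III = p['SII'], p['IIS'], p['ISI'], p['III']
    d = {}
    d['SSI'] = c*(SII + ISI - SSI) - s*SSI                   # Eq. (59)
    d['ISS'] = c*(IIS + ISI - ISS) - s*ISS                   # Eq. (60)
    d['SIS'] = c*(SII + IIS - SIS) - 2*s*SIS                 # Eq. (61)
    d['SII'] = c*(III - 2*SII) + s*(SSI + SIS - SII)         # Eq. (62)
    d['IIS'] = c*(III - 2*IIS) + s*(ISS + SIS - IIS)         # Eq. (63)
    d['ISI'] = c*(III - 2*ISI) - 2*s*ISI                     # Eq. (64)
    d['III'] = -3*c*III + s*(SII + IIS + 2*ISI)              # Eq. (65)
    d['SSS'] = -sum(d.values())      # total probability is conserved
    return d

def integrate(p0, s, c, t_end, dt=0.01):
    # classical RK4 on the dictionary-valued state
    p, t = dict(p0), 0.0
    while t < t_end:
        k1 = open_triple_rhs(p, s, c)
        k2 = open_triple_rhs({k: p[k] + 0.5*dt*k1[k] for k in p}, s, c)
        k3 = open_triple_rhs({k: p[k] + 0.5*dt*k2[k] for k in p}, s, c)
        k4 = open_triple_rhs({k: p[k] + dt*k3[k] for k in p}, s, c)
        p = {k: p[k] + dt*(k1[k] + 2*k2[k] + 2*k3[k] + k4[k])/6 for k in p}
        t += dt
    return p

states = ['SSS', 'SSI', 'ISS', 'SIS', 'SII', 'IIS', 'ISI', 'III']
p0 = {st: (1.0 if st == 'III' else 0.0) for st in states}
# with no transmission (s = 0), all individuals eventually recover
final = integrate(p0, s=0.0, c=1.0, t_end=20.0)
```

With s = 0 the probability mass flows monotonically towards the all-susceptible state, which gives a simple check that the transition structure of the equations is consistent.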
A.3. Appendix C -Converting the improved pairwise model from proportions to numbers
While we find that considering proportions is a more convenient way to express the results from Section 3, we appreciate that others may prefer to use our results under the convention of terms referring to numbers of motifs. In this appendix we provide a conversion table to transform the terms from this section from proportions to numbers, and derive the improved pairwise model in terms of numbers. We stress that the improved pairwise model presented here is equivalent to the model presented in Section 3.
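The proportion-to-number conversions can also be checked numerically. The sketch below (with made-up motif counts; the function names are ours) verifies that the closure $[XCY] \approx [XC][CY]/[C]$ in proportions coincides with $|XCY| \approx \frac{k-1}{k}|XC||CY|/|C|$ in numbers, using conversions consistent with the motif counts quoted in this appendix ($[X] = |X|/N$, $[XY] = |XY|/(kN)$, $[XYZ] = |XYZ|/(k(k-1)N)$ for a k-regular network of N individuals):

```python
# Check that the proportion-level closure and the number-level closure agree
# under the Table 1 conversions. N, k and the motif counts are illustrative.

def closure_proportions(XC, CY, C):
    # [XCY] ~ [XC][CY]/[C], all quantities as proportions
    return XC * CY / C

def closure_numbers(nXC, nCY, nC, k):
    # |XCY| ~ (k-1)/k * |XC||CY|/|C|, all quantities as counts
    return (k - 1) / k * nXC * nCY / nC

N, k = 1000, 3
nXC, nCY, nC = 900.0, 450.0, 600.0   # illustrative counts |XC|, |CY|, |C|

# closure evaluated in proportions, then converted back to a triple count
prop = closure_proportions(nXC / (k * N), nCY / (k * N), nC / N)
count_from_prop = prop * k * (k - 1) * N

# closure evaluated directly in counts
count_direct = closure_numbers(nXC, nCY, nC, k)
```

The two routes give the same triple count, which is exactly the "unintuitive" (k−1)/k factor discussed in this appendix.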
First, to express the quantities in Section 3 in terms of numbers, we must be able to count the number of motifs relative to every individual. In a k-regular network, for every individual there are k pairs, for every pair there are (k−1) triples, and for every triple there are (k−1) line graphs of length 4 and (k−2) 4-stars. Using straight-line brackets |X| to denote the number of individuals in state X (and similarly for larger motifs), Table 1 below outlines the equivalent terms. Using these conversions, for example on Eqs. (30) and (32), we obtain the formally derived equations obtained by Taylor et al. (2012) (Theorem 1). We can also use these to convert our closures from proportions to numbers. Applying these, we obtain the unintuitive result that the closure $[XCY] \approx [XC][CY]/[C]$ in terms of proportions is equivalent to the closure $|XCY| \approx \frac{k-1}{k}\,|XC||CY|/|C|$ in terms of numbers. These conversions can likewise be applied to Eqs. (46) and (47).

To obtain the improved pairwise approximation in terms of numbers, we again consider the term a between a triple and its approximation; below we consider $a_{|ISI|}$. As before, we find $a_{|ISI|} = a_{|SSS|} = -a_{|SSI|} = a_{|S|}/(k-1)$. Defining $a_{|I|}$ as $|III||SIS| - |SII|^2$, we similarly find $a_{|SIS|} = a_{|III|} = -a_{|SII|} = a_{|I|}/(k-1)$. By applying the conversions from the table to the equations from Appendix D, we can obtain expressions for the rate of change of $a_{|S|}$ and $a_{|I|}$, where

$b_{|S|} = 2(k-1)\left(|I_aS_xS_cI_y||SS| - |I_aS_xS_cS_y||SI|\right) + 2|S_xS_cI_yI_z||SSI| - |S_xS_cS_yI_z||ISI| - |I_xS_cI_yI_z||SSS|$   (74)

$\dot{a}_{|I|} = -4c\,a_{|I|} + s\left(b_{|I|} + 2\phi_{|I|} - 2a_{|I|}\right)$   (75)

where

$\phi_{|I|} = 2|SIS||ISI| - 2|SSI||SII|$   (76)

and where

$b_{|I|} = 2(k-1)\left(|I_aS_xI_cI_y||SI| - |I_aS_xI_cS_y||II|\right) - 2|S_xS_cI_yI_z||SII| + |I_xS_cI_yI_z||SIS| + |S_xS_cS_yI_z||III|$   (77)

Finally, rearranging Eq. (68) and its analogues and substituting in $a_{|X|}$, we can obtain the closure for triples in the improved pairwise approximation. Thus we arrive at the improved pairwise approximation for k-regular networks, expressed in terms of numbers rather than proportions:

Model 6 in terms of numbers - The improved pairwise approximation for k-regular networks

$\dot{|SS|} = 2c|SI| - 2s\,\frac{(k-1)^2\left(|SS||SI| - a_{|S|}\right)}{k(k-1)|S|}$

A.4. Appendix D - The rate of change of triples in a k-regular network

The states of triples in a k-regular network depend upon the states of order-four network structures: line graphs of length four ($[A_aX_xC_cY_y]$, $[X_xC_cY_yB_b]$) and star graphs with three outer individuals ($[X_xC_cY_yZ_z]$). Assuming random initial conditions, by the symmetry of the system $[X_xC_cY_yB_b] = [B_aY_xC_cX_y]$, meaning only one length-four line-graph term is needed in the equations below. Given that $[SSI] = [ISS]$ and $[SII] = [IIS]$, the rates of change of these triples are described by six ODEs, which can be derived from the system of Eqs. (12) described by House et al. (2009) by omitting terms that include closed loops and by converting the equations from numbers to proportions via the table in Appendix C.
$\dot{[SSS]} = c\left(2[SSI] + [SIS]\right) - s(k-2)[S_xS_cS_yI_z] - 2s(k-1)[I_aS_xS_cS_y]$   (83)

$\dot{[SSI]} = c\left([SII] + [ISI] - [SSI]\right) - s[SSI] - s(k-2)[S_xS_cI_yI_z] + s(k-1)\left([I_aS_xS_cS_y] - [I_aS_xS_cI_y]\right)$   (84)

$\dot{[ISI]} = c\left([III] - 2[ISI]\right) - 2s[ISI] - s(k-2)[I_xS_cI_yI_z] + 2s(k-1)[I_aS_xS_cI_y]$   (85)

$\dot{[SIS]} = c\left(2[SII] - [SIS]\right) - 2s[SIS] + s(k-2)[S_xS_cS_yI_z] - 2s(k-1)[I_aS_xI_cS_y]$   (86)

$\dot{[SII]} = c\left([III] - 2[SII]\right) + s\left([SIS] + [SSI] - [SII]\right) + s(k-2)[S_xS_cI_yI_z] + s(k-1)\left([I_aS_xI_cS_y] - [I_aS_xI_cI_y]\right)$   (87)

$\dot{[III]} = -3c[III] + s\left(2[SII] + 2[ISI]\right) + s(k-2)[I_xS_cI_yI_z] + 2s(k-1)[I_aS_xI_cI_y]$

A.5. Appendix E - The extended triple model

For this model, we model a triple and each of its neighbours explicitly. Thus, for a k-regular network, 3k−1 individuals are modelled explicitly, meaning $2^{3k-1}$ equations are required to describe this model. By accounting for symmetries in the extended triple topology, one could reduce the dimensionality of this system. However, the method for constructing the set of ODEs algorithmically, described below, models each state explicitly. Writing an algorithm that accounts for such symmetries, while possible, would be somewhat cumbersome, and as such we decided not to pursue this. We approximate the external forces on this topology by assuming that the higher-order structures that the rates of change of states depend on can be approximated by conjoined extended triple topologies, conditionally independent given the state of shared individuals.
We construct the extended triple model in two steps. Firstly, we construct a model with SIS-dynamics on the finite topology of the extended triple; to do so, we provide an algorithm for constructing SIS-models on graphs of any arbitrary finite topology. Secondly, we add an external force of infection to this model, which we achieve via relabelling.
E.1. An algorithm for constructing SIS-models on graphs with arbitrary topology
In this section we outline an algorithm for constructing a model with SIS-dynamics on networks of arbitrary topology. We can rewrite the full equations for the open triple in matrix form as follows: let $x = \{[SSS], [SSI], [SIS], [SII], [ISS], [ISI], [IIS], [III]\}^T$; then $dx/dt$ is linear in x. States are ordered in this way so that they can be interpreted as binary strings (e.g. [SSS] as 000).

When modelling the extended triple, the rate of change of $[S_xS_cS_y; S_{x_0}I_{x_1}I_{c_0}I_{y_0}I_{y_1}]$ will depend upon some order-10 terms: $[S_xS_cS_y; S_{x_0}I_{x_1}I_{c_0}I_{y_0}I_{y_1}; I_{x_{00}}S_{x_{01}}]$, $[S_xS_cS_y; S_{x_0}I_{x_1}I_{c_0}I_{y_0}I_{y_1}; S_{x_{00}}I_{x_{01}}]$, and $[S_xS_cS_y; S_{x_0}I_{x_1}I_{c_0}I_{y_0}I_{y_1}; I_{x_{00}}I_{x_{01}}]$. We make a closure at this level by assuming, to take the first of these as an example:

$[S_xS_cS_y; S_{x_0}I_{x_1}I_{c_0}I_{y_0}I_{y_1}; I_{x_{00}}S_{x_{01}}] \approx [S_xS_cS_y; S_{x_0}I_{x_1}I_{c_0}I_{y_0}I_{y_1}] \times \dfrac{[S_{x_0}S_xS_c; I_{x_{00}}S_{x_{01}}I_{x_1}I_{c_0}S_y]}{[S_xS_cS_y; S_{x_0}I_{c_0}]}$

However, as we have not modelled $x_{00}$ and $x_{01}$ explicitly, the probability of state $[S_{x_0}S_xS_c; I_{x_{00}}S_{x_{01}}I_{x_1}I_{c_0}S_y]$ remains undefined. Since we start from random initial conditions, and since a k-regular network is isotropic, all extended triples within a k-regular network are equivalent. Thus, we obtain an expression for this state by taking into account the symmetry of a k-regular network, and by relabelling individuals so that states containing individuals not explicitly modelled are defined exclusively in terms of explicitly modelled individuals. We can now arrive at an expression for the external force of infection acting upon state A ($\lambda_A$), in which $1_{P=I}$ and $1_{Q=I}$ are indicator functions.

E.3. Relabelling generally

The particular relabelling depends upon the particular state of the extended triple, and upon the particular neighbouring individual whose external force of infection you are considering.
The required relabellings for the k = 3 case are given in Table 2, and the required relabellings for general k are given in Table 3. The header row gives the neighbouring individual whose external force of infection we are considering, while the leftmost column gives the new positions that the states in a given column now occupy. External nodes that contribute to the external force of infection always occupy the relabelled $x_i$ positions.

E.4. Constructing the extended triple model

To make the extended triple model, we begin by constructing the model for the relevant finite topology with SIS-dynamics, as outlined previously in this section. To construct a model approximating a k-regular network, we must add an external force of infection to individuals neighbouring the central triple.

Table 3. Relabelling for general k.

The procedure is as follows:
1. Construct ODEs for the SIS-dynamics on a graph of the relevant topology, with the central triple as the first three rows of the adjacency matrix.
2. Express each state N as a binary string b (of length l = 3k − 1).
3. For each vector b, loop through the entries i ∈ {4, ..., l}. If b(i) = 1, calculate the external force of infection on this node, $I_{ext}$, by relabelling.
5. Let e be the decimal number obtained by changing b(i) from 1 to 0, and let E be the state corresponding to this number. Add $sN I_{ext}$ to this ODE (i.e. $\dot{E} = \dot{E} + sN I_{ext}$).
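The binary-string encoding underlying steps 1-3 can be sketched in a few lines. The following is a minimal illustration (function names are ours) of the exact, closed SIS master equation on an arbitrary finite graph, with each state encoded as an integer whose bit i gives the status of node i; the external force of infection via relabelling is deliberately not included:

```python
# Exact SIS master-equation rates on an arbitrary finite graph.
# Bit i of `state` is 1 if node i is infected, 0 if susceptible.

def sis_rates(state, adj, s, c):
    """Yield (next_state, rate) pairs out of one state of the master equation."""
    n = len(adj)
    for i in range(n):
        if (state >> i) & 1:                  # node i infected: recovers at rate c
            yield state & ~(1 << i), c
        else:                                 # node i susceptible: infected at
            n_inf = sum((state >> j) & 1 for j in adj[i])  # rate s per infected neighbour
            if n_inf:
                yield state | (1 << i), s * n_inf

def rhs(p, adj, s, c):
    """dp/dt for the probability vector p over all 2**n states."""
    dp = [0.0] * len(p)
    for state, prob in enumerate(p):
        for nxt, rate in sis_rates(state, adj, s, c):
            dp[state] -= rate * prob
            dp[nxt] += rate * prob
    return dp

# open triple x - c - y as adjacency lists (node 1 is the centre)
adj = [[1], [0, 2], [1]]
p = [0.0] * 8
p[0b111] = 1.0                                # start from the all-infected state
deriv = rhs(p, adj, s=1.0, c=1.0)
```

From the all-infected state the only outward flows are the three recoveries, so the derivative of that state's probability is −3c, and the derivatives sum to zero, reflecting conservation of probability.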
A.6. Appendix F - Convergence to the mean-field approximation as k → ∞

We believe that as k → ∞, all models converge to the mean-field approximation. In this section, we show this is true for both the pairwise and improved pairwise approximation models, and outline how this would be approached in the general case.
For all models, Eqs. (30) ($\dot{[S]}$) and (32) ($\dot{[SI]}$) hold exactly; the models only begin to differ at the level of triples. Our contention is that as k → ∞, $[SI] \to [S][I]$. First, we note that because $[SI] = [S] - [SS]$, $[SI] = [S][I] \iff [SS] = [S]^2$. We consider $\dot{[SS]}$. Now, we introduce $\lambda = sk$, which remains constant as k increases. We make the assumption that $[SS] = [S]^2$ initially and consider their time evolution. These equations are equal, and therefore the relationship $[SS] = [S]^2$ continues to hold, conditional on $[SSI] = [S]^2[I]$. In general we need to show that the relationship $[SSI] = [S]^2[I]$ continues to hold, given that it holds initially.

Fig. 11. Numerical exploration of $a_S$ and $a_I$ for different approximate models of the k-regular network. We consider how $\min(a_X)$ and $\max(a_X)$, X ∈ {S, I}, vary with sk for different approximate models of the k-regular network: improved pairwise (left column), neighbourhood (centre column), extended triple (right column). These plots demonstrate that the bound $a_S \geq 0$ holds for all approximations of the k-regular network, but that $a_I \leq 0$ only holds for the case k = 2. For k > 2, $\max(a_I) > 0$ given that s is sufficiently high. These transmission rates correspond to high endemic prevalences; in all cases $I^* > 0.8$. In all plots we set c = 1.

F.1 - Convergence for the pairwise approximation model

Under the standard pairwise model, $[SSI] = [SS][SI]/[S]$. Assuming that $[SS] = [S]^2$, it is clear that $[SSI] = [S]^2[I]$. Given that at t = 0, $[SS] = [S]^2$, and that $[SS] = [S]^2 \implies \dot{[SS]} = \dot{[S]^2}$, the convergence of the standard pairwise model is proved by induction.

F.2 - Convergence for the improved pairwise approximation model

Under this model $[SSI] = ([SS][SI] - a_S)/[S]$, i.e. $[SSI] = [S]^2[I] \iff ([SS] = [S]^2,\ a_S = 0)$. Let us assume that $[SS] = [S]^2$, $a_S = 0$, and $a_I = 0$. Then $[SSI] = [S]^2[I]$ and, by examining Eqs. (40) and (43), we find that $\dot{a}_S = 0$ and $\dot{a}_I = 0$.
Given that at t = 0, $[SS] = [S]^2$, $a_S = 0$, $a_I = 0$, and that these conditions imply $\dot{[SS]} = \dot{[S]^2}$, $\dot{a}_S = 0$, $\dot{a}_I = 0$, the convergence of the improved pairwise model is proved by induction.

F.3 - Convergence in the general case

More generally, we believe that as k → ∞, spatial correlation at a particular level is only introduced by spatial correlations at a higher level. For example, correlations only enter the pairwise model if there are correlations at the level of pairs, and correlations only enter the improved pairwise model if there are correlations at the level of pairs and triples (the a terms), etc. Given that by assumption we start with no spatial correlation at any level, it follows that correlations are never introduced. However, we believe that the proof of this more general claim is beyond the remit of this paper.
A.7. Appendix G - Exploring $a_S$ and $a_I$ for different approximate models of the k-regular network

See Figs. 10 and 11.

A.8. Appendix H - Exploring the shape of $a_I$ for different approximate models

See Fig. 12.

Fig. 12. Exploring the shape of $a_I$ for different approximate models. Here we compare the shape of the error term $-a_I$ as a function of [SS] and [SI] for the improved pairwise model (a), and as a function of $[S_xS_c]$ and $[I_xS_c]$ for the neighbourhood and extended triple approximations ((b) and (c)), for the example k = 3. We observe that the $a_I$ surfaces in all three models are very similar, and that their magnitude is much smaller than that of the corresponding $a_S$ surfaces (Fig. 8). In all plots, we set s = 1, c = 1.
Strategies to Extend Bread and GF Bread Shelf-Life: From Sourdough to Antimicrobial Active Packaging and Nanotechnology
Bread is a staple food worldwide. It commonly undergoes physico-chemical and microbiological changes which impair its quality and shelf-life. Staling causes organoleptic impairment, whereas microbiological spoilage causes visible mould growth and the invisible production of mycotoxins. To tackle this economic and safety issue, the bakery industry has been working to identify treatments which ensure bread safety and extend shelf-life. Physical methods and chemical preservatives have long been used, but new frontiers have recently been explored. Sourdough has turned out to be an ancient yet novel technology for preserving standard and gluten-free bread. Promising results have also been obtained with alternative bio-preservation techniques, including antifungal peptides and plant extracts. Active packaging, which absorbs and/or releases compounds effective against bread staling and/or antimicrobials preventing the growth of undesirable microorganisms, has emerged as an area of food technology that can confer many preservation benefits. Nanotechnologies are also opening up a whole universe of new possibilities for the food industry and consumers. This work thus aims to provide an overview of the opportunities and challenges that traditional and innovative anti-staling and anti-spoilage methods offer to extend bread shelf-life, and to provide a basis for driving further research on nanotechnology applications in the bakery industry.
Introduction
Bread is a staple food worldwide and it comes in many types, shapes, sizes and textures, depending on national and regional traditions. It can be consumed as artisan bread, freshly prepared every day by bakers, or it can be found as commercially packaged sliced bread. According to the International Association of Plant Bakers (AIBI), there is a great number of differences in the patterns of bread production and consumption among European countries. In Greece, Turkey and Italy, craft bakeries represent the highest share and are a rooted food tradition, while in Bulgaria, the Netherlands and the UK, there are high percentages of market share of industrial bakeries, which meet the growing demand for sliced and wrapped bread [1].
Bread is a dynamic system undergoing physical, chemical and microbiological changes which limit its shelf-life. Physical and chemical changes determine loss of freshness, in terms of desirable texture and taste, and lead to the progressive firming-up of the crumb. Microbiological spoilage by bacteria, yeasts and moulds consists of visible mould growth, invisible production of mycotoxins and formation of off-flavours, which may be produced even before fungal outgrowth is visible. Spoiled bread hence represents a matter of concern, as it causes enormous food waste (i.e., 5-10% of world bread production is lost) [2] and economic losses both for the bakery industry and the consumer [3], as well as human intoxication due to contamination with fungal mycotoxins. The latter are, in fact, often associated with several acute and chronic diseases in humans [4].
To tackle this economic and safety issue, the bakery industry has long been working to identify and implement strategies and methods which allow a longer bread shelf-life, minimal changes in bread organoleptic quality, and bread safety.
Physical methods like ultraviolet (UV) light, infrared (IR) radiation, microwave (MW) heating and ultra-high pressure (UHP) treatments are used to destroy post-baking contaminants [5]. Chemical preservatives, such as acetic acid, potassium acetate, sodium acetate, and others, are applied in accordance with the limits laid down by Regulation (EC) No. 1333/2008 on food additives [6]. Sourdough has also recently become an established form of food bio-preservation, and the role played by lactic acid bacteria (LAB) as bio-agents and inhibitors of bread spoilage has been scientifically explored and highlighted.
Active packaging is one more option, with the rationale of absorbing and/or releasing compounds effective against bread staling and/or antimicrobials preventing the growth of undesirable microorganisms [7]. Nanotechnologies have also been applied in order to design active packaging, and are opening up a whole universe of new possibilities for both the food industry and consumers.
The aim of this work is to provide an overview of the opportunities and challenges that traditional and innovative anti-staling and anti-spoilage methods can offer to extend bread shelf-life, and to provide a basis for driving further research on nanotechnology applications in the bakery industry. In detail, an overview of the factors causing bread staling and spoilage is first provided, and traditional and current strategies used to extend bread shelf-life are discussed. Future trends in packaging systems for food preservation are then presented, with emphasis on antimicrobial active packaging and nanotechnology applications. A hint at the promising results obtained from the application of traditional and innovative methods to extend gluten-free (GF) bread shelf-life is also given.
Literature Search
The study layout was first designed, and an extensive literature search for papers in major literature databases such as SCOPUS, PubMed and ScienceDirect was conducted from September to December 2017. Several combinations of terms related to bread shelf-life and food packaging were used: bread shelf-life, bakery products shelf-life, gluten-free bread shelf-life, bread staling, bread spoilage, sourdough, bread packaging, active packaging, bread and nanotechnology, bread and nanoparticles, bread and silver nanoparticles, bread and montmorillonite, bread and essential oils, bread and antimicrobials. During the search, time limits were also set: the year of publication was to be later than 2007, in order to collect the most up-to-date published works.
The websites of authoritative Institutions, namely the European Food Safety Authority, Food Agriculture Organization and World Health Organization were also consulted.
Inclusion and Exclusion Criteria
Duplicate papers, articles not accessible to the authors, and research studies dealing with foods other than bread and bakery products were excluded.
Reference lists of articles were also scanned to further identify relevant papers that were not found in electronic databases.A screening of the full text resulted in a further exclusion of papers.
Bread Staling
Bread staling refers to all the chemical and physical changes that occur in the crust and crumb during storage and that gradually decrease consumer acceptance, as the bread is no longer considered "fresh". It is accompanied by loss of crispiness, increase in crumb firmness and crumbliness (loss of cohesion), and loss or change of taste and aroma [8]. Staling is, in fact, mainly detected organoleptically through the changes in bread texture, taste and aroma.
The overall staling process thus consists of two separate phenomena: the firming effect caused by moisture transfer from crumb to crust during storage, and the intrinsic firming of the cell wall material, which is associated with starch re-crystallization during storage [9]. In the first case, the crust readily absorbs moisture from the interior crumb, which has a moisture content of about 45%. Evidence shows that during a storage period of 100 h, the crust moisture may increase to 28% [9]. Crumb staling is, on the other hand, a more complex and less well understood phenomenon, and the failure to understand the mechanism of the process is the key hindrance to the development of a preventive strategy for bread staling.
Many theories have nevertheless been proposed and discussed so far, such as the important role of starch retrogradation, specifically of amylopectin retrogradation, although it is not directly responsible for bread staling, the role of gluten proteins, and that of gluten-starch interactions.
Storage temperature, moisture migration, crumb-crust redistribution of moisture, and moisture redistribution among components are other factors affecting the staling rate. Some anti-staling inhibitors are amylases and debranching enzymes, lipases, lipoxygenases, non-starch polysaccharide-modifying enzymes, proteases, surface-active lipids, and others.
As far as gluten-free (GF) bread is concerned, staling represents one of the major issues, as this bread is mainly based on starch [10]. Moreover, gluten-free bread often contains a greater density of fat than its gluten-containing counterparts [11], and hence is likely to undergo lipid oxidation. Off-flavour formation might thus impair the GF bread sensory profile.
Bread Spoilage
Bread ingredients support the growth and multiplication of microorganisms at various stages of bread production, processing, packaging and storage. Moulds, yeasts and bacteria are the main causative agents of bread microbial spoilage. They are able to grow under a great variety of conditions, including where other microorganisms are not competitive, and are able to survive in the bakery environment [12].
Mould growth is the most common cause of bread spoilage. Moulds are actually responsible for post-processing contamination. Bread taken fresh out of the oven is, in fact, free of moulds and mould spores, since they are inactivated by heat during the baking process; however, loaves can be contaminated by moulds during cooling, slicing, packaging and storage, as the environment inside a bakery is not sterile and is a likely source of contamination [9]. Mould development on bread is slow, and if the relative humidity of the atmosphere is below 90% moulds do not grow; however, they can grow rapidly in a humid atmosphere, and especially on a loaf inside a wrapper. When bread is wrapped hot from the oven, water droplets condense on the inside surface of the wrapper and mould growth is promoted. Sliced wrapped bread is even more susceptible to mould spoilage, as a wider surface is exposed to mould infection. Several factors can influence the rate of mould growth: the type of flour, the processing method, the packaging and the storage conditions.
Rhizopus nigricans, with its white cottony mycelium and black dots of sporangia, the green-spored Penicillium expansum or P. stolonifer, and Aspergillus niger, with its greenish to black conidial heads, are the moulds most commonly involved in bread spoilage, and they are thus referred to as "bread moulds" [9] (Table 1). In detail, in wheat bread Penicillium, Aspergillus, Cladosporium, Mucorales and Neurospora species have been observed, with Penicillium spp. being the most common type of bread mould. In black bread, Rhizopus (nigricans) stolonifer is the most common spoilage mould, and it appears as a white cottony mycelium with black sporangia. Some moulds are also responsible for mycotoxin production, thus presenting a severe risk to public health.
Although the dominant bread spoilage microbiota is comprised of moulds, sporeforming bacteria represent one more serious matter of concern for bread quality and safety. Sporeforming bacteria are likely present on the outer parts of grains, and subsequently in the air of the bakery environment; hence ingredients and/or bakery equipment are the primary sources of contamination [13].
The main causative microorganism of bacterial spoilage is Bacillus subtilis, which forms endospores that easily survive baking and can then germinate and grow within 36-48 h inside the loaf to form the characteristic soft, stringy, brown mass with an odour of ripe pineapple or melon, due to the release of volatile compounds such as diacetyl, acetoin, acetaldehyde and isovaleraldehyde [9]. These bacteria also produce amylases and proteases that degrade the bread crumb.
Other species, such as Bacillus pumilus, Bacillus amyloliquefaciens, Bacillus megaterium, Bacillus licheniformis, and Bacillus cereus, have also been identified [13] (Table 1). According to a recent study by Valerio and colleagues (2012) [14], B. amyloliquefaciens might actually be the main species related to the occurrence of rope spoilage, as in previous works it was mis-identified as B. subtilis.
Yeast spoilage is the least common of all types of microbial spoilage, and yeasts, like moulds, do not survive the baking process. Contamination rather occurs during cooling and, in the case of industrial bread, it especially happens during the slicing step.
More than 40 species of fungi have been described as contaminant agents of baked foods [15]. They are responsible for yeast spoilage, which especially determines bread off-odours. In particular, when spoilage is due to fermentative yeasts, an alcoholic or estery off-odour is usually recorded. Saccharomyces cerevisiae, which is generally used as baker's yeast, tends to be encountered most often.
Contamination can also be due to filamentous yeasts. In that case, the phenomenon known as "chalky bread" occurs, meaning that white spots develop in the crumb. This type of spoilage is sometimes confused with mould growth; however, the distinction can be made because yeasts produce single cells and reproduce by budding. H. burtonii, P. anomala and Scopsis fibuligera are responsible for the early spoilage of bread products, growing in low, white, spreading colonies that sometimes look like a sprinkling of chalk dust on the product surface [16]. The most common and troublesome chalk mould is, however, P. burtonii, which grows very fast on bread and is very resistant to preservatives and disinfectants. Otherwise, yeast spoilage can be caused by Z. bailii, T. delbrueckii, Pichia membranifaciens, and Candida parapsilosis [17] (Table 1). The same moulds, bacteria and yeasts are the causative agents of GF bread microbial spoilage.
3.1.3. Gluten-Free Bread Shelf-Life

GF bread shelf-life deserves a separate analysis and discussion, because the different formulation of this bread in comparison with standard bread implies additional challenges in identifying and optimizing preservative strategies.
GF bread must be formulated with GF flours, and the lack of gluten implies, at the technological level, some differences from standard breadmaking [10]. When GF ingredients are mixed, the air suspended by the mixing process and the carbon dioxide produced by yeast fermentation cannot be entrapped in the gluten network which forms in standard breadmaking [10]. The result is hence the formation of irregular and unstable cells, leading to a lack of cell structure, a reduced volume and a dry, crumbly and grainy texture. GF dough also has a more fluid-like structure and generally contains higher water levels than wheat-based dough, if an acceptable crumb is to be obtained [18]. To allow the formation of a complex emulsion-foam system and of a large number of minute air bubbles, surface-active ingredients (e.g., egg whites, lipoproteins) are also generally added, as they allow the entrapment of bubbles through the formation of a protective film around the gas bubbles, and also prevent them from coalescing. Gums, stabilizers, and pre-gelatinized starches are also used, so that gas occlusion and stabilization can occur [10,19]. However, the very high water level and the addition of fat ingredients and/or GF starches give GF bread a more rapid firming behaviour and a higher susceptibility to microbial spoilage [18,20,21]. Some researchers have also proposed that in wheat bread the gluten network slows down water migration from crumb to crust and moisture loss, hence a faster ageing can be observed in GF bread [20,22].
Staling represents one of the major issues in GF bread, as the latter is often mainly based on starchy ingredients [10]. Interestingly, the ingredients themselves determine a more rapid or slower staling process. For example, it was observed that rice-based GF bread is more prone to retrogradation during storage than wheat bread [23]. This might be explained by the fact that a predominant factor in bread staling is likely starch retrogradation, which involves a progressive association of gelatinized starch segments into a more ordered structure, but also the water activity (a_w). Rice bread has a very high a_w (≈0.987), which determines a lower microbial stability and hence a shorter shelf-life.
More recently, however, the Irish research group led by E. Arendt has observed that factors other than the presence or absence of gluten may influence the staling rate of GF bread, among which is the ratio of amylose to amylopectin [24].
Physical Treatments
The bakery industry has traditionally relied on the use of physical methods to extend bread shelf-life; UV light, IR radiation, MW heating and UHP are some examples.
In detail, UV light is a powerful anti-bacterial treatment, the most effective wavelength being 260 nm. It is used to control the occurrence of mould spores on bread, and among its applications is the direct UV irradiation of the surfaces of wrapped bakery products, which allows an extension of shelf-life. It is nevertheless worth mentioning its generally poor penetrative capacity and the difficulty of treating a multi-surfaced product, as mould spores likely present in the air cell walls within the bread surface are protected from the irradiation [8].
MW heating allows loaves of bread to be heated rapidly and evenly without major temperature gradients between the surface and the interior. Generally, a 30-60 s treatment renders wrapped bread mould-free. However, the application of this treatment is limited by the fact that it can cause condensation problems, which can adversely affect the appearance of the product [8].
IR treatment can also be used to destroy mould spores, with the advantage of not adversely affecting the quality and appearance of the product or the integrity of the packaging material [8]. Moreover, IR treatment minimizes problems due to condensation or air expansion. Among its disadvantages, it is worth mentioning that it is quite costly for multi-sided products, which are required either to rotate between heaters or to be treated in two separate ovens [8].
Chemical Treatments
Chemical preservatives can be used alternatively. Weak organic acids (e.g., propionic and sorbic acid) are used to stifle the growth of undesired microorganisms and hence extend bread shelf-life. However, application limits have been laid down within the European Union and are currently set by Regulation (EC) No. 1333/2008 of the European Parliament and of the Council of 16 December 2008 on food additives [6].
Generally speaking, potassium, sodium or calcium salts of propionic and sorbic acid are the forms most commonly used, because of their higher water solubility and easier handling than the respective corrosive acids [25]. Limits of 0.2% (w/w) and 0.3% (w/w) are established for sorbate and propionate addition, respectively [6], in both prepacked sliced bread and rye bread. In the case of prepacked unsliced bread, only a maximum of 0.1% propionate is permitted.
It was observed that high concentrations of sorbate or propionate are required for antifungal activity, but such concentrations likely alter bread sensory properties. Moreover, prolonged usage of these preservatives against spoilage fungi may lead to the development of fungal resistance to them [26,27].
In vitro screening experiments have shown that the addition of propionate to rye sourdough bread is not recommended, due to the resistance of P. roqueforti [28] and to the fact that propionate has only a slight effect on mould inhibition when included in bread at pH 6 [2].
As regards sorbate, it seems to be more efficient than propionate at inhibiting bread spoilage, but it is rarely used in breadmaking because of its negative impact on bread volume [29].
Addition of ethanol is one more traditional method, and it is somewhat preferable to other chemical preservatives. Ethanol concentrations ranging between 0.2% and 12% are reported to increase bread shelf-life [30]. Moreover, its addition on the bread surface (0.5% w/w) contributes to improving the effect of sorbate and propionate [31]. Berni and Scaramuzza (2013) [32] have recently observed the potential of ethanol to inhibit Crysonilia sitophila, more commonly known as "the red bread mould", and H. burtoni, also known as "the chalky mould", on packed and sliced bread at very low (0.8%) and medium (2.0%) ethanol concentrations, respectively. Interestingly, it is also worth mentioning that no restrictions apply to the use of ethanol as a food preservative, although its presence must be listed on labels. As it is an effective additional barrier to inhibit fungal growth in bread and/or bakery products in general, promising results have been reported by Hempel and colleagues (2013) [33] upon the addition of an ethanol emitter in active packaging.
Sourdough
In the past, natural and flavoured bread with a long shelf-life was obtained instinctively, using a traditional long fermentation process: sourdough. Based on that, the bakery industry has recently started to reconsider this traditional fermentation method as a possible replacement for chemical preservatives, thus guaranteeing a clean label. Sourdough has thus become an established form of food bio-preservation, and the role played by LAB as bio-agents and inhibitors of bread spoilage has been scientifically explored and highlighted.
The use of LAB-fermented sourdough in itself, nevertheless, achieves only a limited preservative effect. Acidification through sourdough fermentation was found to inhibit the endospore germination and growth of Bacillus spp. responsible for rope spoilage [37]. However, the pH drop and the acidification, which is usually associated with the production of lactic and acetic acids, are reportedly parameters which can extend bread shelf-life only to a limited extent and/or do not extensively influence mould inhibition [2].
The anti-bacterial, anti-microbial and anti-fungal ability that sourdough LAB have been shown to possess is related to the active compounds that they produce and/or release, which are complementary to chemical preservatives or can even substitute for their use. The metabolites which mainly exert anti-fungal activity are specifically low molecular mass compounds, such as cyclic dipeptides, hydroxy-fatty acids, phenyl and substituted phenyl derivatives (e.g., 3-phenyllactic, 4-hydroxyphenyllactic, and benzoic acid), diacetyl, hydrogen peroxide, caproate, reuterin, and fungicidal peptides.
Heterofermentative LAB specifically release anti-fungal organic acids [38]. Lactobacillus sanfranciscensis CB1 produces, for instance, a mixture of organic acids, such as acetic, butyric, caproic, formic, n-valeric, and propionic acids. The anti-mould activity of this microorganism against Fusarium, Penicillium, Aspergillus and Monilia spp. is mainly due to these compounds [39]. Strains of Lb. plantarum have been shown to exert a broad anti-fungal activity, thanks to the production of inhibiting compounds like 4-hydroxyphenyllactic and phenyllactic acid. It is also evident that sourdough started with anti-fungal strains of Lb. plantarum allows the content of calcium propionate in wheat bread to be reduced by ≈30%, with no negative effect on bread shelf-life [40]. Lb. reuteri releases active concentrations of reutericyclin, a low molecular weight antibiotic active against Gram-positive LAB and yeasts, as well as reuterin, a compound containing the hydrated monomeric and cyclic dimeric forms of 3-hydroxypropionaldehyde, which has antimicrobial activity toward several food spoilage organisms, among which are Gram-positive and -negative bacteria, yeasts and moulds.
These compounds nevertheless have relatively high minimal inhibitory concentrations, ranging from 0.1 to 10,000 mg/kg [2], despite being produced in low amounts in the fermentation substrate. For that reason, it has been hypothesized that the antifungal inhibitory effect likely originates from complex mechanisms of synergy among the low molecular mass compounds [2].
As regards the synergistic activity of compounds and the antifungal effect of sourdough LAB, Lb. reuteri, Lb. plantarum and Lb. brevis were shown to delay fungal growth by eight days in the presence of calcium propionate (0.2%, w/w). The antifungal activity which Lactobacillus buchneri and Lactobacillus diolivorans have against the growth of moulds on bread has often been attributed to a combination of acetate and propionate. The preservative effect of Lb. amylovorus has been attributed to the synergy among more than ten antifungal compounds, including cyclic dipeptides, fatty acids, phenyllactate and phenolic acids. Interestingly, it has also been observed that the production of antifungal compounds in sourdough is species- and substrate-specific [2].
The key parameters to investigate, in order to understand the role played by LAB as a key biotechnology in bread preservation, are rather the synergy of LAB metabolic versatility, favoring adaptation to the various processing conditions; the mechanisms of proto-cooperation with autochthonous yeasts during sourdough fermentation; the carbohydrate and amino acid metabolism; the synthesis of organic acids, exopolysaccharides and antimicrobial compounds; and the conversion of phenolic compounds and lipids by LAB [38,41,42].
The role played by yeasts other than baker's yeast (i.e., S. cerevisiae) is also worth mentioning. Their application has, in fact, been suggested as a promising alternative for bread preservation. Wickerhamomyces anomalus LCF1695 is, for instance, used as a mixed starter in combination with Lb. plantarum 1A7 [43]; Meyerozyma guilliermondii LCF1353 harbours marked antifungal activity toward P. roqueforti DPPMAF1; and sourdough fermented with a combined starter culture (M. guilliermondii LCF1353, W. anomalus LCF1695 and Lb. plantarum 1A7) allows excellent results in terms of extended shelf-life to be obtained [44].
In addition to the antifungal metabolites of lactic acid bacteria, a preservative effect of inhibitory peptides derived from the substrate has also been observed. A water extract from beans, in combination with sourdough fermented with Lb. brevis AM7, contained three natural inhibitory compounds: two phaseolins and one lectin. Their combined activity delayed fungal growth by up to 21 days, leading to a shelf-life for the bread comparable to that found when using Ca propionate (0.3% w/w).
Novel Strategies to Improve Bread Shelf-Life: Active Packaging
Following the development of the Active and Intelligent Packaging Regulations by the European Commission [45], active packaging can be defined as packaging intended to extend the shelf-life of packaged foods, or to maintain and/or improve the condition thereof, by releasing or absorbing substances into or from the food or its surroundings. Alongside intelligent packaging, active packaging belongs to innovative packaging systems that are supposed to interact with the food, rather than being a mere passive barrier protecting and preserving packaged food from physical, chemical and biological damage, as conventional packaging is.
Different types of active packaging systems are available. Generally speaking, they can mainly be categorized as absorbing and releasing systems [46]. The former remove undesired compounds, such as oxygen, from the package environment, while the latter release compounds, such as antioxidants, preservatives and antimicrobials, to the packaged food or into the head-space of the package [46]. Absorbers and releasers can come in the form of a sachet, a label or a film. Sachets are commonly placed loose in the package head-space, while labels are fixed to the lid. Any direct contact with food should be prevented, as the function of the system might be impaired and migration might occur. Nanotechnology has also enabled the design of polymers with improved barrier function against oxygen.
As far as bread and GF bread are concerned, active packaging absorbing oxygen and releasing antimicrobials has been used in order to extend the shelf-life thereof.
Active Packaging with Oxygen Absorbers
Inclusion of an oxygen absorber in the packaging has been used in bakery products, such as bread and cakes, and in prepared foods, e.g., sandwiches and pizza [47].
Oxygen increases the rate of staling of bread and bakery products, and promotes lipid oxidation in bread containing fats, such as rye bread and GF bread. As a consequence, removing oxygen from the packaging contributes to preserving the desirable texture and taste of bread.
Strategies such as the removal of oxygen from the package by vacuum technology are not suitable for bakery products. As a matter of fact, vacuum packaging evacuates most of the oxygen present in the package to levels below 1%, and oxygen is removed also from the bread's interior pores. This would cause the collapse of bread and rolls, and organoleptic properties commonly appreciated by consumers, such as softness, would be lost.
The use of Modified Atmosphere Packaging (MAP) to extend bread shelf-life has some drawbacks as well. In detail, the highly porous structure of bread does not permit complete oxygen elimination during the gas exchange through the package; hence, oxygen may persist in the food package. The quantity of oxygen detected in the package headspace may also depend on the permeability of the packaging material to this gas. In the case of a permeable packaging, oxygen can accumulate over time to a level sufficient to support mould growth [48].
The addition of oxygen absorbers in the packaging to ensure oxygen removal has been proposed as an alternative strategy to overcome vacuum and MAP packaging drawbacks.
Oxygen absorbers, such as ATCO ® (Standa Industrie, Caen, France) or Ageless ® (Mitsubishi Gas Chemical Co., Tokyo, Japan), have been used to reduce the concentration of oxygen in food packaging. The effectiveness of ATCO ® oxygen absorbers in extending the microbial shelf-life of sliced bread has been investigated, and it was observed that the oxygen concentration decreased to below 0.1% within a few days of packaging. In addition, the absorbers did not have any effect on the sensory quality of bread over storage [49]. Hence, oxygen absorbers made it possible to prevent both bread staling and spoilage, since oxygen is an essential growth factor for moulds and strictly aerobic microorganisms. In 1998, Berenzon and Saguy [50] studied the effect of oxygen absorbers in reducing lipid oxidation of military ration crackers at various storage temperatures (i.e., 15, 25 and 35 °C) over 52 weeks. Nielsen and Rios (2000) [51] investigated the effect of oxygen absorbers on decreasing spoilage organisms such as Penicillium commune and P. roqueforti. They also observed that A. flavus and Endomyces fibuliger persisted at oxygen levels of 0.03%. However, the combination of oxygen absorbers with essential oils from mustard (Brassica spp.), cinnamon (Cinnamomum spp.), garlic (Allium sativum) and clove (Syzygium aromaticum) was found effective [51]. More recently, Latou and colleagues (2010) [52] found that the use of an oxygen absorber in combination with an alcohol emitter was as effective as chemical preservatives (e.g., calcium propionate and potassium sorbate) in decreasing the growth of yeasts, moulds and B. cereus. The system also inhibited lipid peroxidation and rancid odours over 30 days of storage.
In general, oxygen scavenging technologies are based on iron powder oxidation, ascorbic acid oxidation, catechol oxidation, photosensitive dye oxidation, enzymatic oxidation, unsaturated fatty acids, or yeast immobilised on a solid material [46]. Nevertheless, the majority of oxygen absorber systems are based on the ability of iron to form non-toxic iron oxide under appropriate humidity conditions [46]. As a consequence of iron oxidation, rust formation can be observed; the system is therefore contained in a sachet to prevent the iron powder from imparting colour to the food. Nevertheless, the use of sachets has some drawbacks: their contents could leak and contaminate the product, and the absorbers might be accidentally ingested by the consumer. Polymer films and labels have been developed to overcome these issues [46].
Oxygen absorbers must meet specific criteria to be effective and succeed commercially. In detail, they should absorb oxygen at an appropriate rate, should be compact and uniform in size, and should be neither toxic nor prone to unfavorable side reactions. The choice of oxygen absorbers is influenced by food properties, such as the size, shape, weight and water activity (aw) of the food, the amount of dissolved oxygen in the food, the desired shelf-life of the product, and the permeability of the packaging material to oxygen [53].
Active Packaging with Releasers: Antimicrobial Releasing Systems
Antimicrobial active packaging is the most common active packaging system; it releases antimicrobial agents onto the food surface (where microbial growth predominates), inhibiting or retarding microbial growth and spoilage. The main goals of an antimicrobial active packaging system are (i) safety assurance, (ii) quality maintenance, and (iii) shelf-life extension; as a consequence, antimicrobial packaging could play an important role in food safety assurance.
Several antimicrobial agents may be incorporated in the packaging system, namely chemical antimicrobials, antioxidants, biotechnology products, antimicrobial polymers, natural antimicrobials and gas.The most commonly used are organic acids, fungicides, alcohols and antibiotics [46].
Organic acids, such as benzoic acids, parabens, sorbates, sorbic acid, propionic acid, acetic acid, lactic acid, medium-size fatty acids and mixture thereof, have strong antimicrobial activity and have been used as preservatives in food preparations.
Fungicidal activity has been reported for benomyl and imazalil. Antioxidants have also been reported as effective antifungal agents, due to the strict oxygen requirement of moulds [46].
Among alcohols, ethanol has shown strong antibacterial and antifungal activity, although it is not effective against the growth of yeast. However, the use of ethanol in food packaging has some drawbacks due to its strong, undesirable chemical odour. As far as bread and bakery products are concerned, active packaging ethanol emitter systems have been used to extend their shelf-life. The use of ethanol in food packaging falls under Regulation 2011/10/EC [54]; it is generally regarded as safe (GRAS) in the United States as a direct human food ingredient. Labuza and Breene (1989) [55] report the use of Ethicap ® , a food-grade alcohol adsorbed onto silicon dioxide powder and contained in a sachet made of a copolymer of paper and ethyl vinyl acetate. The sachet releases ethanol vapor at a concentration ranging from 0.5 to 2.5% (v/v), which acts as an antimicrobial agent when condensing on the food surface. Vanilla and other compounds are used to mask the alcohol flavour. Ethicap ® has several advantages: (i) ethanol vapor can be generated without spraying ethanol solutions directly onto products prior to packaging; (ii) sachets can be conveniently removed from packages and discarded at the end of the storage period; and (iii) low cost. Franke and colleagues (2002) [56] also reported the use of Ethicap ® in the packaging of pre-baked buns (aw = 0.95). They found that packaging into gamma-sterile PE-LD bags with Ethicap ® delayed mould growth for 13 days at room temperature. Previously, Smith and colleagues (1990) had observed that ethanol vapor generators were effective in controlling 10 species of moulds, including Aspergillus and Penicillium species, 15 species of bacteria, including Salmonella, Staphylococcus and Escherichia coli, and species of spoilage yeast [57]. More recently, ethanol emitters have been used in combination with essential oils. Koukoutsis and colleagues (2004) [58] evaluated water-ethanol (WE) and mastic oil-ethanol (ME) emitters to control the growth of microorganisms in high-moisture and high-pH bakery products. Besides preventing or delaying bread spoilage, ethanol is effective against bread staling, since it acts as a plasticizer of the protein network of the bread crumb [53].
Antibiotics might also be used as antimicrobials, but they are not approved for antimicrobial packaging functions, and their use is also controversial due to the development of resistant microorganisms.
Unfortunately, no antimicrobial agent works effectively against all spoilage and pathogenic microorganisms. As a consequence, the properties of the target microorganisms, such as their oxygen requirement (aerobes and anaerobes), the composition of the cell wall (Gram-positive and Gram-negative), the growth stage they are at (spores and vegetative cells), the optimal temperature for growth (thermophilic, mesophilic and psychrotrophic) and their resistance to acid/osmosis, are fundamental for selecting the most appropriate antimicrobial agent.
Nanotechnology Application in Active Packaging
Currently, the food industry is pioneering the application of nanotechnology in active food packaging, in order to extend food shelf-life and improve food safety (Figure 1). Silver nanoparticles, metal oxide nanoparticles (such as titanium dioxide (TiO2), zinc oxide (ZnO) and magnesium oxide (MgO)) and metal hydroxide nanoparticles (such as calcium hydroxide (Ca(OH)2) and magnesium hydroxide (Mg(OH)2)) have been used in antimicrobial food packaging applications [65].
Nanotechnology has been applied in the production of nanocomposites and in the encapsulation of active compounds.
Nanocomposites are multiphase materials characterized by a polymer (continuous phase) merged with a nano-dimensional material (discontinuous phase) that can come in the form of inorganic or organic fibers, flakes, spheres or particulates, commonly referred to as "fillers" [54,55]. Hence, nanocomposites are a fusion of traditional packaging polymers with nanoparticles.
Generally speaking, the inclusion of fillers at the nanoscale improves the mechanical strength of food packaging materials and reduces their weight. Nanocomposites also have improved barrier ability against oxygen, carbon dioxide, ultraviolet radiation, moisture and volatiles. In addition, they may (i) let air and other gases out but not in, (ii) degrade ripening gases, such as ethylene, and (iii) have antimicrobial activity [56,[58][59][60][61][62][63]. Hence, nanocomposites may be used to extend food shelf-life, thus reducing the addition of man-made preservatives in foods.
The mechanism of action of silver nanoparticles on microorganisms and moulds has been investigated, and it has been shown that they can penetrate the outer and inner membranes of the cells, disrupting barrier components such as lipopolysaccharides and proteins. Their antimicrobial activity has been attributed to their ability both to inhibit respiratory chain enzymes and to disrupt the normal DNA replication and cellular protein activation processes [62][63][64][65][66][67][68][69]. Moreover, silver nanoparticle antimicrobial activity is due to their ability to produce reactive oxygen species that cause oxidative stress to microbial cells [70].
Silver nanoparticles have also been integrated or combined into systems used for bacteria inactivation and have been used in antifouling applications. Orsuwan and colleagues (2016) [71] integrated silver nanoparticles in agar and banana powder films, obtaining composite systems that exhibited antimicrobial activity against food-borne pathogenic bacteria, such as E. coli and Listeria monocytogenes. Kanmani and colleagues (2014) [72] incorporated silver nanoparticles in gelatin and found that bacterial pathogens, such as S. typhimurium, L. monocytogenes, E. coli, S. aureus, and B. cereus, were significantly inhibited in a dose-dependent manner. In detail, Gram-negative S. typhimurium was found to be the most susceptible to silver nanoparticles, followed by Gram-positive B. cereus and S. aureus. Pathogens such as L. monocytogenes and E. coli were less susceptible to the silver nanoparticles in the gelatin films.
Silver nanoparticles have also been integrated in graphene oxide, and the resulting surfaces were found to inhibit up to almost 100% of bacterial attachment [73].
Silver nanoparticles anchored on common surfaces, such as glass, also inhibit the formation of biofilms [74]; hence, they have been used as antifouling systems.
Clay and silicates have also been used as nanoparticles in the production of intercalated and exfoliated nanocomposites [75]. The former have a multilayered structure with alternating polymer/filler layers lying a few nanometers apart, while in the latter the filler layers are delaminated and randomly dispersed in the polymer matrix [76]. In either case, the presence of the filler in the polymer increases the tortuosity of the diffusive path for a penetrant molecule, hence providing the material with excellent barrier properties [59]. Montmorillonite, a hydrated alumina-silicate layered clay consisting of an edge-shared octahedral sheet of aluminum hydroxide between two silica tetrahedral layers, is the most widely studied type of clay filler [59]. Agarwal and colleagues (2014) [77] compared the shelf-life of bread stored in polypropylene films (control packaging) and in polypropylene films coated with montmorillonite-nylon 6 (MMT-N6) nanofibres. They determined the fungal and microbial growth at the end of the 5th storage day and observed fungal growth on bread packed in control packets, while no growth was found in test packets. As far as the microbial count is concerned, bread samples packed in polypropylene packets showed 2.9 × 10⁴ CFU/g at the end of 5 days, while in nanocoated packets the microbial count was 92 CFU/g. At the end of the 7th day of storage, control bread showed microbial counts in the range of 7.25 × 10⁴ CFU/g, while bread packed in MMT-N6 coated packets showed 230 CFU/g. Hence, the use of MMT-N6 coated films enables bread shelf-life to be increased by almost 2 days, which is quite significant for both industry and consumers. Montmorillonite has also been used in combination with silver nanoparticles to extend the shelf-life of foods other than bread.
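As a rough illustration of the counts above, both sampling days correspond to about a 2.5-log10 reduction in microbial load for the nanocoated packets relative to the control. A minimal sketch of that calculation (using only the CFU/g values quoted from Agarwal and colleagues; the function name is ours, not from the study):

```python
import math

def log_reduction(control_cfu, treated_cfu):
    """Log10 reduction of microbial load (CFU/g) in treated vs. control samples."""
    return math.log10(control_cfu / treated_cfu)

# Counts reported for bread in plain polypropylene packets (control)
# vs. MMT-N6 nanocoated packets.
day5 = log_reduction(2.9e4, 92)    # day 5: 2.9 x 10^4 vs. 92 CFU/g
day7 = log_reduction(7.25e4, 230)  # day 7: 7.25 x 10^4 vs. 230 CFU/g

print(f"Day 5: {day5:.2f} log10 reduction")  # ~2.5
print(f"Day 7: {day7:.2f} log10 reduction")  # ~2.5
```

The near-identical reductions on both days suggest the coating suppresses growth by a roughly constant factor over this storage period.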
Silver nanoparticles have been widely used in food packaging, even combined with metal oxides. Cozmuta and colleagues (2014) [78] investigated the effect of silver/titanium dioxide (Ag/TiO 2 )-based packaging on bread shelf-life and found that the proliferation of yeasts/moulds, B. cereus and B. subtilis was reduced compared to bread stored in an open atmosphere or in a common plastic package. Moreover, they found that the degradation rate of the main nutritional compounds also decreased. More recently, Peter and colleagues (2016) [79] investigated the possibility of using paper packages modified with Ag/TiO 2 -SiO 2 , Ag/N-TiO 2 and Au/TiO 2 to extend white bread shelf-life. They found that Ag/TiO 2 -SiO 2 and Ag/N-TiO 2 papers enabled bread shelf-life to be extended by 2 days, while no effect was observed using Au/TiO 2 paper.
Silver nanoparticles have also been included in polypropylene food containers, such as FresherLonger™ Plastic Storage Bags and FresherLonger™ Miracle Food Storage. These were reported to keep bread, as well as fruits, vegetables, herbs, cheeses, soups, sauces, and meats, fresh 3 or even 4 times longer, and to reduce bacterial growth by 98% compared to conventional food containers [80].
Nanoencapsulation of antimicrobial agents is another example of the application of nanotechnology to extend the shelf-life of bakery products. Generally speaking, encapsulation protects antimicrobial compounds against chemical reactions and undesirable interactions with food components, and controls their delivery [81]. Compared to microencapsulation, which guarantees the protection of antimicrobial compounds against degradation or evaporation, the high surface-area-to-volume ratio of nanoencapsulation systems enables the antimicrobials to be concentrated in the food areas where microorganisms are preferentially located [82].
Nanoencapsulation has also been used to obtain antimicrobial packaging systems. This technology has been applied to essential oils, which can act as potent antimicrobials, may present antifungal activity and/or have antioxidant properties. A major drawback in using essential oils is that they must be added in small amounts to foods to prevent the deterioration of food sensory properties; the nanoencapsulation of essential oils makes it possible to overcome this issue. It consists in coating essential oils within another material at the nanoscale, in order to increase their protection, reduce evaporation, promote easier handling, and control their release during storage and application. The combination of essential oils with paper, or with edible films based on milk proteins, chitosan, or alginates, has been experimented with. Otoni and colleagues (2014) [83] reported the incorporation of micro- and nanoemulsions of clove bud (Syzygium aromaticum) and oregano (Origanum vulgare) essential oils into methylcellulose films in order to extend sliced bread shelf-life. They studied mould and yeast growth over 15 days and found that, at 15 days of storage, bread placed in the antimicrobial film made from methylcellulose and nanoemulsions of clove bud and oregano essential oils showed the lowest count of yeasts and moulds, followed by a bread sample with a commercial antifungal added (sorbic acid, calcium propionate, ethanol, and alcohol) and then by a bread sample placed in oil-free methylcellulose film/metalized polypropylene bags (which were sealed and stored at 25 ± 2 °C in an effort to simulate the usual commercialization conditions of bakery products).
In 2011, Gutierrez and colleagues [84] investigated the effect of an active packaging with a cinnamon essential oil label, combined with MAP, on the shelf-life of gluten-free sliced bread. They found that the active packaging considerably increased the shelf-life of the packaged food while maintaining the sensory properties of the gluten-free bread.
Souza and colleagues (2013) [85] investigated the effect of different amounts of cinnamon essential oil on the antimicrobial activity and the mechanical and barrier properties of films made of cassava starch, glycerol and clay nanoparticles, and found that all films showed effective antimicrobial activity against P. commune and E. amstelodami, fungi that are commonly found in bread products.
Safety Concerns of Active Packaging and Nanotechnology Application in Food Products/Legislation
EC Regulation No. 1935/2004 sets out the general principles of safety and inertness for all Food Contact Materials, including food packaging [86]. The principles set out in the above-mentioned regulation require that materials do not release their constituents into food at levels harmful to human health and do not change food composition, taste or odour in an unacceptable way.
By design, however, active packaging is not inert, and may release or absorb substances to or from the food or its surrounding environment. Hence, active packaging is exempted from the general inertness rule of Regulation (EC) No. 1935/2004 and is regulated by EC Regulation No. 450/2009 [45]. The released substance has to be authorized by food legislation and should undergo a safety assessment by EFSA before being authorized; moreover, it can be released only in authorized quantities. Regulation (EC) No. 450/2009 also foresees the establishment of a list of substances permitted for the manufacture of active materials.
As far as the application of nanotechnology to food packaging is concerned, little is known about the fate and toxicity of nanoparticles, and there is an urgent need for specific guidelines for the testing of nanofoods.
In 2011, the European Food Safety Authority (EFSA) published a scientific opinion (EFSA Scientific Committee, 2011) intended as a practical approach to assessing the potential risks of nanomaterial applications to the food and feed chain. In the above-mentioned document, EFSA stated that data about the interaction between nanomaterials and food matrices, the behaviour of nanomaterials in the human body, and methods to determine such interactions and behaviours are missing, despite their relevance for risk assessment. FAO and WHO also jointly developed a technical paper on the state of the art of the initiatives and activities relevant to the risk assessment and risk management of nanotechnologies in the food and agriculture sectors. In the document, national and international scientific (i.e., risk assessment related) and regulatory (i.e., risk management) activities on applications of nanotechnology in food and agriculture were reviewed, in order to set the context for future needs and perspectives.
The difficulty of characterizing, detecting and measuring nanoparticles, alone and in complex matrices such as food and biological samples [87], leads to a lack of exhaustive and complete toxicological data.
Conclusions and Future Perspectives
Active packaging is an emerging area of food technology that can confer many preservation benefits on a wide range of food products. The main goal of active packaging systems is to maintain sensory quality and extend the shelf life of foods while maintaining nutritional quality and ensuring microbial safety. At the same time, this results in a decrease of food waste.
Nanocomposite films display antimicrobial properties thanks to antimicrobial agents and because of their improved structural integrity, which results from the barrier properties created by the addition of nanofillers. In addition to their use as packaging materials, nanocomposites can also be used as delivery systems by helping the migration of functional additives, namely antimicrobials. Nanoclays can be used as carriers for the active agents as well. Further developments of active packaging and the application of nanotechnology to packaging will depend on safety issues and consumer acceptance.
So far, the application of nanotechnology to extend the shelf-life of bread and bakery products has been poorly exploited; however, it might make it possible to retain the organoleptic properties of bread, especially of GF bread, and to reduce bread spoilage, thus reducing bread waste.
The main advantages of active packaging for bakery product distributors are increased stock rotation cycle times and extension of the geographic distribution network. Consumers can also benefit from active packaging, since bakery products can be stored unrefrigerated for a longer time and are ready anytime as a fresh-tasting meal or snack.
Sourdough fermented with anti-fungal and anti-mould strains of LAB is also an area of increasing focus, besides allowing the production of GF bread with enhanced nutritional value, quality and safety. Moreover, it is in line with consumers' quest for "natural" products, that is, products containing fewer additives. Promising results have also been obtained by the application of other alternative bio-preservation techniques, including the utilization of antifungal peptides and plant extracts. The latter can also be added to bread formulations or incorporated in antimicrobial films for active packaging of bread.
Fermentation 2018, 4, x, 11 of 18
Figure 1. Application of nanotechnology and mechanism of action of active packaging in extending bread and GF bread shelf-life, and increasing food safety.
Table 1. Major causative agents of bread microbial spoilage. | 2018-09-16T21:42:46.666Z | 2018-02-02T00:00:00.000 | {
"year": 2018,
"sha1": "4b9555c6a1a8fd7145b45a4a418afb6287080df3",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2311-5637/4/1/9/pdf?version=1517561051",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "4b9555c6a1a8fd7145b45a4a418afb6287080df3",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
214802909 | pes2o/s2orc | v3-fos-license | Stylistic Dialogue Generation via Information-Guided Reinforcement Learning Strategy
Stylistic response generation is crucial for building an engaging dialogue system for industrial use. While it has attracted much research interest, existing methods often generate stylistic responses at the cost of content quality (relevance and fluency). To enable a better balance between content quality and style, we introduce a new training strategy, known as Information-Guided Reinforcement Learning (IG-RL). In IG-RL, a training model is encouraged to explore stylistic expressions while being constrained to maintain its content quality. This is achieved by adopting a reinforcement learning strategy with statistical style information guidance for quality-preserving explorations. Experiments on two datasets show that the proposed approach outperforms several strong baselines in terms of the overall response performance.
Introduction
Most early research on dialogue response generation focused on generating grammatically correct and contextually relevant responses (Ritter et al., 2011; Chen et al., 2017; Martinovsky and Traum, 2003). While good performance has been achieved (Wen et al., 2016; Wang et al., 2016), syntactically coherent responses alone do not guarantee an engaging and attractive chatbot. In practice, from an industrial point of view, we found that if a chatbot possesses a certain language style that is consistent with its basic character (male, female, optimistic, humorous), users' satisfaction and the average number of interaction rounds can be notably improved (Song et al., 2019a).
While the definition of language style can be specified in different contexts (Roberts, 2003;Bell, 1984;Bell and Johnson, 1997;Niederhoffer and Pennebaker, 2002;Traugott, 1975), our work refers to language style as any characteristic style of expression, from a purely computational standpoint. For example, gender preference can be regarded as one kind of language style. Considering a conversation context "Let's go out of town to relax this weekend!", it is good for a chatbot with male preference to respond like "That's great bro. I will go with my buddies together!" and with female preference to respond like "That's so sweet of you. I will bring my besties!". Besides gender preference, our work is also in line with previous work on dialogue generation with emotion (Zhou and Wang, 2018;Zhong et al., 2019); response attitude (Niu and Bansal, 2018), and speaker personality (Li et al., 2016b).
The majority of the existing approaches for stylistic response generation (Li et al., 2016b; Zhou and Wang, 2018; Zhong et al., 2019; Song et al., 2019b) take the style information as an additional input to the generation model and maximize the probability of generating a response given the input query. However, these methods require large parallel corpora (consisting of conversation pairs with specified styles) and often tend to output dull and generic responses (Li et al., 2016a). As an alternative, reinforcement learning (RL) can provide a more efficient way to optimize the style expressions contained in the generated responses (Niu and Bansal, 2018). Typically, a style-classifier is adopted as a reward agent to evaluate the style score of generated responses, and the generation model is then optimized to generate responses with higher scores.
However, the RL framework could overemphasize the expression of style at the cost of response quality, because during the RL process the generation model could learn to fool the style-classifier using simple stylistic patterns. We show some examples from our preliminary experiments in Table 1. As observed, the RL-based approach first generates generic-style text and then appends a simple phrase "I like him" to express a female style, as this phrase receives a high score from the female style classifier. Such tricks bring a seemingly high style score but significantly harm the content quality (relevance and fluency). A satisfactory stylistic response should express the desired style on the premise of maintaining high response quality, as with the last row of Table 1.
To address this, we propose a new information-guided reinforcement learning (IG-RL) strategy to better balance the trade-off between stylistic expression and content quality. Our key idea is to restrict the exploration space of the generation model during training, preventing it from collapsing to trivial solutions. Specifically, we separate the vocabulary into two sets, stylistic words and neutral words, according to the point-wise mutual information (PMI) (Church and Hanks, 1990) between words and styles. At the training stage, given the reference response, the model is constrained to maintain the tokens at the positions of neutral words. On the other hand, at the positions of stylistic words, the model is allowed to freely explore the entire vocabulary space to search for words that maximize the reward of the style-classifier and are coherent with the surrounding context. In this way, the generation model learns to generate possible stylistic expressions while maintaining high response quality. To facilitate future research in this area, we introduce a new large-scale gender-specific dialogue dataset. Experimental results on this new dataset and another public benchmark dataset demonstrate that the proposed approach fosters dialogue responses that are both stylistic and high in quality. It outperforms standard RL models and other strong baselines in terms of the overall response quality.
In summary, the contributions of this work are: (i) A novel training strategy to train a model to generate stylistic responses under the reinforcement learning paradigm. This strategy can properly balance the trade-off between style expression and content quality via an information-guided learning process. Human evaluation shows that the proposed approach can generate responses with both high content quality and desired styles. It significantly outperforms existing methods. (ii) A new gender-specific dialogue dataset which contains over 4.5 million query-response pairs. To the best of our knowledge, this dataset is the first one focusing on gender-specific dialogue generation and can greatly facilitate further work in this area.
Related Work
Stylistic dialogue response generation has been an active research area in recent years. Li et al. (2016b) proposed a model that represents personality as embeddings and incorporates it into the decoder of a seq2seq model. Other work appended emotion embeddings to decoder states to generate responses with desired emotions. Zhong et al. (2019) proposed to embed VAD information into words to control the generation of emotional responses. Emotion category embeddings, an internal emotion state and an external emotion memory have also been used for emotional dialogue generation. However, explicitly incorporating style information into the model configuration may significantly bias the generation process and cause a drastic drop in response quality.
For RL-based methods, Niu and Bansal (2018) trained an attitude classifier as the reward agent to guide the learning process. However, due to the nature of unconstrained sampling, the dialogue agent could learn a simple behaviour that expresses the desired style. As a result, little knowledge is actually learned by the model during the training stage, which further undermines the quality of the generated responses.
It should be noted that the stylistic dialogue generation is different from the task of text style transfer. Text style transfer aims to rewrite the input sentences such that they possess certain styles, while rigorously preserving their semantic content (Jin et al., 2019). On the other hand, stylistic dialogue generation does not aim to preserve the semantic meaning of the input sentences. Instead, it aims to generate responses that are adequate to the input query, while expressing pre-specified styles.
Background
RL-based systems (Niu and Bansal, 2018) first train a style-classifier on an annotated dataset as a reward agent. In the training stage, the dialogue generation model generates a response and observes a style score from the reward agent. The parameters of the generation model are optimized to maximize the expected style score.
The learning objective is typically defined as

$$\mathcal{L}_{RL}(\theta) = -\big(r_s(\bar{Y}) - b\big)\sum_{t=1}^{N} \log p_\theta(\bar{y}_t \mid \bar{y}_1, \ldots, \bar{y}_{t-1}; X),$$

where p_θ is the policy (probability distribution) defined by the model parameters θ, s is the desired style, and r_s is the score of the desired style provided by the style-classifier. Ȳ = (ȳ_1, ..., ȳ_N) is the sampled response and ȳ_t is the token sampled at time step t. The baseline b is used to reduce the variance in the training process. Typically, techniques like Monte-Carlo or top-k sampling (Paulus et al., 2018) are used to generate the response Ȳ during training. We refer to these approaches as unconstrained sampling, since the generated response is solely drawn from the distribution p_θ that is defined by the model parameters. Therefore, the model is allowed to freely explore the entire vocabulary to learn a policy (probability distribution) that optimizes the predefined reward.
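For a single sampled response, this objective reduces to scaling the sequence log-likelihood by the baseline-adjusted reward. A minimal sketch follows; the function name and the plain-Python types are illustrative, not from the paper:

```python
def reinforce_loss(token_log_probs, style_reward, baseline):
    """Single-sample REINFORCE loss with a baseline:
    L = -(r_s - b) * sum_t log p_theta(y_t | y_<t, X).
    A positive advantage (r_s - b) pushes the sampled tokens'
    probabilities up; a negative one pushes them down."""
    advantage = style_reward - baseline
    return -advantage * sum(token_log_probs)
```

For example, with r_s = 0.8, b = 0.3 and per-token log-probs [-1.0, -2.0], the loss is -(0.5)(-3.0) = 1.5; minimizing it raises the likelihood of the rewarded sample.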
However, conducting efficient exploration is very hard with unconstrained sampling since the search space is exponentially large, so only frequent patterns which match the reward function are reinforced, as shown in Table 1. Another example is provided in Figure 1. When learning to generate male responses, the model learns a simple mechanism that generates a typical male-style phrase "I am a man" at the end of the response to "cheat" the reward agent and thus acquire a high male score. Obviously, the learned policy is not ideal since little knowledge other than this simple behaviour is actually learned by the model, and the generated responses can hardly satisfy the users.
Information-Guided Reinforcement Learning
In the stylistic dialogue generation task, the training data can be formulated as (X, Y, S), where X is the set of input queries, Y is the set of responses and S is the set of all possible styles. Each data instance follows the format (X, Y, s), where X = (x_1, ..., x_T) is the input query, Y = (y_1, ..., y_N) is the reference response and s ∈ S is the style of the reference response Y.
To address the problem of unconstrained sampling introduced in §3, we propose a new training strategy which uses PMI information to guide the training of the generation model under the reinforcement learning framework. As illustrated in Figure 2, during training, stylistic and styleless words are first identified according to the PMI information. Then the model learns to generate the same words as the reference response ("My", "and", "friends" in Figure 2) at the positions of styleless words, and is set free to explore stylistic expressions ("wife", "her" in Figure 2) to maximize the expected style score at the positions of stylistic words. Finally, the model parameters are updated via the REINFORCE algorithm (Sutton et al., 1999). During inference, the model directly generates stylistic responses without any external signals. We denote the proposed training strategy as Information-Guided Reinforcement Learning (IG-RL) since its training policy is guided by external information rather than sampling from the entire action space (the entire vocabulary set) in an unconstrained manner.
Stylistic Words Indication
To indicate whether a token x is stylistic or not given the style s, we use the pointwise mutual information (PMI) (Church and Hanks, 1990), which is defined as

$$\mathrm{PMI}(x; s) = \log \frac{p(x, s)}{p(x)\, p(s)},$$

where p(x, s) is the frequency with which the word x appears in a response with style s in the training corpus. We define a word x as stylistic given the style s if PMI(x; s) ≥ t_s. In the experiments, we empirically set $t_s = \frac{3}{4} \times \max_{v \in V} \mathrm{PMI}(v; s)$, where V is the whole vocabulary set.
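As a sketch, the PMI scores and the 3/4-of-maximum threshold can be computed over a toy corpus of (tokens, style) pairs. The exact counting scheme below (per-token counts for both the marginals and the joint) is an assumption, since the paper only specifies the PMI formula and the threshold; helper names are illustrative:

```python
import math
from collections import Counter

def stylistic_words(corpus, style, threshold_ratio=0.75):
    """Flag stylistic words for `style` via PMI(x; s) = log(p(x,s)/(p(x)p(s))),
    keeping words with PMI >= threshold_ratio * max PMI (the paper's 3/4 rule).
    `corpus` is a list of (tokens, style) pairs."""
    word_counts, style_counts, joint_counts = Counter(), Counter(), Counter()
    total = 0
    for tokens, s in corpus:
        style_counts[s] += len(tokens)
        for tok in tokens:
            word_counts[tok] += 1
            joint_counts[(tok, s)] += 1
            total += 1
    scores = {}
    for w in word_counts:
        if joint_counts[(w, style)] > 0:  # PMI undefined for zero joint count
            p_x = word_counts[w] / total
            p_s = style_counts[style] / total
            p_xs = joint_counts[(w, style)] / total
            scores[w] = math.log(p_xs / (p_x * p_s))
    t_s = threshold_ratio * max(scores.values())
    return {w for w, v in scores.items() if v >= t_s}
```

On a two-sentence toy corpus where "wife" occurs only in male responses, only "wife" crosses the male threshold while shared words like "my" score a PMI of zero.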
Constrained Sampling
During the RL training stage, we impose dynamic constraints on the sampled response, which is then propagated to the pre-trained classifier to acquire a reward signal. Given a reference response Y = (y_1, ..., y_N), the PMI between the tokens and the styles is adopted to determine which tokens are stylistic. At sampling step t, if y_t is neutral (styleless), then the model is constrained and only permitted to sample y_t. Otherwise, the model is permitted to freely sample a new token from the entire vocabulary set.
The neutral words in the sampled response construct a neutral training skeleton that is closely related to the query. Based on this training skeleton, the model learns to express the desired style by sampling at the positions selected by PMI. An illustration can be found in the right part of Figure 2, where the model learns to generate male responses. In this example, in the reference response "My husband and his friends", "husband" and "his" are denoted as stylistic words. By masking these stylistic words, a neutral training skeleton "My _ and _ friends" is constructed. The model is only permitted to sample new words at the masked positions, and the desired response is "My wife and her friends", which has high content quality and expresses the desired style (male).
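The masking scheme just described can be sketched as follows; `propose` is a hypothetical stand-in for drawing a token from the generator p_θ conditioned on the prefix, not an interface from the paper:

```python
def constrained_sample(reference, stylistic_set, propose):
    """Copy neutral tokens from the reference response verbatim; at positions
    whose reference token is stylistic, let the model propose a new token
    conditioned on the prefix sampled so far."""
    trajectory = []
    for tok in reference:
        if tok in stylistic_set:
            trajectory.append(propose(trajectory))  # free exploration slot
        else:
            trajectory.append(tok)                  # neutral skeleton, kept fixed
    return trajectory
```

With the paper's example, masking "husband" and "his" in "My husband and his friends" and sampling "wife" and "her" at the freed slots yields "My wife and her friends".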
The detailed description of the proposed approach is presented in Algorithm 1, where t s is the style-specific threshold as described in §4.1.
Algorithm 1 Constrained Sampling
Input: input query X = (x_1, ..., x_T); reference response Y = (y_1, ..., y_N); reference response style s.
Output: constrained sampling trajectory Ȳ.
1: Ȳ ← (⟨Start of Sentence⟩)
2: for i = 1 to N do
3:   if PMI(y_i; s) ≥ t_s then sample ȳ_i from p_θ(· | Ȳ; X), else ȳ_i ← y_i
4:   append ȳ_i to Ȳ
5: end for
The learning objective over the constrained trajectory is then

$$\mathcal{L}(\theta) = -\big(r_s(\bar{Y}) - b\big)\sum_{t=1}^{N} \log p_\theta(\bar{y}_t \mid \bar{y}_1, \ldots, \bar{y}_{t-1}; X),$$

where r_s is the score of the desired style s, the baseline b is used to reduce the variance during the training process, and X is the input query.
To optimize this objective, Ȳ should satisfy both the reward agent and the conditional language model. Since the sampling process depends on a neutral skeleton, the model has to learn to sample words that not only express the desired style but are also compatible with their context (the surrounding neutral skeleton).
To stabilize the training process, we incorporate a standard Maximum Likelihood Estimation (MLE) objective. Given the input query X and the reference response Y, the objective is defined as

$$\mathcal{L}_{MLE}(\theta) = -\sum_{i=1}^{N} \log p_\theta(y_i \mid y_1, \ldots, y_{i-1}; X).$$
In addition, the MLE objective tends to train a model that overfits to the training set (Pereyra et al., 2017); therefore the model is less willing to explore other possibilities during the RL training process. To mitigate this side effect, we use label smoothing (Szegedy et al., 2016) as an auxiliary regularization. Instead of using a uniform distribution over all words in the vocabulary as the target, we introduce a new form of target distribution: the bigram frequency distribution acquired from the training corpus, computed as

$$q(v \mid y_{i-1}) = \frac{\#(y_{i-1}, v)}{\sum_{v' \in V} \#(y_{i-1}, v')},$$

where #(y_{i-1}, v) is the bigram count of the tokens y_{i-1} and v in the training corpus, and the corresponding smoothing loss $\mathcal{L}_{LS}$ is the cross-entropy between q and the model distribution. The final learning objective is defined as

$$\mathcal{L}(\theta) = \mathcal{L}_{RL}(\theta) + \alpha\,\mathcal{L}_{MLE}(\theta) + \beta\,\mathcal{L}_{LS}(\theta),$$

where α and β are the weights of the different parts.
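The bigram-frequency target distribution for label smoothing can be sketched as plain counting and normalization. This is a toy illustration of the distribution only; how it enters the loss is not reproduced here:

```python
from collections import Counter, defaultdict

def bigram_targets(corpus):
    """Build q(v | y_prev) = #(y_prev, v) / sum_v' #(y_prev, v') from a corpus
    of token lists, i.e. the normalized bigram counts used as the label
    smoothing target distribution."""
    counts = defaultdict(Counter)
    for tokens in corpus:
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return {prev: {v: c / sum(ctr.values()) for v, c in ctr.items()}
            for prev, ctr in counts.items()}
```

For instance, if "a" is followed by "b" twice and by "c" once in the corpus, the target after "a" is {b: 2/3, c: 1/3}, which always sums to one.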
Datasets
To facilitate future research in this area, we constructed a gender-specific dialogue dataset. As a first step, based on the gender information of users, we collected 100,000 query-response pairs whose responses were generated by female users from Douban. Similarly, we also collected 100,000 query-response pairs from male users. We then hired 6 professional annotators (3 female and 3 male) to further verify the collected results. Because there is no rigorous guideline on how to quantify the gender preference expressed in daily-life conversation, we let the annotators judge the results based on their own understanding. The annotators were asked to assign a female (male) label to a response if it is very unlikely to be uttered by a male (female) user. Otherwise, a neutral label was assigned. To ensure a reasonable annotation, each response was annotated by all six annotators, and we only kept the pairs whose response label was agreed upon by at least five annotators.
From the 200,000 collected pairs, 5,184 responses were annotated as male instances and 10,710 responses were annotated as female instances. To keep the data statistics balanced, we randomly selected 15,000 neutral instances to build a high-quality gender classification dataset. We then fine-tuned a Chinese BERT (Devlin et al., 2019) on the constructed dataset to build a gender classifier, whose classification accuracy is about 91.7%. We further used this classifier to automatically annotate the STC-Sefun dataset (Bi et al., 2019) to obtain a large gender-specific dialogue dataset, whose statistics are shown in Table 2.
For a comprehensive evaluation, in addition to the proposed gender-specific dialogue dataset, we also conduct experiments on a public emotional dialogue dataset .
Implementation Details
The proposed model is implemented using PyTorch (Paszke et al., 2017). We use two-layer LSTMs with 500 hidden units to construct the encoder and decoder of the generation model. The word embedding size is set to 300 and the embeddings are randomly initialized. The vocabulary size is limited to 15,000.
We use Adam (Kingma and Ba, 2015) to optimize our model with a batch size of 64 and a learning rate of 1e-3. For all experiments, we first pretrain a seq2seq model with the MLE objective for 3 epochs on the training set. The learned parameters are then used to initialize the policy network. We set the baseline reward b and the weights α, β in the learning objective to 0.3, 0.2, and 0.25, respectively. Similar to recent work (Fan et al., 2018; Qin et al., 2019), we use top-k sampling during the inference stage with k set to 20.
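Top-k sampling as used at inference can be sketched generically. This is not the authors' code; `logits` is assumed to map candidate tokens to unnormalized scores:

```python
import math
import random

def top_k_sample(logits, k=20, rng=random):
    """Keep the k highest-scoring tokens, softmax over them, and sample one.
    With k=1 this degenerates to greedy decoding."""
    top = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:k]
    zmax = max(score for _, score in top)            # subtract for numerical stability
    weights = [math.exp(score - zmax) for _, score in top]
    r = rng.random() * sum(weights)
    acc = 0.0
    for (tok, _), w in zip(top, weights):
        acc += w
        if r <= acc:
            return tok
    return top[-1][0]  # guard against floating-point rounding
```

Restricting the draw to the k most likely tokens trades a little diversity for a much lower chance of sampling an incoherent low-probability word.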
Compared Models
We compare the proposed approach with several representative and competitive baselines.
Speaker: Model proposed by Li et al. (2016b) which incorporates distributed style embeddings into the structure of decoding cell to control the generation process.
ECM: The model that adopts internal and external memory modules to control the generation of stylistic expressions in the generated responses.
Polite-RL: Approach proposed by Niu and Bansal (2018) which leverages RL to teach the model to generate stylistic responses. In our experiments, we also use BERT as the reward agent.
Ablation Study
IG-RL: The full model proposed in this work. For a fair comparison, we construct our generator using the same structure as the one in Niu and Bansal (2018).
w/o G: In this ablated model, we examine how the guidance provided by PMI knowledge affects the model's performance. To this end, in the RL training stage, instead of using PMI to guide the sampling process, we let the model sample a token with an equal probability of 0.2 or simply copy the corresponding token in the reference response.
Evaluation Metrics
The quality of the responses generated by a dialogue system is well known to be difficult to measure automatically (Deriu et al., 2019); therefore we rely on human evaluation. In our experiments, each response is judged by five independent annotators who are hired from a commercial company.
To prevent possible bias from the annotators, all results are randomly shuffled before being evaluated and are evaluated according to the metrics below.
Quality: This metric evaluates the content quality of the generated responses. The annotators are asked to give a score on a 5-point scale where 5 means a perfectly human-like response (relevant, fluent and informative) and 1 means unreadable.
Style Expression: This metric measures how well the generated responses express the desired style. The annotators give a score on a 5-point scale, where 5 means very strong style, 3 means neutral or no obvious style, and 1 means very conflicting style. A style conflict means the generated style conflicts with the desired one (e.g. female to male, positive to negative emotion).
Ranking: The annotators are further asked to jointly evaluate the content quality and the style expression of the generated responses from the different approaches. The annotators then give a ranking to each result, where top 1 means the best. We measure the agreement of our annotators using Fleiss' kappa (Fleiss et al., 1971): for gender-specific dialogue generation, the results for Quality and Style Expression are 0.442 and 0.654, which indicate "moderate agreement" and "substantial agreement", respectively. For emotional dialogue generation, the results are 0.432 and 0.628, which indicate "moderate agreement" and "substantial agreement", respectively.
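Fleiss' kappa over the annotation matrix can be computed as below, assuming every item is rated by the same number of annotators (five per response in this setup); this is the standard formula, not code from the paper:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for inter-annotator agreement.
    `ratings` is an N x k matrix: ratings[i][j] = number of the n raters
    who assigned category j to item i (n constant across items)."""
    N = len(ratings)
    n = sum(ratings[0])  # raters per item
    k = len(ratings[0])
    # marginal category proportions p_j
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    # per-item observed agreement P_i
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P_i) / N              # mean observed agreement
    P_e = sum(p * p for p in p_j)     # chance agreement
    return (P_bar - P_e) / (1 - P_e)
```

Perfect agreement yields kappa = 1, while near-random splits drive it toward (or below) zero.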
Main Results
The evaluation results are shown in Tables 3 and 4 in which we also present the averaged evaluation scores among different styles.
From the results, we can see that the IG-RL method achieves top-two performance on both the quality metric and the style metric for both datasets. Compared to other methods, it ensures both high quality and the desired stylistic expression. For the ranking metric, which jointly evaluates both content quality and style expression, the proposed approach outperforms all other baselines by a substantial margin. In addition, we also measure the diversity of the generated responses with two automatic metrics, Distinct-1 and Distinct-2 (Li et al., 2016b), and the results show that the IG-RL method generates the most diverse responses among all methods. It can be observed that Polite-RL generally obtains the highest style expression score but performs much worse on the quality and ranking metrics compared to the proposed approach. This confirms our earlier analysis that vanilla RL methods may achieve high style intensity at the cost of content quality.
The performance on individual styles also provides some insights. For the happiness style, the proposed approach achieves the highest scores on all three metrics. The reason is that the happiness style is similar to the neutral style and we have relatively sufficient data. A similar phenomenon can also be found for the female style responses. Therefore, we can conclude that for styles with sufficient data, the proposed IG-RL can achieve high performance on both the quality and style aspects. On the other hand, when stylistic data is limited, it still maintains a good balance between response quality and style expression.
Further Analysis
Here, we present further discussion and empirical analysis of the proposed method.
Style Acceptance
A fundamental requirement of a stylistic dialogue system is to generate responses that do not conflict with the desired style. For instance, generating male-style responses is not acceptable for a female chatbot; likewise, for a positive emotional chatbot (e.g. like, happiness), generating negative responses (e.g. disgust, sadness and anger) is not acceptable. To quantitatively evaluate how acceptable a stylistic dialogue system is, we propose two novel metrics: the human style acceptance rate (H-SAR) and the automatic style acceptance rate (A-SAR). We compute H-SAR based on the style expression scores in the human evaluation. It is defined as the ratio of generated results whose style expression score is greater than or equal to 3. For A-SAR, we use the pretrained style-classifier to compute the ratio of generated responses that display a style which does not conflict with the desired one.
Figure 4: Balance between Quality and Style. The ≥3 ratio means the ratio of responses whose both scores are greater than or equal to 3; the ≥4 ratio means the ratio of responses whose both scores are greater than or equal to 4.
The results are shown in Figure 3, and we can see that H-SAR and A-SAR are highly correlated. Considering the results in Tables 3 and 4, although the proposed approach does not generate responses with the highest style expression score, it is the only system which achieves the best H-SAR and A-SAR performance, suggesting that our system is more robust than others since it makes fewer style conflict mistakes.
Balance between Quality and Style
A satisfactory stylistic dialogue system should express the desired style while maintaining the content quality. Based on the human evaluation metric, 3 is the marginal score of acceptance. So we deem a response as marginally acceptable by actual users when both the quality and style expression scores are greater or equal to 3. On the other hand, 4 is the score that well satisfies the users, so responses with both scores greater or equal to 4 are deemed as satisfying to actual users.
The ratios of both scores ≥ 3 and ≥ 4 are shown in Figure 4, from which we see that our system outperforms all other systems on the ≥3 ratio and the ≥4 ratio. The proposed IG-RL best balances the trade-off between response quality and style expression and therefore generates the most acceptable and satisfying responses.
Ablation Study
We analyze the effect of removing the guidance provided by the PMI signal. Comparing the ablated model (w/o G) with our full model (IG-RL) in Tables 3 and 4, we observe that the quality score is only slightly affected, but the style expression score drops significantly. This demonstrates that although utilizing the reference response helps in maintaining response quality, the guidance provided by PMI information is indispensable for generating stylistic responses.
Case Study
We use an input query that is unseen in both datasets to generate responses with different styles using the different systems (example responses are presented in Table 5). Due to limited space, we only compare the different approaches with respect to genders. For emotions, we present the results of IG-RL only.
We can see that although the result from the Seq2seq approach is relevant, it conflicts with the female style. As for the other compared methods, the memory network-based approaches (Speaker, ECM) can generate responses with the desired style, but these are not very relevant to the input query. The Polite-RL approach first generates a part of the response that relates to the input query and then simply appends a phrase which expresses an intense desired style (e.g. "I like him" in female responses). Across all genders and emotions, only the responses generated by IG-RL generally maintain high content quality and properly express the desired style.
Conclusion
We have proposed a new training strategy that leverages stylistic information as guidance to conduct a quality-preserving learning process. To facilitate future research, we have also constructed and annotated a new dataset for gender-specific dialogue generation. Our experimental results demonstrate that the proposed IG-RL approach outperforms existing baselines in terms of the overall response performance. | 2020-04-07T01:42:47.501Z | 2020-04-05T00:00:00.000 | {
"year": 2020,
"sha1": "24107405a96a53d4c292b08608300a6c7e457ffe",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "24107405a96a53d4c292b08608300a6c7e457ffe",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
208479492 | pes2o/s2orc | v3-fos-license | The Level of Knowledge, Attitude and Practice of the Physicians and Nurses About Suitable Healthcare Personnel (HCP) Attire in Hospitals of Tehran University of Medical Sciences.
Objective: The purpose of this study was to investigate the status of knowledge, attitude and practice of the medical team about suitable "healthcare personnel (HCP) attire". Materials and methods: This is a descriptive study that was approved by the Research Ethics Committee of Tehran University of Medical Sciences and evaluated the knowledge, attitude and performance of physicians and nurses about "healthcare personnel (HCP) attire" by a questionnaire. In order to create the questionnaire, a panel of experts' reviews was set and a questionnaire was made through focus group discussion (FGD). The variables included age, gender, work experience, type of employment time, type of job, education level and type of employee. Results: This study was conducted on 441 physicians and nurses who were working in Tehran University of Medical Sciences. The mean percent KAP score was 72.6 ± 14.3. The scores of the questionnaire were 70.99 ± 14.91 for knowledge, 73.5 ± 13.3 for attitude and 73.7 ± 17.1 for performance. Conclusion: According to this survey, the overall questionnaire score for knowledge, attitude and performance about "healthcare personnel (HCP) attire" is low.
Introduction
Traditionally, professional attire has been an important part of the image of physicians and nurses in culture. Professional attire creates the first impression of the medical team. The purposes of professional dress include neatness, cleanliness and identification; make-up, hairstyle and facial expressions (as parts of a suitable professional appearance) also play roles in public verbal and non-verbal communication.
Correspondence: Nikzad Eisazadeh, Email: n-iesazadeh@tums.ac.ir

Sociologists and psychiatrists have emphasized the great influence of professional dress on a positive relationship, as it displays the style and confidence of the wearer (1)(2)(3)(4)(5)(6). The importance attached to physicians' attire in the doctor-patient relationship can also be seen in the words of Hippocrates, who wrote about professional ethics and attire (6)(7)(8)(9)(10)(11)(12). On the other hand, physicians and nurses are at risk of exposure to a wide range of pathogens through contact with infectious body fluids, injuries, blood, mucous membranes and other sources of infection, so they should use the clothing that best protects them from these hazards (13)(14)(15). Studies by Anastacia (16) in California and by Clavelle, Goodwin and Tivis (17) show that the medical team can attract more respect and attention through their clothes. Despite this, there is little evidence on the knowledge, attitude and performance of physicians and nurses regarding suitable dress characteristics. Noting the lack of such studies, we decided to evaluate the knowledge, attitude and performance of physicians and nurses regarding their healthcare personnel (HCP) attire at Tehran University of Medical Sciences, Iran.
Materials and methods
This descriptive study was approved by the Research Ethics Committee of Tehran University of Medical Sciences (registration number: IR.TUMS.REC.1394.2184) and evaluated the knowledge, attitude and performance of physicians and nurses regarding healthcare personnel (HCP) attire using a questionnaire.
Informed consent was obtained from all participants. To create the questionnaire, a panel of experts was convened and the items were developed through focus group discussion (FGD).
The questionnaire was evaluated by qualitative and quantitative methods; Cronbach's alpha was fair (0.8) and the reliability (question correlation) was calculated at about 92%. For the HCP attire domain items, relevant articles were reviewed and consulted. Fifteen closed items divided into 3 domains were created, with 5 questions each for knowledge, attitude and performance. A Likert scale was used for the answer options with a maximum item score of 5, and the total questionnaire score was expressed on a 100-point scale. All participants completed the questionnaire. The variables age, gender, work experience, ward, type of employment, type of job and education level were also recorded. Data were analyzed with the Statistical Package for the Social Sciences (SPSS version 16, Chicago, IL, USA). The t-test and ANOVA were used for quantitative variables and the chi-square test for qualitative variables. The significance level was set at < 0.05.
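The paper does not give the exact rescaling rule from the 5-point Likert items to the 100-point scale, so the linear mapping below is an assumption; it is a minimal sketch of how such KAP domain scores could be computed.

```python
def kap_scores(answers):
    """Convert 5-point Likert answers (1-5) for 15 items into percent scores.

    `answers` is a list of 15 integers ordered as 5 knowledge, 5 attitude
    and 5 performance items.  The scoring scheme is a plausible
    reconstruction: each domain's raw sum (5-25) is rescaled linearly
    onto a 0-100 scale, and the total is the mean of the three domains.
    """
    assert len(answers) == 15 and all(1 <= a <= 5 for a in answers)
    domains = {}
    for name, items in (("knowledge", answers[0:5]),
                        ("attitude", answers[5:10]),
                        ("performance", answers[10:15])):
        raw = sum(items)                       # between 5 and 25
        domains[name] = (raw - 5) / 20 * 100   # rescale to 0-100
    domains["total"] = sum(domains.values()) / 3
    return domains
```

With this convention, a respondent answering the midpoint (3) everywhere scores 50 in every domain.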
Results
In this study, 441 physicians and nurses working at Tehran University of Medical Sciences were studied. The demographic characteristics of the participants are listed in Table 1. The overall questionnaire scores were 70.99 ± 14.91 for knowledge, 73.5 ± 13.3 for attitude and 73.7 ± 17.1 for performance. Participants' knowledge, attitude and performance according to their demographic characteristics are shown in Table 2.
Discussion
Professionalism and infection control are different aspects of professional attire (18,19). Brandt et al. found that today young physicians and nursing students may choose informal rather than formal attire; in the new millennium the values of the old dress codes remain a concern, and professional attire is regarded as a measure of trust that comforts our patients, although members of the physician team may not be aware that they are ignoring Hippocrates' advice that "they should be well-dressed, anointed sweet smelling and so clean in person" (20). In a study by Gjerdingen et al. (9), physicians' appearance was evaluated by questionnaires across 3 residency programs. The name tag, shirt and dress pants, white coat, dress shoes, nylons, and skirt or dress were the items that drew the most positive reactions as traditional physician attire, whereas negative responses were seen for scrub suits, clogs, sandals, blue jeans and athletic shoes; older participants favored traditional, professional attire more than younger ones did. It is clear that the type of HCP attire has a great influence on confidence in treatment, but the knowledge and attitude of physicians and nurses in hospitals about their clothing is often neglected. In this study of 441 physicians and nurses working at Tehran University of Medical Sciences, the mean KAP score was 72.6 ± 14.3 percent, which shows that physicians and nurses had moderate levels of knowledge, attitude and performance. This may be a problem for protecting themselves from pathogenic agents (13)(14)(15) and may prevent good communication with their patients (21)(22)(23).
In summary, it seems that there is a decline in formal attire among healthcare members today, but patients still prefer a formal appearance over an informal one for healthcare personnel. Medical staff should therefore pay more attention to their attire than their patients do (24)(25)(26).
Conclusion
According to this survey, it can be established that healthcare personnel (HCP) attire is part of the treatment approach, and it should be considered and researched further. Educational programs focusing on ethical approaches and standard precautions are recommended, and changing the dress of medical healthcare staff to more suitable attire is important, as it may also affect the therapeutic doctor-patient interaction.
"year": 2019,
"sha1": "8b46cda5f14b1dffea4cda52e752932ab8a26349",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.18502/jfrh.v13i1.1613",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "37ebc3efc40a2c94a99a7711f8b1f1a02ee15415",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A comparison of fatigue loads of wind turbine resulting from a non-Gaussian turbulence model vs. standard ones
This project, funded by the Federal Ministry of Education and Research within the research group 'Wind turbulence and its significance in the use of wind energy', presents a comparison between the load ranges for horizontal-axis wind turbines resulting from different turbulence models, i.e. between the usual models as defined in the standards and a new model designed by Friedrich and Kleinhans. This should enable an evaluation of the relevance of this new model for wind modelling for wind turbines and, if relevant, provide the community with new tools in wind simulation. Indeed, spectral models do not reproduce well the extreme wind increments met in gusts, since they simulate using purely Gaussian statistics; measurements, however, show that those increments do not follow normal statistics. The new model aims at correcting this problem. The turbulence models used are the Kaimal, von Karman and Mann models as defined in the IEC guidelines and the Friedrich-Kleinhans model, based on stochastic processes called continuous time random walks. The comparison is based on load ranges resulting from an RFC analysis of 100 time series obtained for 100 different seed numbers. Five wind speeds are investigated. The aeroelastic code used is FLEX5. The main conclusion that can be drawn from this study is that the non-Gaussian Friedrich-Kleinhans model produces loads that are significantly different from the loads obtained with the Kaimal model. This proves that the form of the tails of the increment distribution has a major influence on the loads of the wind turbine and should be considered when making fatigue calculations.
Increment distribution
In this paper, a comparison is undertaken between the load ranges resulting from different turbulence models, i.e. between the usual models as defined in the standards and a new model designed by Friedrich and Kleinhans. This should enable an evaluation of the relevance of this new model for wind modelling for wind turbines and if so, provide the community with new tools in wind simulation.
Indeed, spectral models do not reproduce well the extreme wind increments met in gusts. Those models simulate using purely Gaussian statistics, as can be seen in figure 1. However, measurements show that those increments do not follow normal statistics, as shown in figure 2. The new model aims at correcting this problem. Atmospheric wind measurements show a high variability in space and time and, therefore, cannot be modelled by Markovian stochastic processes. From a mathematical point of view, continuous time random walk processes represent the minimal model that produces non-Markovian data sequences by means of stochastic processes [12]. The wind speed data for figure 2 were surveyed by a cup anemometer at 81 m above ground at a sampling frequency of 1 Hz. From this data pool, data sets with a 10-min mean wind speed of 10±0.25 m/s were selected and analyzed with respect to the increment statistics.
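The increment statistics described above can be computed in a few lines of NumPy. The sketch below is illustrative only (the toy Gaussian surrogate stands in for a spectral-model time series, not for measured data): it histograms normalized increments u(t+lag) − u(t) and builds the Gaussian density they would follow under a purely Gaussian model.

```python
import numpy as np

rng = np.random.default_rng(0)

def increment_pdf(u, lag, bins=41):
    """Empirical density of normalized wind-speed increments u[t+lag] - u[t]."""
    du = u[lag:] - u[:-lag]
    du = (du - du.mean()) / du.std()
    counts, edges = np.histogram(du, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts

# Toy Gaussian process: its increment PDF follows the normal density,
# whereas measured wind data show heavy ("intermittent") tails instead.
u = rng.standard_normal(100_000).cumsum()
x, p = increment_pdf(u, lag=10)
gauss = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # reference normal density
```

Plotting `p` against `gauss` on a log scale is the standard way to expose the fat tails that figure 2 illustrates.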
Contents
First, the turbulence models are presented along with the wind generation method used, and a comparison between these models is shown. Then, the parameters chosen for the wind simulation are given, as well as the load cases that are investigated in this paper. With these time series, the loads are calculated as explained in part 4 and the results are commented on in part 5. In part 6, a conclusion is drawn. The results are shown in the appendix.
Standard models: Kaimal and von Karman
The Friedrich-Kleinhans model is compared with the turbulence models taken from the IEC 61400-1 of 2004/2006 [1] and [7]. The corresponding design load case is NTM (Normal Turbulence Model) as defined in [1]. The edition of 2004 proposes the Kaimal and von Karman models, in the edition of 2006, the von Karman model has been withdrawn and replaced by the Mann model [4]. The Mann model is handled in part 2.3. All the standard models are Gaussian.
The turbulence models are defined by their power spectra S and their coherences Coh, whose equations are given in the standards. The only parameters in these equations are lengths, also precisely defined in the standards. In our case, the hub height is 100 m, which gives integral length scales for the Kaimal model of L u = 170.1 m, L v = 56.7 m, L w = 13.86 m and L c = 73.5 m (edition 2004). In the 2006 edition of the IEC guidelines, those lengths are doubled, as the parameter Λ is 42 m instead of 21 m for hub heights above 60 m. For the Kaimal model, the ratios between the standard deviations of the different wind components are, as defined in the standards, σ 2 = 0.8*σ 1 and σ 3 = 0.5*σ 1 . Being isotropic, the von Karman model has σ 1 = σ 2 = σ 3 .
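The Kaimal spectrum referred to here takes, in the IEC guidelines, the one-sided form S(f) = σ²·(4L/V)/(1 + 6fL/V)^(5/3), which integrates to the variance σ². A small sketch (the wind speed and standard deviation values are illustrative, not from the paper):

```python
import numpy as np

def kaimal_psd(f, sigma, L, V):
    """One-sided Kaimal PSD as parameterized in IEC 61400-1:
    S(f) = sigma^2 * (4 L / V) / (1 + 6 f L / V)**(5/3)."""
    return sigma**2 * (4 * L / V) / (1 + 6 * f * L / V) ** (5 / 3)

# Longitudinal component at hub height 100 m (2004-edition length scale)
V, sigma = 10.0, 1.5
f = np.linspace(1e-3, 5.0, 2000)
S_u = kaimal_psd(f, sigma, L=170.1, V=V)

# Integrating the PSD over frequency recovers (approximately) the variance;
# the small deficit comes from truncating the frequency axis.
var = float(np.sum(0.5 * (S_u[1:] + S_u[:-1]) * np.diff(f)))
```

The closed-form check ∫₀^∞ S(f) df = σ² follows from the substitution x = 6fL/V, which is why the numerical integral lands just below σ².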
Those models have been integrated in Windsim7, the wind generator of Flex5.
The spectrum and coherence have been plotted in the following figures.
Wind generation method: Sandia/Veers
The wind is generated with Windsim7 based on the Sandia-Veers Method [2]. Here, the wind components u, v, w are simulated separately, i.e. no u-w correlation is taken into account, which is physically not correct, as measurements hint at such a correlation. The wind field is first described in a spatial and spectral space to be then integrated by an FFT to get wind speeds with time dependency.
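As a rough illustration of the Sandia/Veers idea (not of the Windsim7 implementation), the sketch below factorizes a toy cross-spectral matrix with a Cholesky decomposition at each frequency, applies independent random phases, and inverse-FFTs the correlated Fourier amplitudes into time series. The single pairwise coherence value per frequency and the PSD shape are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def veers_field(S, coh, freqs, n_points):
    """Minimal Sandia/Veers sketch for one velocity component.

    S[j, m] : target PSD of point j at frequency freqs[m]
    coh[m]  : coherence between any pair of points at freqs[m] (toy:
              identical for every pair, so the matrix is easy to build)
    Returns a (n_points, 2*len(freqs)) array of time series.
    """
    n_f = len(freqs)
    V = np.zeros((n_points, n_f), dtype=complex)
    for m in range(n_f):
        # Cross-spectral matrix: off-diagonal S_jk = coh * sqrt(S_j S_k)
        amp = np.sqrt(S[:, m])
        Smat = coh[m] * np.outer(amp, amp)
        np.fill_diagonal(Smat, S[:, m])
        H = np.linalg.cholesky(Smat)            # lower-triangular factor
        phases = np.exp(2j * np.pi * rng.random(n_points))
        V[:, m] = H @ phases                    # correlated Fourier amplitudes
    # The inverse FFT turns the spectral description into time series
    spec = np.concatenate([np.zeros((n_points, 1)), V], axis=1)
    return np.fft.irfft(spec, axis=1)

freqs = np.linspace(0.01, 5.0, 256)
S = np.tile(1.0 / (1 + freqs) ** (5 / 3), (3, 1))   # toy Kaimal-like PSD
coh = np.exp(-freqs)                                # toy exponential coherence
u = veers_field(S, coh, freqs, n_points=3)
```

Because `coh[m] < 1`, each frequency matrix is positive definite and the Cholesky factor exists; real codes build the full point-pair coherence matrix from the IEC coherence function instead.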
The Mann model
The main advantage of the Mann model is that it calculates a u-w correlation. The Mann model should then be a better representation of a turbulent wind.
Wave vector space.
In opposition to the spectral Sandia/Veers method, the Mann model considers a space of wave vectors, which are a way of representing the turbulent eddies. The wave vector k = (k 1 , k 2 , k 3 ) is built from the components in the three spatial directions: k 1 (longitudinal), k 2 (transversal) and k 3 (vertical). One can understand them as a measure of the size of the eddies in each direction, from the biggest eddies for the smallest k i to the smallest eddies for the biggest k i . The k i are defined by k i = i·2π/L i , with L i the length scale, which logically represents the biggest eddy in the considered direction and also the size of the box one wants to simulate. We therefore have L 1 = U·T, with U the mean wind speed and T the simulation time. The simulation of the time series with the Mann model happens in this wave vector space, and the wind speeds are expressed in a spatial space (x,y,z) only through the final FFT. We eventually obtain, for each point of the grid where a turbulence field is wanted, the value of the wind speed in the three directions u, v and w. Taylor's frozen turbulence hypothesis authorizes such a procedure of translating a turbulence field through the rotor plane: it is equivalent to calculate a spatial turbulence field with no consideration of time steps and to calculate turbulence values at the same spatial position (the rotor plane) for different time steps. The first method is used here, the latter in the Sandia/Veers method. The Sandia/Veers method does a Fourier transform from frequencies to time steps; the Mann method does a Fourier transform from wave numbers to spatial steps.
Spectral tensor.
The Mann model starts from the spectral tensor Φ calculated with the energy spectrum E(k) suggested by von Karman. The spectral tensor is a more natural and direct representation of the turbulent flow, but does not contain more information than the cross-spectra. It contains all second-order statistics that are needed to generate time series. Von Karman's tensor is isotropic; this isotropic tensor is then deformed by the vertical shear to give an anisotropic description of the turbulence in the wave vector space (k 1 , k 2 , k 3 ). The way the tensor is obtained is described in [3], involving a linearization of the Navier-Stokes equations and a modelling of eddy lifetimes, and the equations of the tensor components Φ ij (k 1 ,k 2 ,k 3 ) are given in [7]. A parameter Γ determining the anisotropy of the tensor has to be chosen, and Mann proposes in [3] the value 3.9, obtained from a fit with the Kaimal spectrum. Γ is used to calculate β, representing the eddy lifetime.
The following figures show the spectrum and the coherence for Γ=3.9, L=29.4m.
Wind field.
The Mann model obtains from the spectral tensor a stochastic wind field. This latter can be represented in terms of a generalized stochastic Fourier-Stieltjes integral. The equations and process are to be found in [3], [8] and [9] and shall not be developed here.
A few parameters.
The values of Γ and L have been taken equal to those proposed in the guidelines, i.e. Γ=3.9 and L=29.4m for a hub height of 100m. The discretisation of the space has been done with N1=4096 and N2=N3=32 points. The lengths L2 and L3 are taken equal to 164m for a rotor radius of about 42m. The time series produced are 614s long, which gives a time step of 0.15s.
Friedrich-Kleinhans model
A new method for the stochastic simulation of wind fields is used [5]. In contrast to the broad class of simulation algorithms in the Fourier domain [2,3], we consider simulation in real space. While spectral properties such as the power spectrum are more difficult to achieve in this domain, it is a more natural representation for intermittent, coherent effects including wind gusts. In [6], Friedrich showed that this type of non-Gaussian statistics can be derived directly from the Navier-Stokes equations.
Continuous Time Random Walk.
The simulation is based on the theory of "continuous time random walks" (CTRW), which forms a generalization of random walk processes [4]. A CTRW x i (t i ) is iteratively defined by x i+1 = x i + η i and t i+1 = t i + τ i , where η i and τ i are the step width and the waiting time, respectively, of step i, and generally are random numbers. By transition from the discrete variable i to a continuous intrinsic time s, the process becomes applicable to physical problems. In this context, the above equations become stochastic differential equations, coupled via the intrinsic time s. The latter equation then corresponds to a mapping of the intrinsic time s onto the physical time t. Figure 7 shows a CTRW; the thin line marks a realization of the process provided by the discrete equations. The thick line forms an expansion to a continuous process that can then be adapted to special needs and, in our case, enables the application to turbulence generation.
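The discrete CTRW recursion described above can be sketched in a few lines. The Gaussian step widths are standard; the Pareto waiting-time law below is a stand-in assumption for the truncated Lévy laws the authors use, chosen only because it is heavy-tailed and easy to sample.

```python
import numpy as np

rng = np.random.default_rng(1)

def ctrw(n_steps, waiting_alpha=1.5):
    """Discrete CTRW: x_{i+1} = x_i + eta_i, t_{i+1} = t_i + tau_i,
    with Gaussian step widths eta_i and heavy-tailed waiting times tau_i
    (Pareto-distributed here, as a stand-in for a truncated Levy law)."""
    eta = rng.standard_normal(n_steps)                 # step widths
    tau = rng.pareto(waiting_alpha, n_steps) + 1e-3    # waiting times > 0
    x = np.concatenate([[0.0], np.cumsum(eta)])
    t = np.concatenate([[0.0], np.cumsum(tau)])
    return t, x

t, x = ctrw(10_000)
```

Plotting `x` against `t` reproduces the qualitative picture of figure 7: long flat stretches (large waiting times) interrupted by bursts of activity.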
Γ ref and Γ ij are Gaussian-distributed, independent random variables. u 0 is the mean wind speed at hub height of the component u over the whole time series. u ref represents the local mean wind speed at hub height and is defined by the first of the two equations; u ref is distributed normally around u 0 . Here we have a difference from the spectral models, which consider a constant mean wind speed with the turbulence fluctuating around this value: in the Friedrich-Kleinhans model, the local mean wind speed varies with time. The diffusion matrix D ij contains the information on the correlation of two points i and j of the grid; the correlation is considered to decay exponentially with the Euclidean distance of the grid points. Finally, the constants α i govern the height profile, which is logarithmic.
With these equations, the wind components are simulated for the intrinsic time s. The last step is to transform this intrinsic time into a physical time t. This is done by an independent stochastic process with the following mapping equation, by which the intermittencies are simulated.
with τ(s)>0 a stochastic variable. So far, truncated and skewed Levy distributions have been applied, although other classes, such as log-normal distributions, would also be feasible. An example of Levy distributions for different values of the stability parameter α is shown in figure 8. For an algorithm for the numerical simulation of Levy-distributed random variables, we refer to [11]. Figure 9 shows the distribution of increments: it is no longer Gaussian and fits the measurements much better than the spectral models do. Therefore, the probability of extreme wind gusts is better represented.
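Lévy-stable random numbers like those behind figure 8 can be drawn with the Chambers-Mallows-Stuck method. The sketch below covers only the symmetric, untruncated case (β = 0); the truncation and skewness used by the authors are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(7)

def symmetric_stable(alpha, size):
    """Chambers-Mallows-Stuck sampler for a symmetric alpha-stable law
    (beta = 0, unit scale); alpha = 2 recovers the Gaussian up to scale."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    W = rng.exponential(1.0, size)                 # unit exponential
    return (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
            * (np.cos((1 - alpha) * V) / W) ** ((1 - alpha) / alpha))

samples = symmetric_stable(alpha=1.5, size=50_000)
```

For alpha < 2 the sample tails decay as a power law, P(|X| > x) ~ x^(-alpha), which is exactly the heavy-tail behavior that makes these laws attractive for gust modelling.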
Turbulence intensity.
The ratios between the turbulence intensities of the three wind components of the Friedrich-Kleinhans wind field have been taken equal to the ones for the Kaimal model.
Physical characteristics
We consider a single turbine of class A in a neutral atmosphere at a height of 100m over a flat terrain (roughness length z 0 = 0.01). The characteristic lengths of the turbulence eddies are taken as defined in 2.1. The turbulence intensity I is defined as in [1], depending on the mean wind speed.
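For reference, the NTM prescribes a wind-speed-dependent representative standard deviation; in the edition-3 form of IEC 61400-1 this is σ1 = I_ref·(0.75·V_hub + 5.6) with I_ref = 0.16 for class A. The exact constants should be treated as an assumption here, since the paper cites the 2004/2006 editions, which may parameterize the NTM differently.

```python
def ntm_sigma1(v_hub, i_ref=0.16):
    """Representative longitudinal turbulence standard deviation of the
    IEC NTM (edition-3 form): sigma1 = I_ref * (0.75 * V_hub + 5.6).
    Class A uses I_ref = 0.16; turbulence intensity is sigma1 / V_hub."""
    return i_ref * (0.75 * v_hub + 5.6)

speeds = [5, 8, 10, 12, 15, 18]               # m/s, as simulated below
ti = {v: ntm_sigma1(v) / v for v in speeds}   # intensity falls with speed
```

The dictionary shows the familiar NTM property that turbulence intensity decreases with the mean wind speed.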
Wind simulation
Time series are generated for 6 different wind speeds: 5m/s, 8m/s, 10m/s, 12m/s, 15m/s and 18m/s. For each wind speed, one hundred time series are calculated with one hundred different seed numbers. This ensures that the seed-dependent variations in the distribution of turbulence are well taken into account. We have 9 radial stations and 35 azimuth stations. The time step is 0.15s for 4096 steps. The loads are calculated using a model of the D8 wind turbine from DeWind. Its main characteristics are a hub height of 100m, a rated wind speed of 13.5m/s, a diameter of 80m and a rated power of 2MW. It is a pitch-regulated turbine with variable rotational speed.
Calculation procedure
The calculations are carried out with FLEX5 for time series of 600s. We are considering the edgewise and flapwise moment as well as the resulting moment at the blade root and the tilt moment MyTt at the tower top. The resulting moment is the module of the flapwise moment and the edgewise moment.
The loads are first obtained as time series from FLEX5, on which a rainflow counting (RFC) analysis is performed. The slope exponent of the material for the RFC calculations is 12 (fibre glass) at the blade root and 4 (steel) at the tower top. F ref is taken as 1 Hz. The loads obtained are represented as cumulated repartitions and are given as percentages of the mean value of the 100 load ranges or of the 100 maximal moments obtained for the Kaimal model. Therefore "higher" or "lower" means higher or lower than the loads obtained for the Kaimal model, the example values being relative to the 100% of the Kaimal model; "decrease" is also in comparison with the Kaimal model. In the following section, the cumulated distributions of the load ranges and of the maximal values of the investigated moments are presented for only two wind speeds per load, for lack of space; this suffices to show the announced trends with the wind speed. The graphics are shown in appendix A. The results from the Mann model are not presented here, as they were not available at the time of writing this paper.
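The post-processing step implied here, turning rainflow cycle counts into a 1-Hz damage-equivalent load with a material slope exponent m, can be sketched as follows. The cycle table is invented for illustration, and the rainflow extraction itself is omitted; only the standard DEL formula, DEL = (Σ nᵢ·Sᵢ^m / (f_ref·T))^(1/m), is shown.

```python
def damage_equivalent_load(ranges_counts, m, f_ref=1.0, t_sim=600.0):
    """Damage-equivalent load from rainflow (range, cycle-count) pairs:
    DEL = (sum n_i * S_i**m / N_ref)**(1/m), with N_ref = f_ref * t_sim
    reference cycles (here 600 cycles for a 600 s series at 1 Hz)."""
    n_ref = f_ref * t_sim
    damage = sum(n * s**m for s, n in ranges_counts)
    return (damage / n_ref) ** (1.0 / m)

# Invented cycle table: (load range in kNm, cycle count)
cycles = [(100.0, 50), (300.0, 10), (800.0, 1)]
del_blade = damage_equivalent_load(cycles, m=12)  # fibre glass, blade root
del_tower = damage_equivalent_load(cycles, m=4)   # steel, tower top
```

The comparison of the two results illustrates why the exponent matters: with m = 12 the single large cycle dominates the equivalent load far more strongly than with m = 4.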
Edgewise moment at the blade root: load ranges
The values for the Friedrich-Kleinhans model are lower and decrease with the wind speed from a mean value of 93.5% for 5m/s to 80.5% for 18m/s. See figures A1 and A2.
Flapwise moment at the blade root: load ranges
The values of the load ranges obtained for the Friedrich-Kleinhans model are higher, with no trend with the wind speed. For example, for 8m/s we have a mean of 107.9% and for 18m/s, 109.7%. See figures A3 and A4.
Resulting moment at the blade root: load ranges
The values for the Friedrich-Kleinhans model are higher for small wind speed and decrease to get lower for the higher wind speeds. For 5m/s, we have a mean of 107.6% and for 18m/s, 94.7%. See figures A5 and A6.
Tilt moment at the tower top: load ranges
The values for the Friedrich-Kleinhans model are lower, with no trend regarding the wind speed. For 5m/s, the mean value of the load is 11.1% lower; for 18m/s, 13.9% lower. See figures A7 and A8.

Flapwise moment at the blade root: maximal values

The flapwise moments tend to be higher but with no clear trend with the wind speed. We have for 8m/s a maximal moment 12.2% higher and for 15m/s, 1.3% higher. See figures A11 and A12.
Resulting moment at the blade root: maximal values
The values for the Friedrich-Kleinhans model are higher for small wind speed and decrease to get lower for the higher wind speeds. For 5m/s, we have 102.4% and for 15m/s, 93.7%. The maximal value of the maximal moments gets lower from +3.4% for 5m/s to -0.9% for 15m/s.
Tilt moment at the tower top: maximal values
The maximal values of the tilt moment for the Friedrich-Kleinhans model are similar with no trend regarding the wind speed. For 5m/s, the mean value of the maximal moments is the same as for the Kaimal model, for 18m/s by 1.3% higher. The maximal values of the maximal moments are higher for the Friedrich-Kleinhans model, up to +14.8% for 12m/s compared to the Kaimal model.
Standard deviation
As far as the standard deviations of the loads are concerned, we have (nearly) always higher values for the Friedrich-Kleinhans model than for the Kaimal model. There is a trend of increasing standard deviations with the wind speed (in comparison with the Kaimal model). For example, for the repartition of the load ranges of the resulting moment, the standard deviation is 1.5% higher for 5m/s and 94.6% higher for 18m/s. Extreme values occur, e.g. for the tilt moment, where the standard deviation is 221.7% higher for 18m/s.
Conclusion
From these results, it is not possible to draw any definitive "higher or lower loads" conclusion. The edgewise moments are lower for the Friedrich-Kleinhans model than for the Kaimal model, whereas the flapwise moments are higher. Concerning the resulting moment, though, the loads show a trend of being higher for small wind speeds and lower for higher wind speeds. The tilt moment is lower in every case studied.
The results ought, of course, to be as independent as possible from the particular turbine used for the study, in order to allow conclusions that are as general as possible. This ideal case remains ideal, however, and one should not neglect the effect of the control system. The values for wind speeds around or above the rated wind speed of the D8 (13.5m/s) are therefore to be interpreted with caution. For example, the much higher maximal value of the maximal resulting moments for the Friedrich-Kleinhans model at 18m/s (+29.8%) is attributed to the regulation system. For 15m/s and 18m/s, the wind turbine can be shut down if the gusts get too high, and it is this shutdown that can produce high loads that are not really representative of the model but rather of the quality of the regulation system.
The main conclusion that can be drawn from this study is that the non-Gaussian Friedrich-Kleinhans model produces loads that are significantly different from the loads obtained with the Kaimal model. This proves that the form of the tails of the increment distribution has a major influence on the loads of the wind turbine and should be considered when making fatigue calculations.

Figure A7. Cumulated distribution of the load ranges of the tilt moment, Vwind=5m/s.
"year": 2007,
"sha1": "31bf14abb2f4c0bbdd5b571729def37781b843f5",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/75/1/012070",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "b55760c5e3e25abf4e18ac0147cc6d90b6745d13",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
Targeting of CCL2-CCR2-Glycosaminoglycan Axis Using a CCL2 Decoy Protein Attenuates Metastasis through Inhibition of Tumor Cell Seeding
The CCL2-CCR2 chemokine axis has an important role in cancer progression where it contributes to metastatic dissemination of several cancer types (e.g., colon, breast, prostate). Tumor cell–derived CCL2 was shown to promote the recruitment of CCR2+/Ly6Chi monocytes and to induce vascular permeability of CCR2+ endothelial cells in the lungs. Here we describe a novel decoy protein consisting of a CCL2 mutant protein fused to human serum albumin (dnCCL2-HSA chimera) with enhanced binding affinity to glycosaminoglycans that was tested in vivo. The monocyte-mediated tumor cell transendothelial migration was strongly reduced upon unfused dnCCL2 mutant treatment in vitro. dnCCL2-HSA chimera had an extended serum half-life and thus a prolonged exposure in vivo compared with the dnCCL2 mutant. dnCCL2-HSA chimera bound to the lung vasculature but caused minimal alterations in the leukocyte recruitment to the lungs. However, dnCCL2-HSA chimera treatment strongly reduced both lung vascular permeability and tumor cell seeding. Metastasis of MC-38GFP, 3LL, and LLC1 cells was significantly attenuated upon dnCCL2-HSA chimera treatment. Tumor cell seeding to the lungs resulted in enhanced expression of a proteoglycan syndecan-4 by endothelial cells that correlated with accumulation of the dnCCL2-HSA chimera in the vicinity of tumor cells. These findings demonstrate that the CCL2-based decoy protein effectively binds to the activated endothelium in lungs and blocks tumor cell extravasation through inhibition of vascular permeability.
Introduction
Inflammatory chemokines are implicated in several chronic inflammatory diseases including rheumatoid arthritis, inflammatory bowel disease, atherosclerosis, and multiple sclerosis. There is accumulating evidence that chemokines play crucial roles during the establishment of primary cancerous lesions as well as metastases, and they are generally associated with a progressed state of cancer and poor prognosis [1][2][3]. Among inflammatory chemokines, CCL2 has been implicated in several crucial steps during cancer formation and metastasis including promotion of angiogenesis [4], recruitment of myeloid-derived suppressor cells [5][6][7], regulation of invasiveness of cancer cells [8,9], and induction of prosurvival signaling in different cancer cells [7,10,11]. Furthermore, high levels of CCL2 in circulation were associated with poor outcome for breast, prostate, and colon cancer patients due to high incidence of metastasis (reviewed in [3]). Recent studies provided evidence that CCL2-CCR2 signaling represents a crucial axis for the formation of the metastatic microenvironment, which was largely dependent on recruitment of inflammatory monocytes in breast, colon, and lung cancer models [12][13][14][15][16]. Lately, CCL2-mediated endothelial activation in the lungs was shown to be required for efficient tumor cell extravasation [14].
For a full chemotactic function, chemokines need to bind to glycosaminoglycan (GAG) chains, which are part of proteoglycans located at the surface of endothelial cells in the vasculature. This enables the formation of a solid-phase chemokine gradient [17]. Although chemokines can function as monomers and without binding to GAGs in vitro, chemokines require GAG-binding and oligomer-formation capability for their functionality in vivo [18]. Chemokine binding to its receptors induces a potent signaling only when the processed N-terminal part of the chemokine is not modified [19]. In the case of CCL2 and CCL5, a single methionine extension of the N-terminus generates a potent receptor antagonist [17]. An anti-inflammatory CCL2 mutant with enhanced binding affinity to GAGs and containing a CCR2-antagonist mutation has been recently developed and tested in various inflammatory animal models [20,21].
This first generation of CCL2 decoy protein contained two amino acid mutations (S21K and Q23R), which were introduced to increase GAG-binding affinity, as well as Y13A and an N-terminal methionine addition to block CCR2 activation. For the second generation of CCL2-based therapeutic mutant proteins, one additional basic amino acid substitution, S34K, was introduced into the chemokine sequence to further enhance the GAG-binding affinity. Because the protein consists of 77 amino acids, a rapid elimination from circulation was expected. For chronic therapeutic indications with parenteral application, however, we aimed to extend its serum half-life to prolong exposure. This was achieved by C-terminal fusion of the mentioned CCL2-based decoy protein to human serum albumin, which improved not only in vivo pharmacokinetic parameters but also the chemokine displacement pattern and the protein oligomerization behavior compared with the unfused decoy protein [22]. This novel fusion decoy protein with high therapeutic value (referred to as dnCCL2-HSA chimera) aims to target specific GAG structures in a similar way as antibodies target antigens. Here we present the first in vivo data derived from experiments in which the dnCCL2-HSA chimeric protein was tested for its activity in a murine metastasis model.
Cell Culture
Mouse colon carcinoma cell line MC-38 stably expressing GFP (MC-38GFP) was grown in Dulbecco's modified Eagle's medium with 10% fetal calf serum (FCS), and Lewis lung carcinoma cells (3LL) were grown in RPMI medium with 10% FCS [23,24].

dnCCL2 and dnCCL2-HSA Chimera Definition

The unfused CCL2 mutant (Met-CCL2 Y13A S21K Q23R S34K = dnCCL2) was produced in Escherichia coli and characterized as previously described [21]. The dnCCL2-based dnCCL2-HSA chimera was produced in Pichia pastoris and was purified by a two-step downstream process. The expression, purification, and characterization of this dnCCL2-HSA chimera are described in detail elsewhere [22]. In Figure 1, the schematic structure of the dnCCL2-HSA chimera is shown.
Surface Plasmon Resonance (SPR)
Binding of CCL2, dnCCL2, and dnCCL2-HSA chimera to unfractionated low-molecular weight heparin (Iduron, Manchester, UK) was investigated on a BiacoreX100 system (GE Healthcare) as described earlier [25]. Briefly, measurements were performed under a steady PBS flow containing 0.005% Tween. Biotinylated heparin was coupled on a C1 sensor chip, and each chemokine was measured at seven different concentrations. Contact times for all injections and dissociations were 120 seconds at 30 μl/min over both flow cells. Affinity constants were determined by a simple 1:1 equilibrium binding model where Req is plotted against the analyte concentration. Data were fitted using the steady-state formula that corresponds to the Langmuir adsorption equation provided by the Biacore Evaluation Software.
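The steady-state fit mentioned above, Req = Rmax·C/(KD + C) (the Langmuir isotherm), can be sketched as follows. The linearized least-squares route and the synthetic data below are illustrative assumptions, not the method of the Biacore evaluation software.

```python
import numpy as np

def fit_langmuir(conc, req):
    """Fit Req = Rmax * C / (KD + C) via the linearization
    C / Req = C / Rmax + KD / Rmax (a line in C, valid for low noise)."""
    slope, intercept = np.polyfit(conc, conc / req, 1)
    r_max = 1.0 / slope
    kd = intercept * r_max
    return kd, r_max

# Synthetic steady-state responses for an assumed KD of 50 nM, Rmax of 120 RU,
# at seven analyte concentrations (as in a typical SPR titration)
c = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 250.0, 500.0])
req = 120.0 * c / (50.0 + c)

kd, r_max = fit_langmuir(c, req)
```

On noisy data one would prefer a nonlinear least-squares fit of the isotherm directly, but the linearization shows the algebra behind the 1:1 equilibrium binding model.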
Migration Assay
The ability of dnCCL2-HSA chimera, dnCCL2, and CCL2 to induce the migration of freshly prepared human blood-derived monocytes was investigated using a 48-well Boyden chamber with a porous membrane (5-μm pore size; Neuroprobe, MD, USA). Human whole blood was obtained from healthy volunteers by venipuncture into heparinized tubes (Vacuette, GBO, Austria). Monocytes were isolated using Ficoll-Paque PLUS (GE Healthcare). The buffy coat was aspirated and washed thrice with Hank's balanced salt solution, and cells were resuspended in Hank's balanced salt solution (2 × 10^6 cells/ml). Protein dilutions ranging from 20 to 2000 nM in PBS were placed in the lower chambers, and the chemotactic potential was measured for each concentration 3 times. Freshly prepared monocytes (10^5 monocytes) were seeded in the upper chamber and incubated for 1 hour at 37°C. After the incubation, the insert was removed from the chamber, and cells attached on the lower side were fixed with methanol and stained with Hemacolor solutions (Merck, Germany). Cells were then counted at 40× magnification in 5 randomly selected microscopic fields per well.

Figure 1. The schematic structure of the dnCCL2-HSA chimera. CCL2 mutant (Met-CCL2 Y13A S21K Q23R S34K) was fused through a Gly-linker to human serum albumin, expressed and purified as described in Material and Methods.
Pharmacokinetics of dnCCL2 and dnCCL2-HSA Chimera in Mice
Animal care and handling procedures were performed in accordance with European guidelines, and all experiments were conducted under conditions previously approved by the local animal ethics committee in Graz. C57BL/6 male mice (Harlan, Italy), 6 to 8 weeks old, were intravenously injected with dnCCL2 (200 μg/kg body weight) or dnCCL2-HSA chimera (1600 μg/kg body weight, corresponding to 200 μg/kg dnCCL2 molar equivalent) in the lateral tail vein. At defined time points, blood was collected by heart puncture of deeply anesthetized mice (groups n = 3/time point) that were then euthanized. The serum concentration of dnCCL2 or dnCCL2-HSA chimera was analyzed using a human MCAF ELISA kit (Hölzel, Germany). The ELISA setup was performed according to the manufacturer's protocol and previously verified for cross-reactivity with both dnCCL2 and dnCCL2-HSA chimera. Data are reported as ng/ml of dnCCL2 equivalent, the active part of the molecule, to allow direct comparison of the pharmacokinetic profiles.
Mice
Animals were maintained under standard housing conditions, and experiments were performed according to the guidelines of the Swiss Animal Protection Law and approved by Veterinary Office of Kanton Zurich. C57BL/6 mice were purchased from Jackson Laboratory.
In Vitro Transmigration Assay
Primary pulmonary endothelial cells were isolated using positive immunomagnetic selection as described previously [14]. Briefly, lungs were perfused with PBS and digested with 1 mg/ml of collagenase A (Roche, Basel, Switzerland), and endothelial cells were purified with anti-CD31 antibody (Life Technologies, Carlsbad, CA) coupled to anti-rat IgG MicroBeads (Miltenyi Biotec, Bergisch Gladbach, Germany). Primary lung microvascular endothelial cells (3 × 10^4) were seeded on gelatin-coated 24-well Transwell inserts with 8-μm pores (BD, San Diego, CA) and allowed to grow to confluency (2 days). Tumor cells (2 × 10^4) were seeded into the Transwell inserts with or without monocytes (1 × 10^5) in 3% FCS/RPMI in the upper chamber and 10% FCS/RPMI in the lower chamber. Transmigration lasted for 16 hours in the presence or absence of 10 μg/ml or 100 μg/ml of dnCCL2, 10 μg/ml of Maraviroc (R&D Systems, England), or 400 U/ml of Tinzaparin (Leo Pharmaceuticals, Denmark). The number of transmigrated cells (MC-38GFP) was counted on the bottom of the insert membrane with a Zeiss AxioVision microscope (n = 3-4).
Analysis of Myeloid Cells from Blood and Lungs, and Cell Lines by Flow Cytometry
C57BL/6 mice were intravenously injected with 3 × 10^5 MC-38GFP cells. Mice treated with dnCCL2-HSA chimera (800 μg) received 1 intravenous injection 10 minutes before tumor cell administration. After 12 hours, lungs were perfused with PBS and digested with collagenase A and collagenase D (each 2 mg/ml, Roche) for 1 hour. Cells were filtered through a 70-μm cell strainer (BD), and the single-cell suspension was stained first with the LIVE/DEAD Fixable Aqua Dead Cell Stain kit (Life Technologies), followed by antibody staining: anti-CD45-Pacific Blue (Biolegend, San Diego, CA), anti-CD11b-PE-CF594 (BD), anti-Ly6G-APC-Cy7 (BD), and anti-Ly6C-BV570 (BD). Data were acquired with an LSR II Fortessa machine (BD) and analyzed by FlowJo software (Tree Star).
Tumor cell lines MC-38GFP and LLC1, and the macrophage cell line RAW264.7 were detached from cell culture flasks by 2 mM EDTA/PBS for 10 minutes and stained with either anti-CCR2-PE (R&D Systems) or isotype-PE control (Biolegend). Data were acquired with a FACS Canto machine (BD) and analyzed by FlowJo software (Tree star).
Immunohistochemistry (Frozen Sections)
Lungs of naive and tumor cell-injected mice were prepared for cryopreservation as previously described [23]. Lung sections (8 μm) were stained with anti-CD11b, anti-Ly6G (both from BD), and anti-F4/80 (AbD Serotec, Oxford, UK) followed by goat anti-rat AF568 (Life Technologies) antibodies. DAPI was used for nuclear staining. Pictures were taken with a Zeiss AxioVision microscope.
Histology and Immunohistochemistry
Lungs fixed in 4% paraformaldehyde were embedded in paraffin blocks. Lung sections (2 μm) were stained with hematoxylin/eosin or with antibodies anti-GFP (Fitzgerald Industries Int.) and anti-HSA (Sigma Aldrich, St. Louis, MO). Staining was performed on a NEXES immunohistochemistry robot (Ventana Instruments, Switzerland) using an IVIEW DAB Detection Kit (Ventana) or on a Bond MAX (Leica). Images were digitalized on Zeiss Mirax Midi Slide Scanner and analyzed with Panoramic Viewer (3DHISTECH).
Vascular Permeability Assay
Evans blue was intravenously injected, and lungs were perfused with PBS 30 minutes later as described previously [14]. Lungs were dissected, photographed, and homogenized. Evans blue was extracted with formamide, and the amount was measured with a spectrophotometer (absorbance at 620 nm).
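Spectrophotometric readings like these are typically converted to dye amounts against a standard curve. The sketch below is a generic illustration with invented absorbance values and group readings; it is not this study's data or analysis pipeline.

```python
import numpy as np

# Hypothetical Evans blue standard curve: absorbance at 620 nm for known
# dye concentrations (ug/ml) in formamide. All values are illustrative only.
std_conc = np.array([0.0, 1.25, 2.5, 5.0, 10.0, 20.0])    # ug/ml
std_a620 = np.array([0.01, 0.06, 0.12, 0.24, 0.47, 0.95])  # absorbance

# Linear fit in the Beer-Lambert (linear) region: A = slope * C + intercept.
slope, intercept = np.polyfit(std_conc, std_a620, 1)

def a620_to_conc(a620):
    """Interpolate dye concentration (ug/ml) from a 620-nm absorbance reading."""
    return (a620 - intercept) / slope

# Example extract readings for two hypothetical groups: lower absorbance
# means less dye leaked into the lung, i.e., lower vascular permeability.
treated = a620_to_conc(0.18)
untreated = a620_to_conc(0.33)
print(f"treated: {treated:.1f} ug/ml, untreated: {untreated:.1f} ug/ml")
```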
Experimental Metastasis
C57BL/6 mice were intravenously injected with 3 × 10^5 MC-38GFP or 1.5 × 10^5 3LL cells [14]. Mice were intravenously treated with the indicated amounts of dnCCL2-HSA chimera 10 minutes before tumor cell injection and 24 hours post tumor cell injection. Mice were euthanized 28 days later, lungs were photographed, and the number of metastatic foci was determined.
Spontaneous Metastasis
Lewis lung carcinoma cells (LLC1, 300,000 cells) were subcutaneously injected into C57BL/6 mice. Mice were treated with intravenous injections of dnCCL2-HSA chimera (70 μmol = 800 μg) on day 11, on day 13, and at the time of tumor removal (day 15). Mice were euthanized 2 weeks after surgical removal of the subcutaneous tumor (total day 29). The lungs were perfused with PBS, and the number of metastatic foci was counted.
RNA Isolation and Reverse Transcription
Thirty milligrams of lung tissue was transferred to a mortar, homogenized in liquid nitrogen, and lysed, and total RNA was extracted using GenElute Total Mammalian RNA Miniprep Kit (Sigma Aldrich) according to the manufacturer's protocol. Purity and quantity of the eluted RNA were determined by measuring the absorption at 260 and 280 nm. Two micrograms of total RNA was transcribed into cDNA using "High Capacity cDNA Reverse Transcription Kit" (Applied Biosystems).
Quantitative Real-Time Polymerase Chain Reaction (qPCR)
The mRNA expression of the target genes was analyzed with a qPCR assay using the SYBR Green I chemistry in an AB 7300 Real-Time PCR System Instrument (Applied Biosystems). Samples were heated to 95°C for 10 minutes, followed by 40 cycles of 95°C for 15 seconds, 55°C for 30 seconds, and 72°C for 1 minute. Quantitative real-time PCR was carried out with a final sample volume of 20 μl, containing 10 μl of Kapa SYBR Fast qPCR MasterMix for ABI Prism (Peqlab), 5 μl of the primer mix (2 μM each, forward and reverse; Invitrogen, Life Technologies), 0.5 μl of template cDNA, and 4.5 μl of nuclease-free water (Ambion, Life Technologies). The housekeeping enzyme glyceraldehyde 3-phosphate dehydrogenase (GAPDH) was used as an endogenous control. Group size was n = 4 to 5. Oligonucleotide sequences used for amplification are shown in Table 1. Expression levels are shown relative to GAPDH or to isolated cells from naive mice.
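The relative quantification described above ("relative to GAPDH or to isolated cells from naive mice") corresponds to the standard Livak 2^-ΔΔCt calculation. The sketch below uses hypothetical Ct values chosen only to illustrate a two-fold increase; they are not measured data.

```python
def fold_change_ddct(ct_target_sample, ct_ref_sample,
                     ct_target_control, ct_ref_control):
    """Livak 2^-ddCt method: fold change of a target gene, normalized to a
    reference gene (here GAPDH) and expressed relative to a control group."""
    dct_sample = ct_target_sample - ct_ref_sample      # normalize sample
    dct_control = ct_target_control - ct_ref_control   # normalize control
    return 2.0 ** -(dct_sample - dct_control)

# Hypothetical mean Ct values: a target gene in treated lungs vs. naive
# lungs, each normalized to GAPDH.
fold = fold_change_ddct(ct_target_sample=24.0, ct_ref_sample=18.0,
                        ct_target_control=25.0, ct_ref_control=18.0)
print(f"fold change vs. naive: {fold:.1f}")  # -> 2.0
```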
Statistics
Statistical analysis was performed with the GraphPad Prism software (version 5.0). All data are presented as mean ± SEM and were analyzed by analysis of variance with the post hoc Bonferroni multiple comparison test, unless specified differently.
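The analysis described above (one-way ANOVA followed by Bonferroni-corrected pairwise comparisons) can also be reproduced outside Prism. Below is a minimal SciPy-based sketch; all group names and values are synthetic, chosen only for demonstration.

```python
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic metastatic-foci counts for three illustrative groups (n = 5 each).
groups = {
    "control": rng.normal(60, 8, 5),
    "HSA": rng.normal(58, 8, 5),
    "dnCCL2-HSA": rng.normal(30, 8, 5),
}

# One-way analysis of variance across all groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4g}")

# Bonferroni post hoc: pairwise t-tests with p-values multiplied by the
# number of comparisons (capped at 1).
pairs = list(itertools.combinations(groups, 2))
for a, b in pairs:
    t_stat, p = stats.ttest_ind(groups[a], groups[b])
    p_adj = min(p * len(pairs), 1.0)
    print(f"{a} vs {b}: Bonferroni-adjusted p = {p_adj:.4f}")
```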
Pharmacological Blocking of CCL2 Inhibits Tumor Cell Transmigration In Vitro
A signaling-deficient CCL2 chemokine decoy protein with enhanced GAG-binding affinity was previously shown to inhibit recruitment of inflammatory leukocytes in vivo [21]. To further improve the therapeutic potential of CCL2-based decoys, the chemokine has been additionally engineered and fused to human serum albumin (HSA) to extend the serum half-life and to optimize the GAG-binding protein displacement profile with the aim of avoiding off-target effects [22]. First, we measured the affinity of the unfused CCL2 mutant (designated as dnCCL2) and the HSA-coupled dnCCL2 (further designated as dnCCL2-HSA chimera; Figure 1) toward heparin using SPR measurement. We observed a significantly enhanced affinity for heparin of both dnCCL2 and dnCCL2-HSA chimera compared with wild-type CCL2 (Figure 2A). Next, we tested the chemotactic activity of dnCCL2 and dnCCL2-HSA chimera and compared it with the wild-type CCL2 control. Neither mutant protein induced monocyte migration when tested in the range of 20 to 2000 nM, in contrast to the strong chemotactic activity induced by wild-type CCL2 (Figure 2B). Finally, dnCCL2 was tested for its activity in a murine model, which we selected for the analysis of the CCL2-CCR2 axis in cancer progression. For this purpose, we used MC-38GFP cells, which have been shown to produce CCL2 and metastasize in a CCL2-dependent manner [14,15] and do not express CCR2 (Figure 2D). The capacity of dnCCL2 to affect monocyte-facilitated tumor cell (MC-38GFP) transmigration through a monolayer of pulmonary microvascular endothelial cells was tested using the Boyden chamber assay (Figure 2C). Although monocytes clearly potentiated endothelial transmigration of tumor cells [13,14], the presence of dnCCL2 at concentrations of 10 or 100 μg/ml significantly and dose-dependently attenuated this process.
In contrast, there was no effect on tumor cell transmigration in the presence of a CCR5 inhibitor (Maraviroc) or of a low-molecular weight heparin, Tinzaparin. Heparin was tested for its potential to bind CCL2 and thereby interfere with endothelial transmigration. These data indicate that the GAG-mediated CCL2-CCR2 chemokine axis is critical for an efficient tumor cell transendothelial migration and more importantly that dnCCL2 is biologically active also in a murine cell-based system.
dnCCL2 Exposure In Vivo Was Enhanced upon Conjugation to Human Serum Albumin
Before assessing the biological potential of dnCCL2-HSA chimera in vivo, we characterized its pharmacokinetic profile in comparison to dnCCL2 upon intravenous injection of equimolar quantities (Figure 3). Following intravenous dosing of both dnCCL2-HSA chimera and dnCCL2, the mean serum profiles showed concentration levels declining with apparent biphasic distribution and elimination curves. The pharmacokinetic profiles were markedly different, with the former showing detectable levels up to 72 hours after administration, whereas, for the latter, no concentrations were detectable after 18 hours. The initial distribution phase for dnCCL2-HSA chimera showed a rapid decrease in concentrations followed by a slow log-linear elimination phase lasting from 4 hours up to 72 hours. Elimination was nearly complete at the final time point assessed, with the area under the plasma concentration versus time curve (AUC0-last: 1761.3 h×ng/ml) being similar to the AUC extrapolated to infinity (1797.7 h×ng/ml). A long mean residence time of 12.2 hours was observed. The elimination phase (calculated from 4 hours onward) gave an apparent terminal half-life of 16.8 hours. The clearance value was low, namely, about 0.111 l/h per kilogram. In comparison, following dnCCL2 treatment, the initial distribution phase showed an even faster decrease in concentrations followed by a log-linear elimination phase lasting from 4 hours up to 18 hours only. Elimination was complete at 18 hours, with the AUC0-last (44.7 h×ng/ml) and the AUC extrapolated to infinity (44.8 h×ng/ml) showing nearly equal values. The mean residence time was exceptionally low, namely, 1.0 hour. The elimination phase, evaluated in the 4- to 18-hour interval, showed an apparent terminal half-life of 2.8 hours. The clearance value (4.461 l/h per kilogram) was very high.
Taken together, the exposure following dnCCL2-HSA chimera intravenous administration was about 40 times higher than that observed after treatment with dnCCL2.
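The parameters reported above (AUC0-last, AUC extrapolated to infinity, terminal half-life, clearance) follow from standard noncompartmental analysis. The sketch below uses an invented concentration-time profile chosen only to roughly echo the chimera's reported values; the 200 μg/kg dose is the dnCCL2 molar equivalent, consistent with serum levels being reported as dnCCL2 equivalents.

```python
import numpy as np

# Hypothetical serum concentration-time profile (ng/ml, dnCCL2 equivalent)
# after a single i.v. dose; values are illustrative, not the study's raw data.
t = np.array([0.25, 0.5, 1, 2, 4, 8, 18, 24, 48, 72], dtype=float)     # hours
c = np.array([420, 300, 190, 110, 60, 45, 28, 22, 8, 3], dtype=float)  # ng/ml

# AUC from first to last sampling point by the linear trapezoidal rule.
auc_last = float(np.sum((c[1:] + c[:-1]) / 2 * np.diff(t)))

# Terminal elimination rate constant (lambda_z) from a log-linear fit of the
# elimination phase (4 h onward, the window used in the paper).
tail = t >= 4
lam_z = -np.polyfit(t[tail], np.log(c[tail]), 1)[0]
t_half = np.log(2) / lam_z                      # apparent terminal half-life

# Extrapolate AUC to infinity and derive clearance from the dnCCL2
# molar-equivalent dose (200 ug/kg = 2e5 ng/kg).
auc_inf = auc_last + c[-1] / lam_z
clearance = 200e3 / auc_inf                     # ml/h per kg
print(f"AUC0-last = {auc_last:.0f} h*ng/ml, t1/2 = {t_half:.1f} h, "
      f"CL = {clearance:.0f} ml/h per kg")
```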
dnCCL2-HSA Chimera Reduces Tumor Cell-Induced Vascular Permeability and Tumor Cell Seeding to the Lungs
Initial recruitment of Ly6C^hi cells was previously shown to promote tumor cell extravasation and thereby metastasis [13,14]. First, we tested whether dnCCL2-HSA chimera affects the number of circulating monocytes. We treated mice twice with 800 μg of dnCCL2-HSA chimera (corresponding to 100 μg of dnCCL2 equivalent) and quantified the number of myeloid cells in the blood (Figure 4A). We observed comparable numbers of Ly6C^hi cells in both naive controls and dnCCL2-HSA chimera-treated mice. To determine whether dnCCL2-HSA chimera can inhibit the recruitment of monocytes in vivo, we analyzed leukocytes infiltrating the lung 12 and 24 hours after intravenous injection of MC-38GFP cells by flow cytometry.
Neoplasia Vol. 18, No. 1, 2016 Targeting CCL2 Attenuates Metastatic Seeding Roblek et al.
Surprisingly, dnCCL2-HSA chimera treatment did not alter the recruitment of myeloid cells, including Ly6C^hi cells, to the lungs of tumor cell-injected mice compared with controls (Figure 4B). We observed a similar increase in myeloid cells (CD11b+ cells) after 12 hours, which was diminished after 24 hours (not shown), in the lungs of dnCCL2-HSA chimera-treated and control mice. To further test whether dnCCL2-HSA chimera affects leukocyte recruitment to the arrested tumor cells in the lungs, we analyzed lung sections for tumor cell-leukocyte association 12 and 24 hours post tumor cell injection using immunohistochemistry. We observed an association of CD11b+ cells with tumor cells that was reduced upon dnCCL2-HSA chimera treatment at 12 hours (Figure 4C). However, this reduction was not detected at 24 hours. We found an initially higher association of Ly6G+ cells with tumor cells, which decreased over time independent of treatment (Figure 4C). In contrast, we observed equal association of F4/80+ cells with tumor cells independent of time and treatment. Next, we evaluated whether dnCCL2-HSA chimera treatment affects lung vascular permeability, which was shown to be dependent on tumor-derived CCL2 and endothelial CCR2 expression [14]. Mice treated with dnCCL2-HSA chimera showed reduced vascular leakiness compared with untreated mice, as determined by Evans blue assay 24 hours post tumor cell injection (Figure 4D). To determine whether reduced vascular permeability in the presence of dnCCL2-HSA chimera affects tumor cell survival in the lungs and their extravasation, we analyzed lungs of mice intravenously injected with MC-38GFP cells after 6, 12, 24, and 48 hours (Figure 4E). Indeed, dnCCL2-HSA chimera treatment significantly reduced the number of living tumor cells in the lungs at 24 hours compared with control (untreated) lungs, and the number remained reduced after 2 days.
These findings indicate that temporal inhibition of the CCL2-CCR2 axis by dnCCL2-HSA chimera treatment diminishes the ability of tumor cells to leave the vasculature.
dnCCL2-HSA Chimera Treatment Reduces Pulmonary Metastasis
To test the hypothesis that the CCL2 decoy protein inhibits metastatic formation in the lungs, we used an experimental metastasis model with MC-38GFP cells. We treated mice intravenously with dnCCL2 or dnCCL2-HSA chimera 10 minutes before tumor cell injection and 24 hours post tumor cell injection. A significant reduction of lung metastasis after 28 days was observed in mice treated with the dnCCL2-HSA chimera at two different doses: 17.5 μmol = 200 μg and 70 μmol = 800 μg, respectively (Figure 5, A and B). However, an equimolar dose of dnCCL2 (70 μmol = 100 μg) had no effect on metastasis, likely because of its fast elimination. Similarly, mice treated with HSA alone did not show a reduced number of metastases (Figure 5, A and B). Thus, we concluded that the prolonged serum half-life of the dnCCL2-HSA chimera is needed for the antimetastatic activity. dnCCL2-HSA chimera treatment of mice before injection of Lewis lung carcinoma cells (3LL) also attenuated metastasis (Figure 5C), indicating that this activity is tumor cell type independent. Finally, we tested the capacity of dnCCL2-HSA to inhibit spontaneous lung metastasis of Lewis lung carcinoma cells (LLC1) upon subcutaneous injection. Indeed, three intravenous injections of dnCCL2-HSA, covering approximately 6 days during the time when tumor cells might be in circulation, attenuated metastasis (Figure 5D). These data confirmed that the GAG-mediated CCL2-CCR2 axis promotes metastatic initiation and that specific inhibition of CCL2 accumulation at the site of tumor cell extravasation can inhibit this process.
Metastasizing MC-38GFP Cells Induce Syndecan-4 (SDC4) Expression in the Lungs
Chemokine binding to GAGs on the endothelial surface enables formation of an intravascular chemokine gradient [17,18]. To investigate the mechanism of the dnCCL2-HSA chimera activity in the metastatic model, we first analyzed the expression of proteoglycans in the lungs 12 and 24 hours after injection of MC-38GFP cells. We observed no significant changes in mRNA expression of the different syndecans and glypicans, with the exception of SDC4 ( Figure 6A). SDC4 expression was significantly increased 12 hours post tumor cell injection. To confirm that the increase of SDC4 expression corresponded to endothelial cells, we sorted pulmonary endothelial cells from naive and from MC-38GFP-injected mice 12 hours postinjection. A two-fold increase in SDC4 expression was detected in pulmonary endothelial cells isolated from tumor-injected mice ( Figure 6B). Tumor cell arrest in the lung vasculature was shown to lead to local endothelial activation [26]. To test whether SDC4 expression correlates with endothelial activation, we analyzed E-selectin expression. We observed a five-fold increase in E-selectin expression, which confirms the activation status of the endothelial cells. These data showed that tumor cells induce endothelial activation, which correlates with an enhanced expression of SDC4.
Enhanced accumulation of CCL2 in the lungs during metastatic initiation has been observed [23]. To assess the binding of the dnCCL2-HSA chimera to GAGs in vivo, MC-38GFP-injected mice were either treated with 800 μg of dnCCL2-HSA chimeric protein or left untreated (control), and lungs were removed for analysis 12 and 24 hours post tumor cell injection. dnCCL2-HSA chimera was detected using anti-HSA antibodies. We observed dnCCL2-HSA chimera staining throughout the lung vasculature, with an enhanced presence detected in the proximity of MC-38GFP cells at 24 hours (Figure 6E). Treatment with HSA in MC-38GFP-injected mice resulted only in background staining (MC-38GFP/HSA). Similarly, mice injected only with MC-38GFP cells (control) or with HSA but without tumor cells (HSA) showed no staining. These data indicate that tumor cell-induced expression of SDC4 in the lungs correlates with dnCCL2-HSA chimera localization in the lung vasculature in the proximity of tumor cells.
Discussion
Chemokines are mediators of directed cell migration, such as leukocyte recruitment during inflammation, as well as activators of several other important cellular pathways (e.g., Jak/Stat signaling). CCL2 has been identified as a major cancer-associated chemokine, which promotes cancer progression by modulating both tumor cell metastatic behavior and the metastatic microenvironment [7,10,12-14]. CCL2-mediated recruitment of CCR2+ inflammatory monocytes (Ly6C^hi) facilitates tumor cell extravasation and metastasis. Recently, stromal CCR2 expression has been identified to also promote this process [12,14]. Specifically, activation of CCR2+ endothelial cells in the lungs is required for induction of vascular permeability and efficient tumor cell extravasation [14]. In another study, CCL2-mediated activation of brain endothelial cells resulted in decreased barrier function and increased vascular permeability [27]. Endothelial CCR2 expression in the brain is critical for transendothelial migration of macrophages, which was absent in CCR2−/− brain microvascular endothelial cells upon CCL2 stimulation [28]. These studies showed that CCL2 triggering of endothelial CCR2 increases endothelial and vascular permeability and contributes to efficient leukocyte extravasation. It is well established that CCR2 is required for inflammatory monocytes to leave the bone marrow after an inflammatory stimulus but appears to be dispensable for inflammatory monocytes to be recruited to sites of infection [29]. This observation is further supported by Soehnlein et al., who showed that the recruitment of Gr1^hi monocytes to atherosclerotic vessels depends on the CCR1 and/or CCR5 axis but not on CCR2 [30]. Similarly, the chemotactic response of monocytes toward synovial fluid from rheumatoid arthritis patients showed that neither anti-CCR2 nor anti-CCR5 antibodies inhibited the recruitment, whereas CCR1-mediated recruitment was shown to be essential [31].
Because there are a number of chemokines potentially involved in monocyte recruitment to both the inflammatory sites and the metastatic microenvironment, further studies are required for identification of specific roles of chemokines during both inflammation and metastasis.
During cancer progression, CCL2 initiates endothelial activation and contributes to the recruitment of inflammatory monocytes to metastatic sites [12-16]. The use of a CCL2-neutralizing antibody reduced metastasis of breast cancer cells to the bone and the lungs [12,13]. The reduced recruitment of monocytes to the lung vasculature resulted in attenuated tumor cell extravasation. In a prostate cancer model, the use of a neutralizing CCL2 antibody also resulted in attenuation of metastasis [32]. In addition, targeting of CCR2 with a small-molecule inhibitor caused a reduction of primary tumor growth and liver metastasis of pancreatic adenocarcinoma cells [33]. The CCR2 inhibitor treatment also reduced the number of inflammatory monocytes in the liver. However, the anti-CCL2 mAb Carlumab (CNTO888) showed no activity in patients with solid tumors due to a transient reduction of CCL2 levels followed by a significant increase [34]. In the present study, we did not detect reduced infiltration of inflammatory monocytes (Ly6C^hi) into the lungs 12 hours after intravenous tumor cell injection in dnCCL2-HSA chimera-treated mice compared with control mice. We did observe a reduced association of CD11b+ cells with tumor cells 12 hours postinjection. In contrast, we showed evidence for reduced vascular permeability upon dnCCL2-HSA chimera treatment and reduced metastasis. Finally, the CCL2 decoy inhibitor diminished tumor cell transmigration through endothelium in the presence of monocytes in vitro, which is in agreement with previous data [13,14]. These findings indicate that targeting CCL2-induced vascular activation is likely the mechanism by which dnCCL2-HSA chimera reduces metastasis.
Inhibition of the CCL2-CCR2 chemokine axis in various animal models provided strong evidence for effective control of both the primary tumor and metastasis [12,13,33]. However, CCL2 appears to modulate monocyte activity depending on the cellular context. In a breast cancer model, CCL2 had a protumorigenic activity at the primary site, whereas it inhibited lung metastasis [35]. Recent findings showed that surgical manipulation enhances the metastatic burden by increasing tumor cell binding to the lung and liver vasculature and, as a consequence, causes an increased metastatic colonization of these organs [36]. Therefore, understanding the inhibitory mechanism of the CCL2-CCR2 axis is critical for evaluating this strategy for a clinical application. In particular, the involvement of GAGs has so far been underestimated, and its mechanisms in vivo have remained elusive. The mechanism of the dnCCL2-HSA chimera action appears to be different from that of CCL2-neutralizing antibodies [12,13]. Here, we show that dnCCL2-HSA chimera efficiently accumulates around metastatic tumor cells in the lungs. We also did not observe altered leukocyte numbers in peripheral blood upon dnCCL2-HSA chimera treatment. In contrast, systemic use of a CCR2 inhibitor also significantly reduced the levels of circulating inflammatory monocytes due to its activity in the bone marrow, where the egress of monocytes was affected [33]. Hence, dnCCL2-HSA chimera treatment likely influences the local metastatic microenvironment in the target tissue (lungs) but does not affect monocyte egress from the bone marrow.
Tumor cell-activated endothelium induced local expression of SDC4, which correlated temporally with an increased presence of dnCCL2-HSA chimera around the tumor cells. Whether proteoglycan families other than syndecans and glypicans are involved remains to be determined. Previously, in a mouse model of inflammation, a specific upregulation of SDC4 in the lungs was detected [37]. Further analysis showed that TNF-α stimulation of endothelial cells upregulates SDC4 expression [38]. GAG binding and oligomerization of chemokines are essential for their activity in vivo [18]. Furthermore, syndecans (1 and 4) on macrophages were shown to bind chemokines such as CCL5, and the enzymatic removal of GAGs significantly reduced chemokine activity [39]. A rationally designed CCL2 decoy protein with enhanced GAG-binding activity and with CCR2-antagonist activity showed a potent anti-inflammatory activity in vivo [21]. The further development of the CCL2 decoy protein, the dnCCL2-HSA chimera (named GAGbody™), also revealed a potent inhibitory activity on CCL2-mediated tumor cell transendothelial migration and tumor cell extravasation. We show that the dnCCL2-HSA chimera efficiently localizes to the tumor cell-activated vasculature in vivo. Thus, the CCL2 decoy strategy using the dnCCL2-HSA chimera represents an attractive alternative for targeting chemokines in a defined microenvironment during metastasis.
"year": 2016,
"sha1": "c5ed9d4d60a79c4da15a839e1d9163264f64bef9",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.neo.2015.11.013",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c5ed9d4d60a79c4da15a839e1d9163264f64bef9",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221015446 | pes2o/s2orc | v3-fos-license | Dictators and Their Subjects: Authoritarian Attitudinal Effects and Legacies
This introductory essay outlines the key themes of the special issue on the long-term impact of autocracies on the political attitudes and behavior of their subjects. Here, we highlight several important areas of theoretical and empirical refinements, which can provide a more nuanced picture of the process through which authoritarian attitudinal legacies emerge and persist. First, we define the nature of attitudinal legacies and their driving mechanisms, developing a framework of competing socialization. Second, we use the competing socialization framework to explain two potential sources of heterogeneity in attitudinal and behavioral legacies: varieties of institutional features of authoritarian regimes, which affect the nature of regime socialization efforts; and variations across different subgroups of (post-)authoritarian citizens, which reflect the nature and strength of alternative socialization efforts. This new framework can help us to better understand contradictory findings in this emerging literature as well as set a new agenda for future research.
Introduction
Today, about half of the world's population lives in either closed or electoral authoritarian regimes. 1 Another 40% live in countries, which experienced autocratic periods in the last 80 years. Taken together, nine out of 10 people in the world today had direct or indirect exposure to authoritarian regimes. 2 Crucially, there is widespread agreement and much anecdotal evidence that this experience has shaped-often in dramatic and lasting ways-the attitudes and behavior of individuals living under such regimes, often for long after the regime has been overthrown. Yet, we have surprisingly limited knowledge of the mechanisms through which authoritarian attitudinal and behavioral legacies emerge and persist. This special issue proposes a new framework and research agenda for a more systematic study of authoritarian attitudinal legacies and brings together four papers that contribute to several key dimensions of this emerging research agenda.
Although in the last two decades there has been a significant revival in the study of authoritarian legacies, the bulk of this literature has focused on aggregate outcomes, such as institutions and elite actors, especially political parties. 3 These issues are undoubtedly very important for understanding postauthoritarian politics, including the prospects for successful democratization and democratic survival, as well as many other aspects of policy making in former authoritarian countries. However, we know from the democratization literature that public support, a democratic political culture, and an active citizenry are also fundamental for the survival of democracy (e.g., Booth & Seligson, 2009;Claassen, 2020;Diamond, 1999;Norris, 1999). Similarly, the political attitudes of citizens matter greatly for the types of economic and social policies that we can expect to emerge from the democratic process.
If citizens' political preferences and behavior are crucial for understanding the resilience and functioning of democracy, it is important to investigate how these are formed. Although this question has received considerable attention in established democracies (e.g., Alvarez & Brehm, 2002;Jennings, 1989;Jennings & Markus, 1984;Zaller, 1992), in a post-authoritarian context an important part of the answer hinges on understanding how political attitudes and behavior are shaped by the authoritarian past. The articles in this special issue contribute to a small (but growing) set of studies focused more squarely on the impact of authoritarian regimes on individual political attitudes and behavior.
At the most basic level, we can think of authoritarian attitudinal legacies as consisting of two necessary steps. The first step is for authoritarian regimes to shape the attitudes and behavior of their citizens. The second step is for these effects to persist across a regime divide, that is, after the end of the regime that inculcated those initial effects (Beissinger & Kotkin, 2014). The first step can be studied directly by focusing on public opinion in contemporary authoritarian regimes. Such an approach, which is exemplified by one of the contributions to this special issue (Tertytchnaya, 2020) and by a growing literature on the contemporaneous attitudinal effects of authoritarian regimes, 4 has the obvious advantage of allowing for a direct test of authoritarian attitudinal effects. However, in addition to their analytical challenges, 5 such studies are limited in the extent to which they can address the durability of these effects, and-by definition-they cannot establish the nature of post-authoritarian attitudinal legacies.
The second approach is to analyze the effects of these regimes on their citizens' political attitudes and behavior after the regime breaks down. This approach, which is the primary focus of three of the four articles in this special issue, as well as of a small but rapidly growing literature on authoritarian attitudinal legacies, has the advantage of being able to address the crucial question of legacy durability. Several existing studies have established the existence and the durable impact of authoritarian regimes on a variety of attitudes, including lower support for and satisfaction with democracy (Neundorf, 2010; Pop-Eleches & Tucker, 2014, 2017), demand for democracy (Mattes & Bratton, 2007), support for the previous regime (Mishler & Rose, 2007), the emergence of political trust (Mishler & Rose, 2001), and attitudes toward markets and welfare states (Alesina & Fuchs-Schündeln, 2007; Pop-Eleches & Tucker, 2014, 2017), as well as behavior, including lower civic and political participation (Bernhard & Karakoç, 2007; Ekiert & Kubik, 2014; Northmore-Ball, 2014; Pop-Eleches & Tucker, 2014).
Despite yielding some promising and valuable insights, the existing research has produced contradictory results, 6 is still limited in its scope, and faces a number of important theoretical and analytical challenges. This introductory essay draws on the contributions to this special issue to lay out a new framework and research agenda that can help us overcome at least some of these limitations. First, existing research in this area lacks a unified theoretical framework that conceptualizes the key concepts related to attitudinal legacies. Furthermore, though several of these studies have started to investigate the mechanisms underlying the production and reproduction of these legacies (Darden & Grzymala-Busse, 2006; Lupu & Peisakhin, 2017; Neundorf, 2010; Pop-Eleches & Tucker, 2017; Wittenberg, 2006), much more work remains to be done to theorize and test these mechanisms. This introductory essay tries to fill this gap. Second, because much of this work has focused on the post-communist countries of Eastern Europe and the former Soviet Union, existing studies have largely failed to take advantage of the analytical advances in the literature on varieties of authoritarianism. Third, with a few exceptions, existing studies have not sufficiently addressed the important individual heterogeneities in authoritarian attitudinal legacies. This applies both to differential effects on different subgroups of authoritarian subjects and to different types of attitudes and behavior. We discuss each of these issues in greater detail in the next sections and then touch upon a few additional analytical challenges in our discussion of future research directions in the conclusion.
Authoritarian Attitudinal Legacies-A New Theoretical Framework
To understand how authoritarian regimes shape political attitudes, and how these short-term effects eventually translate into attitudinal legacies, we need to understand the mechanisms through which attitudes are formed and reproduced. In this section, we discuss a new theoretical framework of authoritarian attitudinal legacies, focusing on three key questions: First, how do autocracies affect their citizens? Second, what is the mechanism underlying this authoritarian influence? Finally, what affects the longevity of this effect?
Direction: How Do Autocracies Affect Their Citizens?
To understand the initial attitudinal impact that autocracies have on their citizens, it is important to distinguish between different possible individual reactions to regime efforts, mainly transmitted through indoctrination. Perhaps the most straightforward scenario, which we will call internalization, is that individuals will adopt political attitudes and behaviors in line with the "official line" of the regime. Several findings in this special issue, such as the greater prevalence of left-authoritarians among respondents with greater personal communist exposure (Pop-Eleches & Tucker, 2020) as well as higher nostalgia and weaker democratic support and satisfaction among individuals who spent their formative years under authoritarianism (Neundorf et al., 2020), suggest that authoritarian indoctrination can indeed produce significant and lasting attitudinal effects in line with the goals of the authoritarian regime.
However, given the legitimacy deficit of many authoritarian regimes, there are also good reasons to expect that indoctrination could be ineffective or even counterproductive. As Tertytchnaya's (2020) analysis in this special issue suggests, the rejection of the authoritarian regime can result either in disengagement from the political process or in embracing the opposition. In attitudinal terms, these two alternative types of resistance should translate either into a lack of correlation between authoritarian ideology and mass attitudes (in the case of disengagement) or into the embrace of the opposite of whatever attitudes the authoritarian regime is trying to promote (in the case of rejection; Wittenberg, 2006). Dinas and Northmore-Ball's (2020) article in this special issue provides a further example of a rejection effect. They show that respondents from countries with a recent history of right-wing authoritarian regimes were more likely to embrace a leftist ideology subsequently: a pattern that they interpret as reflecting the rejection of the ideological tenets of the illegitimate authoritarian regimes. Furthermore, they show that a greater reliance on repression is associated with a higher rejection of the ideological orientation of both left- and right-wing authoritarian regimes.
Our starting point here is the assumption that autocracies can influence their citizens in two ways. Their indoctrination efforts can either pay off and lead to internalization of the regime doctrine or provoke resistance among citizens, which can result either in outright rejection of the regime's doctrine or, at the very least, in disengagement from the political process. To understand why authoritarian regimes sometimes produce compliant citizens and sometimes trigger resistance or even backlash, we need a better understanding of the mechanisms through which both of these effects occur.
Competing Socialization: What Are the Mechanisms of Authoritarian Influence?
One of the key limitations in the literature on authoritarian attitudinal legacies relates to the mechanisms underlying regime impact. Prior work on authoritarian legacies has highlighted a variety of channels, including political socialization through regime indoctrination (Neundorf, 2010;Pop-Eleches & Tucker, 2017) and intergenerational transmission of attitudes (Darden & Grzymala-Busse, 2006;Lupu & Peisakhin, 2017). Nevertheless, the study of the mechanisms underlying the production and reproduction of these legacies is still in its infancy.
Here, we argue that the key mechanism underlying authoritarian attitudinal legacies, regardless of whether authoritarian subjects internalize or reject the regime doctrine, is political socialization and learning. Drawing on insights from research in advanced democracies on the emergence and durability of political attitudes and behavior (Bartels & Jackman, 2014; Jennings, 1989; Jennings & Markus, 1984; Krosnick & Alwin, 1989; Mannheim, 1952; Zaller, 1992), we argue that initial attitudes are formed when people are young, during their so-called formative years. As we know from research on political socialization, children and young people learn about and internalize societal norms, values, and identities through processes of imitation and repetition. These norms then translate into political preferences and behavior. New information and later-life political experiences will be processed in relation to these initial political attitudes formed early on (Mishler & Rose, 2001; Sears & Funk, 1999; Zaller, 1992). More recent work on post-communist countries (Pop-Eleches & Tucker, 2017) suggests that authoritarian socialization continues, and may even strengthen, among adults.
The main question then is what kind of information and which agents are key in the formative socialization and potential revision of political attitudes and behavior in later life. We argue that this process is best conceived as a set of competing socialization efforts. At any point in time, individuals are influenced by a range of different, and potentially competing, socialization agents: on the macro-level, the political regime; on the meso-level, political and societal organizations; and on the micro-level, family and peers. Although any of these agents can either reinforce or undermine the others' socialization efforts, for the purpose of understanding authoritarian legacies, the key question is how the socialization "project" of the authoritarian regime interacts with the agendas of various meso- and micro-level actors. In the next section, we briefly outline how we expect each socialization agent to impact individuals' political belief system.
First, we expect the political regime to play a key role in shaping the political socialization of its citizens. In their efforts to ensure mass support and compliance, authoritarian regimes try to shape the political attitudes and behavior of individuals in line with official ideology. As Dinas and Northmore-Ball (2020) argue in their contribution to this special issue, schools are a key component in the indoctrination apparatus of any state. Many autocracies directly control the education system, which allows them significant access to young people during the crucial formative years. 7 As Cantoni et al. (2017) show in the Chinese case, regimes can use schools for indoctrination and propaganda purposes. Through textbooks and curriculum design, regimes affect the content and nature of information that young, impressionable individuals are exposed to, which we expect to impact the development of certain political attitudes and behavior, leading to internalization.
Moreover, autocracies often control the media and the broader information environment, which allows them to transmit the regime ideology or mentality to citizens of all ages. In sum, we expect citizens who are exposed to a singular worldview (that of the regime), transmitted through an education and information environment strongly controlled by the political regime, to internalize these ideas in their own political belief systems. However, to understand authoritarian attitudinal legacies, these macro-level regime indoctrination mechanisms need to be complemented by meso-level and individual-level institutional and social mechanisms responsible for both internalization and resistance.
The political regime does not have full control over people's socialization and learning. Individuals are also exposed to potentially contradictory narratives and information by other societal organizations, such as churches, unions, and political parties. Communist indoctrination, for example, was clearly challenged in Poland (and other parts of Eastern Europe) by the Catholic Church, which provided an alternative education and ideology (Mazgaj, 2010; Mueller & Neundorf, 2012). Although much work remains to be done in this area, the contributions to the special issue take some steps toward identifying and testing some of these institutional transmission mechanisms. For example, Pop-Eleches and Tucker (2020) test whether the weaker communist socialization effects among women were due to their lower participation rates in the formal workforce, the communist party, and the army, three potentially important sites of indoctrination at the meso-level. Although they find no evidence that any of these channels explain the gender differentials, they do find that higher church attendance among women may account for their greater resistance to communist socialization, which reinforces earlier findings about the role of churches as a key institutional source of anti-authoritarian resistance (see, for example, Wittenberg, 2006). Given this argument, and preliminary findings that meso-level socialization agents can potentially undermine a regime's efforts to create mass support and compliance, it is not surprising that autocracies often use different forms of repression to minimize the impact of actors beyond the regime's main organizations (Escribà-Folch, 2013). But, as already mentioned above, repression can also backfire, creating more resistance among citizens (Dinas & Northmore-Ball, 2020; Rozenas & Zhukov, 2019).
Finally, we expect family and peers to shape the development of individuals' political attitudes and behavior (Darden & Grzymala-Busse, 2006; Pop-Eleches & Tucker, 2017). As with the societal organizations discussed above, these individual-level socialization agents can either reinforce or undermine regime socialization. Intergenerational transmission of political preferences can strengthen the regime's efforts to win the hearts and minds of its citizens if parents are supporters of the regime, though the opposite is true if parents oppose the regime. The impact of individual-level agents on authoritarian attitudinal legacies remains the least understood and tested.
Here, we argue that there are different levels and agents that affect the development of individuals' political socialization. Although we are here primarily interested in how autocracies shape the political attitudes and behavior of their citizens, we argue that this process cannot be properly understood if we ignore the fact that authoritarian socialization efforts can be challenged by competing socialization agents, such as societal organizations or the family. In the next section, we outline how these various competing processes lead to long-term attitudinal legacies.
Reinforcement Mechanism: What Affects the Longevity of This Effect?
The previous section focused on the initial development of political attitudes and behavior and the impact of authoritarian regimes on this process. The question now is whether the beliefs instilled by the regime persist even after the regime is overthrown and replaced by a democracy or another type of dictatorship. At the most basic level, the answer to this question depends on the relative importance of early versus adult socialization. If, in line with much of the literature on political socialization (Neundorf et al., 2013), initial attitudes and behavior remain quite stable in later life, then individuals who spent their "impressionable years" under authoritarianism should be expected to continue reflecting these patterns well after the authoritarian regimes have fallen. However, if political socialization continues into adulthood (Pop-Eleches & Tucker, 2017) or if initial attitudes were less central and therefore weaker, then political attitudes can be updated when new information becomes available (Zaller, 1992). Under such circumstances, authoritarian attitudinal legacies may have short half-lives, as attitudes and behaviors increasingly reflect the new post-authoritarian political reality.
Based on our competing socialization framework, we postulate that two factors are key in explaining the longevity of authoritarian legacies: the initial strength of these effects and the influence of new information. We expect the strength of the regime's initial attitudinal influence to depend on the nature of competing socialization efforts. If an individual is exposed to strong regime indoctrination, which is reinforced by participation in regime-supporting organizations, such as political parties, as well as a regime-loyal family, we expect them to strongly internalize the regime's ideology and political identities. Such individuals are unlikely to accept new contradictory information to update their political attitudes and behavior. In this case, we expect to see a strong initial authoritarian legacy effect. The opposite should be true for individuals exposed to competing socialization, which would undermine the regime indoctrination.
The second factor, which affects the longevity of authoritarian legacies, relates to the nature of the new information environment. To the extent that the socialization approach of the new regime is not radically different (e.g., because of high elite continuity despite a nominal change in regime), we should expect the initial attitudinal imprint of the previous authoritarian regime to be highly resilient. However, if the new regime espouses radically different political values than its predecessor, such as in the case of East European countries embracing free markets and liberal democracy after the fall of communism, then how individuals' political attitudes evolve in response to these conflicting socialization projects is much more uncertain. In part, this trajectory will depend on the initial socialization success of the former authoritarian regime (as discussed above). However, it also matters how effective the new regime is in pursuing its alternative socialization project, which in turn depends both on how committed the new political elite is to this new ideological project and on how the economic and political performance of the new regime compares to that of its predecessor. 8 Thus, we argue that the competing socialization framework can be fruitfully applied to studying how post-authoritarian developments affect the durability of authoritarian legacies.
Empirical evidence on the longevity of attitudinal legacies is still scarce (but see Pop-Eleches & Tucker, 2017, pp. 247-281). Two of the contributions to the special issue (Dinas & Northmore-Ball, 2020; Pop-Eleches & Tucker, 2020) highlight important variations in the temporal persistence of authoritarian legacies. However, our understanding of what drives these variations is limited. To move this literature forward, we need to pay greater attention to the institutions and social practices that either reproduce or undercut the attitudinal and behavioral patterns from the authoritarian period. One obvious example in this respect is the persistence of authoritarian successor parties, such as the more or less reformed communist parties of Eastern Europe during the early transition period (Grzymala-Busse, 2002; Kitschelt et al., 1999; Kitschelt & Smyth, 2002).
More broadly, one would expect the public discourse surrounding the authoritarian legacy, as well as the overall mnemonic regime (Bernhard & Kubik, 2016), to be influenced by the degree of elite turnover in both economic and political institutions. From this perspective, we might expect variations in transitional justice and lustration programs to shape the extent to which authoritarian attitudinal legacies are preserved and reproduced (Capoccia & Pop-Eleches, 2020). Relatedly, building on our discussion above, future research could engage more systematically with the question of how various post-authoritarian developments, such as institutional reforms or economic and political crises, interact with authoritarian legacies, either undercutting or reinforcing them. 9 For example, it would be important to establish to what extent the persistence of antidemocratic attitudes in the former communist countries is driven by the effective indoctrination of the communist regimes as opposed to the widespread and systematic shortcomings of post-communist "democratic" governance.
Explaining Heterogeneity in Authoritarian Attitudinal Legacies
In this section, we apply our new theoretical framework to explain one of the crucial features of authoritarian attitudinal legacies, which is reflected both in the contributions to this special issue and in some of the earlier literature: their remarkable heterogeneity across regimes, groups, and individuals. We show that the competing socialization framework offers useful analytical tools to understand why regime indoctrination efforts sometimes achieve effective internalization of authoritarian beliefs and attitudes, whereas at other times they are either ineffective (disengagement) or even trigger the opposite effect (rejection).
Legacy Differences Across Regimes: The Impact of Authoritarian Socialization Strategies
Within our competing socialization framework, an important potential driver of the significant variations in authoritarian attitudinal legacies is a factor that has received insufficient attention in prior work: the substantial differences in the strategies of indoctrination and political control across different types of authoritarian regimes. 10 Although this limitation was largely due to the fact that most existing studies focus either on individual countries 11 or on particular types of authoritarian regimes, 12 it nevertheless means that the literature in this area has largely failed to take advantage of the significant advances in the study of authoritarian regime varieties, which largely focuses on institutions and elites (Svolik, 2012). This is potentially an important omission, both because it raises questions about the scope conditions of earlier findings and because it risks treating authoritarian regimes as black boxes, thereby undermining the search for causal mechanisms.
Although most autocracies try to shape the political attitudes and behavior of their subjects, they differ significantly in the methods they use to achieve this goal. To understand the variation in attitudinal legacies that results from these different approaches, it is important to discuss the different tools that regimes use to achieve compliance by ordinary citizens.
First, dictatorships use coercion to control citizens (Linz, 2000). Repression can be applied in hard form, which usually includes political killings, torture, and imprisonment. Hard repression has been shown to be counterproductive and to lead to a rejection of the regime and its principles (Rozenas & Zhukov, 2019). However, subtler forms of repression, which mainly involve restrictions on civil liberties (e.g., freedom of assembly, religion, or movement), have been shown to be more effective in preserving the legitimacy and stability of regimes (Escribà-Folch, 2013).
Second, many authoritarian regimes use carrots, often in the form of private or public goods provision, to buy off the population in exchange for loyalty, as part of an authoritarian bargain "by which citizens relinquish political rights for economic security" (De Mesquita & Smith, 2010; Desai et al., 2009, p. 93). This "authoritarian contract" can be targeted narrowly to specific groups, or it can take the form of universal public goods provision.
A third, and largely understudied, tool used by regimes is indoctrination. Here, we define indoctrination as the deliberate inculcation of a doctrine that legitimizes the regime's existence and actions, which in its most advanced form consists of a set of ideological principles (Brandenberger, 2011, p. 7). The ultimate goal of indoctrination is to instill diffuse system support for the authoritarian regime (Easton, 1965). Indoctrination promotes a single worldview through various channels, such as control over the media and the use of propaganda (Adler, 2012; Chen & Xu, 2015), mass organizations and culture (Linz, 2000), but most importantly through the educational system (Cantoni et al., 2017; Dinas & Northmore-Ball, 2020).
The literature on authoritarian legacies has not paid much attention to these varying regime tools and the potential heterogeneity in the extent to which they succeed in molding citizens' political outlook. We expect the effects and long-term legacies of autocratic regimes to be shaped by the tool(s) they use to gain compliance from their citizens. To explore whether authoritarian regimes indeed differ in how they use different tools to manage their citizens, we turn to the Varieties of Democracy (V-Dem) data, which compile expert survey data on 180 countries from 1900 to today (Coppedge et al., 2018). In Figure 1, we contrast how democracies and autocracies 13 vary in their use of (a) hard repression (e.g., torture and political killings), 14 (b) private civil liberties, as a measure of soft repression, 15 (c) public goods provision, 16 and (d) freedom of expression and the use of alternative, nongovernmentally controlled information, as a measure of indoctrination. 17 As argued above, all of these measures directly affect ordinary citizens living in these regimes.
As Figure 1 reveals, autocracies vary significantly more than democracies in the tools they use to manage their citizens. The density functions for all four indicators are nearly uniform for dictatorships, which indicates that autocracies are fairly evenly split between regimes that use hard repression and those that do not. The same pattern emerges for the other tools: autocracies vary greatly in their use of soft repression of civil liberties (even though no such regime provides full liberties), in the extent to which they provide public goods, and in whether they allow freedom of expression and a free media. 18 In contrast, democracies rarely use hard repression, mainly provide public goods, and respect civil liberties and media freedom.
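The core contrast that Figure 1 draws, greater dispersion of regime-management indicators among autocracies than among democracies, can be illustrated with a small simulation. This is only a sketch on synthetic data: the actual analysis uses the expert-coded V-Dem indices, and the variable names and distributions below are hypothetical stand-ins, not the real measures.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic 0-1 scores for a single indicator (e.g., hard repression),
# mimicking the pattern described for Figure 1: autocracy scores spread
# across the whole range, democracy scores clustered near the low end.
autocracies = rng.uniform(0.0, 1.0, size=1_000)
democracies = np.clip(rng.normal(loc=0.05, scale=0.05, size=1_000), 0.0, 1.0)

def dispersion(scores):
    """Standard deviation as a simple summary of how varied
    regimes are in their use of a given tool."""
    return float(np.std(scores))

# Autocracies should show far greater variation in tool use.
assert dispersion(autocracies) > 2 * dispersion(democracies)
print(f"autocracy SD: {dispersion(autocracies):.3f}, "
      f"democracy SD: {dispersion(democracies):.3f}")
```

A full replication would instead load the V-Dem country-year panel, split it by regime type, and plot kernel density estimates per indicator, but the dispersion comparison above captures the substantive claim.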
The crucial question for our purposes is how these variations in authoritarian strategies affect the political attitudes of citizens both during and after authoritarian rule. The papers in this special issue highlight a few important variations in this sense: Dinas and Northmore-Ball (2020) show that right-wing dictatorships were more likely to provoke ideological backlash than their left-wing counterparts, and that while indoctrination efforts were generally effective in shaping ideological preferences, greater use of repression was counterproductive. Neundorf et al. (2020) show that economically and politically inclusive regimes were more likely to inculcate antidemocratic preferences than more exclusive autocracies, which concentrated benefits on narrower parts of society. Finally, Pop-Eleches and Tucker (2020) show that even among the communist regimes of the Soviet bloc, the attitudinal impact was stronger for hard-line regimes than for their more ideologically flexible counterparts.
Explaining Individual-and Group-Level Heterogeneity: The Role of Competing Socialization
Just as the previous section argued that institutional differences between types of authoritarian regimes can have a significant impact on the nature of attitudinal legacies, in this section we tackle another set of theoretically important sources of heterogeneity. In particular, we address the question of whether and why authoritarian regimes may have different, and possibly diametrically opposed, effects on different individuals as a function of the context in which these individuals experience the authoritarian regime. As we show below, these differences are driven both by regime strategies, that is, how the regime chooses to try to influence different types of social groups, and by constraints on the regime's ability to implement these strategies, for example, when regime socialization efforts clash with alternative modes of socialization, such as from families or churches.
Although authoritarian regimes often try to remake the societies over which they rule to facilitate more effective societal control, even the most ambitious, sustained, and murderous attempts along these lines (such as communist collectivization) have not succeeded in creating completely uniform societies (see, for example, Lankina & Libman, 2019). Such societal heterogeneity is bound to interact with even the most top-down authoritarian political projects, and we therefore expect it to moderate the attitudinal and behavioral consequences of these regimes.
The first source for such societal heterogeneity comes from variations in how different groups fit into the political project of the authoritarian regime. Although all political systems create winners and losers, the magnitude of these gains/losses is often amplified in authoritarian regimes. This means that we should expect the political message of authoritarian regimes to resonate better, and therefore leave a greater attitudinal imprint, among individuals and groups who benefit from the regime, while the effects should be weaker or even reversed among marginalized/excluded groups. An illustration of this pattern is provided by two of the contributions to this special issue, focusing on religious, ethnic, and social groups (Neundorf et al., 2020;Pop-Eleches & Tucker, 2020). Both papers find that religious individuals, who were disadvantaged and sometimes actively persecuted by communist regimes, displayed noticeably weaker communist legacy effects than their nonreligious counterparts.
A second source of societal heterogeneity is rooted in the differential reach of authoritarian regimes in different parts of society. Sometimes these differences reflect the ideological and strategic priorities of authoritarian regimes; 19 at other times they may simply reflect limitations in state capacity, which undermine the ability of such regimes to project their political message with equal intensity to all of their subjects and should result in differential attitudinal and behavioral effects for different groups. This expectation is confirmed by Pop-Eleches and Tucker's (2020) contribution to this special issue, which shows that communist exposure effects were weaker among rural residents in countries with limited collectivization of agriculture, that is, in a sector of society where the presence of the communist party state was much more limited.
Third, certain individual characteristics can strengthen or weaken regime indoctrination efforts. Education appears to play an interesting role here. For example, Croke et al. (2016) demonstrate that more educated citizens in contemporary Zimbabwe are more likely to be critical of the regime and less likely to participate in elections. In contrast, Tertytchnaya (2020), in this special issue, shows that more educated citizens are less likely to disengage from politics in authoritarian regimes, but (at least in the Russian case) this engagement did not translate into either higher or lower opposition support. This suggests that we may expect to see both stronger indoctrination and stronger resistance among educated citizens, which in turn raises interesting questions for future research about the institutional features of education systems that may help explain the nature of the overall effect.
Additional Applications of the Competing Socialization Framework
The discussion so far has illustrated how our theoretical conception of authoritarian legacies as the product of competing socialization efforts helps explain two important types of legacy heterogeneity, which are highlighted by the contributions to this special issue: variations across different types of authoritarian regimes and across different individuals and social groups within particular regimes. In this final section, we briefly discuss how the framework can be fruitfully applied to at least two other types of heterogeneity, which were less central for the articles in this special issue but are an important part of the attitudinal legacies research agenda.
Legacy Variety Across Issue Areas
Another understudied but potentially promising research area would be to take advantage of the analytical potential inherent in the heterogeneity of legacy effects across different issue areas. Given the analytical complexity of studying attitudinal legacies, it is perhaps not surprising that individual studies have focused on a single outcome or a small set of related outcomes. One exception is the book by Pop-Eleches and Tucker (2017), who find that communist legacies were stronger and more durable in some areas (support for welfare) than in others (gender equality), and who speculate that these differences are driven by variations in the ideological centrality and consistency with which the regime pursued particular aspects of indoctrination. Although this explanation is plausible, our competing socialization framework suggests an alternative (though not necessarily mutually exclusive) theoretical explanation, which instead highlights the strength of nonregime socialization efforts. From this perspective, the relative weakness of gender equality legacies could be due to the fact that these efforts were vigorously resisted by conservative churches or family structures, while communist efforts to expand the welfare state met no comparable resistance.
Therefore, we expect that studying within-regime legacy variations across different types of political attitudes or behavior could be a fruitful way of disentangling the mechanisms through which even small variations in regime strategies and societal responses can lead to very different long-term outcomes.
The Role of Pre-Authoritarian Legacies
The competing socialization framework also provides a useful perspective to understand how pre-authoritarian historical legacies may shape authoritarian socialization efforts. Although several previous studies have shown how communist socialization was undermined by pre-communist education (Darden & Grzymala-Busse, 2006; Neundorf, 2010; Pop-Eleches & Tucker, 2017), this question has not received much attention outside of the East European context. From a competing socialization perspective, such pre-authoritarian legacies can be understood as shaping the ability of societal actors (such as churches, civil society organizations, or families) to promote resistance to regime indoctrination by offering more (or less) resilient alternative socialization projects.
Conclusion
This essay has identified several potentially fruitful areas for future research as we move toward the second generation of studies on the attitudinal impact of authoritarian regimes. In particular, we introduced a new theoretical framework of competing socialization, which provides an overarching lens for studying attitudinal and behavioral legacies of authoritarian regimes. We started from the premise that regime efforts to shape their citizens can either succeed, leading to an internalization of the regime's doctrine, or provoke resistance to these efforts. The reasons why regimes are more or less successful in indoctrinating their citizens rest, first, on the argument that they are not the only forces that shape citizens. Other, potentially competing, socialization agents, such as political and societal organizations as well as individuals' families and peers, also influence the development and updating of political attitudes and behavior. Second, regimes vary in the degree and capability of their indoctrination efforts and sometimes rely on counterproductive measures such as repression, which can undermine authoritarian socialization. Finally, we argued that varying legacy effects are driven by individual heterogeneity, with some social groups being more or less receptive to the regime.
The contributions to this special issue highlight some of the exciting research opportunities inherent in a more systematic engagement with the complexities of authoritarian rule, but there is obviously much more that remains to be done if we want to shed light on the black box of authoritarian politics and the legacies these regimes leave on individual political attitudes and behavior. We hope that the competing socialization framework, which we have proposed in this introductory essay, provides a starting point for a more systematic analysis of the theoretical underpinnings of the heterogeneity of authoritarian attitudinal and behavioral legacies. Viewing heterogeneous legacies through the prism of competing socialization projects should help researchers move from the important first step of documenting legacy heterogeneity to identifying the causal mechanisms that produce these patterns.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The research of this article was generously funded by the UK Economic and Social Research Council (ESRC)-Secondary Data Analysis Initiative Project: "The Legacy of Authoritarian Regimes on Democratic Citizenship" (code: ES/N012127/1).
ORCID iDs
Anja Neundorf https://orcid.org/0000-0002-1294-6771
Grigore Pop-Eleches https://orcid.org/0000-0003-3570-6233
Notes
1. In 2017, 24% of the world's population lived in closed authoritarian regimes and another 24% in electoral authoritarian regimes (authors' calculations based on data from V-Dem and the World Bank).
2. We do not claim that 90% of all people today directly experienced dictatorships, as younger generations were born after many of these countries democratized. But, as we argue below, the legacy of these regimes is often evident in these societies long after the regimes were overthrown and can still be transmitted to younger generations via socialization by parents who lived through the dictatorship.
3. See inter alia, Crawford and Lijphart (1997), Kitschelt (2000), Kitschelt et al. (1999), Kitschelt and Smyth (2002) (2011), Tertytchnaya and Lankina (2020), and Treisman (2011).
5. Public opinion data in authoritarian regimes might be problematic because citizens falsify their true preferences out of fear of repression (Kuran, 1997; Tannenberg, 2017).
6. For example, Neundorf (2010) has shown for Central Eastern Europe that people who grew up under communism are more skeptical of democracy. In contrast, Mattes and Bratton (2007) demonstrate that generations that grew up under autocracies in Africa are more positive toward democracy than younger generations. These contradictory results for different parts of the world have so far not been consolidated.
7. There are some exceptions, where autocracies voluntarily passed on the responsibility for the education system to other actors. For example, Franco in Spain passed responsibility for school education to the Catholic Church (Domke, 2011; Pinto, 2004). In this case, we would expect regime socialization to be less successful, leading to lower levels of authoritarian attitudinal legacies. However, this argument has never been empirically tested.
8. For example, the failure of economic and political liberalism in Russia and the former Soviet Union was at least partially due to the abysmal economic performance during the early post-communist period (Whitefield & Evans, 1994).
9. For a preliminary step in this direction, see the discussion in Pop-Eleches and Tucker (2017, pp. 273-278) about the interaction between communist legacies and post-communist economic growth and inequality trajectories in driving opposition to markets and support for the welfare state.
10. Some partial exceptions include Bernhard and Karakoç (2007), who distinguish between totalitarian (i.e., communist and fascist) and other authoritarian regimes, and Pop-Eleches and Tucker (2017), who distinguish between four different types of communist regimes on the basis of their use of repression and their degree of ideological orthodoxy.
11. See, for example, Alesina and Fuchs-Schündeln (2007) and Lupu and Peisakhin (2017).
12. Much of the cross-national work has focused on the legacies of the communist regimes of Eastern Europe and the former Soviet Union (see Wittenberg, 2015, for an overview).
13. We distinguish between democracy and autocracy using the regime classification provided by V-Dem (v2x_regime), which is based on the overall competitiveness of access to power as well as liberal principles (Coppedge et al., 2018, p. 219).
14. The index (v2x_clphy) is provided by V-Dem and is formed by taking the point estimates from a Bayesian factor analysis model of the indicators freedom from torture and freedom from political killings.
15. The index (v2x_clpriv) is provided by V-Dem and is formed by point estimates drawn from a Bayesian factor analysis model including the following indicators: property rights for men/women, freedom from forced labor for men/women, freedom of religion, religious organization repression, freedom of foreign movement, and freedom of domestic movement for men/women.
16. The item (v2dlencmps) is based on the question about the profile of social and infrastructural spending in the national budget and how "particularistic" or "public goods" expenditures are.
17. The index (v2x_freexp_altinf) is provided by V-Dem and is formed by taking the point estimates from a Bayesian factor analysis model of the indicators for media censorship effort, harassment of journalists, media bias, media self-censorship, print/broadcast media critical, and print/broadcast media perspectives, freedom of discussion for men/women, and freedom of academic and cultural expression.
18. However, Figure 1 also shows that the full extent of free media is not achieved in any autocracies, which is not surprising (and reassuring from a measurement perspective).
19. See, for example, Jowitt's (1992) discussion of the much greater reach of Leninist regimes in the urban and industrial sectors than in the agrarian sector.
Aptamer-Based Cancer Cell Analysis and Treatment
Abstract Aptamers are a class of single‐stranded DNA or RNA oligonucleotides that can exclusively bind to various targets with high affinity and selectivity. Regarded as “chemical antibodies”, aptamers possess several intrinsic advantages, including easy synthesis, convenient modification, high programmability, and good biocompatibility. In recent decades, many studies have demonstrated the superiority of aptamers as molecular tools for various biological applications, particularly in the area of cancer theranostics. In this review, we focus on recent progress in developing aptamer‐based strategies for the precise analysis and treatment of cancer cells.
Introduction
Cancer, a chronic disease with a high mortality rate, is a serious public health problem. [1] Although rapid advances in molecular biotechnology and chemical biology have led to significant progress in the area of cancer theranostics, a series of challenges remains, owing to the inherent heterogeneity and complexity of cancer cells. [2] Thus, to achieve early diagnosis and precise therapy, the development of high-performance recognition tools for cancer cells and cancer-related biomarkers is urgently required.
In recent decades, a diverse array of molecular recognition tools, including antibodies, peptides, and nucleic acids, have been widely exploited, laying an essential foundation for cancer diagnosis and therapy. [3] Nucleic acid aptamers are single-stranded DNA/RNA oligonucleotides, approximately 25-80 nucleotides in length, that bind their targets by folding into specific secondary/tertiary conformations. [4] The concept of aptamers was first reported by Ellington et al. in 1990; aptamers are screened through a repetitive process known as the systematic evolution of ligands by exponential enrichment (SELEX). [5] With rapid advances in SELEX technology, aptamers against various targets, such as small molecules, peptides, and proteins, have been generated. [6] In particular, Tan et al. used intact living cells as the target to develop the Cell-SELEX technology and successfully selected many cell type-specific aptamers, such as sgc8, specific for the acute lymphoblastic leukemia CCRF-CEM cell line, [7] XQ-2d for pancreatic ductal adenocarcinoma PL45 cells, [8] and TD05 for Ramos cells. [9] Taking advantage of their high specificity, facile synthesis, convenient modification, and high programmability, aptamers have been widely used as reliable recognition ligands in biosensing, bioimaging, and bioregulation. [10] In addition, their integration with therapeutic modules, such as small-molecule drugs, peptides, and nucleic acid drugs, has attracted broad interest for cancer-targeted treatment. [11] Meanwhile, some aptamers can even serve as therapeutics themselves. As a typical example, the first aptamer drug, Macugen® (pegaptanib), was approved by the US Food and Drug Administration (FDA) for the treatment of wet age-related macular degeneration, and several other aptamer therapeutics have undergone clinical trials.
[12] While several aptamer-related review papers have been published to date, [4,11a,13] in the current manuscript we mainly focus on recent research progress of aptamer-based strategies in precise cancer analysis and targeted therapy.
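The iterative enrichment at the heart of SELEX can be illustrated with a toy simulation in which each round retains the best-binding fraction of a pool and re-amplifies it. The candidate "sequences" and their affinities below are purely hypothetical, not real chemistry:

```python
import random

def selex_round(pool, affinity, retain_frac=0.2, pool_size=1000, rng=None):
    """One toy SELEX round: keep the best-binding fraction, then re-amplify."""
    rng = rng or random.Random(0)
    # Selection: rank candidates by affinity and keep the top fraction.
    survivors = sorted(pool, key=affinity, reverse=True)[: int(len(pool) * retain_frac)]
    # Amplification: resample survivors back up to the working pool size.
    return [rng.choice(survivors) for _ in range(pool_size)]

# Hypothetical candidates: integers whose "affinity" is simply their value.
rng = random.Random(42)
pool = [rng.randint(0, 100) for _ in range(1000)]
affinity = lambda s: s
initial_mean = sum(pool) / len(pool)

for _ in range(5):  # repeated rounds enrich high-affinity binders
    pool = selex_round(pool, affinity, rng=rng)

mean_affinity = sum(pool) / len(pool)
```

After a handful of rounds, the mean affinity of the pool rises sharply, mirroring how exponential enrichment concentrates the rare high-affinity binders.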
Aptamer-based recognition and capture of cancer cells
The specific recognition of cancer cells is essential for cancer diagnosis, therapy, and prognosis. Typically, circulating tumor cells (CTCs), which are shed from the primary tumor into the vasculature, provide an attractive index for evaluating tumor progression. [14] However, because of the extraordinarily low concentration (nearly 1-10 CTCs per mL of whole blood) and high heterogeneity of CTCs, their capture and detection remain a technical challenge. [15] To overcome this issue, aptamers, which exhibit high affinity and high specificity against surface biomarkers of CTCs, have been used as targeting ligands. [16] For example, Zeng et al. reported a cancer cell-activatable probe by conjugating aptamers with paired fluorochrome-quencher molecules for a one-step assay of CTCs. [17] This aptamer probe could be specifically internalized by CTCs overexpressing CD30 receptors and was then degraded in lysosomes to separate the fluorophore-quencher pair, leading to fluorescence restoration. This aptamer reporter could selectively detect CTCs in whole blood and marrow aspirate samples of patients with lymphoma tumors. To improve the capture efficiency and detection sensitivity of CTCs, Ding et al. designed a magnetic nanoplatform by integrating multivalent aptamer-functionalized Ag2S nanodots with hybrid cell membrane-coated magnetic nanoparticles (HM-Fe3O4@SiO2/Tetra-DNA-Ag2S nanoplatform, Figure 1A). [18] Owing to the multivalent aptamer modification and magnetic isolation, this nanoplatform could specifically and efficiently identify target cancer cells, and a high cell capture efficiency of over 90% was obtained in whole blood samples.
While the magnetic nanoplatform could effectively enrich CTCs, its application was time-consuming and required a large volume of blood samples. Microfluidics, which processes small-volume fluids with high throughput, automation, and multiplexing, has shown great potential in the detection and isolation of CTCs. [19] Zhao et al. developed a microfluidic assay using rationally designed aptamer cocktails with synergistic effects (Figure 1B). [20] By combining these with a microfluidic chip embedded with a silicon nanowire substrate (SiNS), enhanced and specific capture of CTCs from non-small cell lung cancer (NSCLC) patient samples was achieved.
To further reduce non-specific cellular adsorption, Zhang et al. synthesized leukocyte membrane-coated magnetic nanoclusters and then decorated them with the aptamer SYL3C. [21] By loading these biomimetic nanoparticles into a microfluidic system (Figure 1C), more than 90% of rare EpCAM-positive tumor cells could be isolated from a whole blood sample and directly counted online based on fluorescence imaging within 20 min, indicating good promise for the high-performance analysis of CTCs in complex biological samples.
Aptamer-based membrane protein imaging and regulation
Membrane proteins play pivotal roles in many biological processes, such as cell migration, signaling, and communication. [22] The abnormal expression and oligomerization of membrane proteins are closely associated with the occurrence of many diseases. [23] Developing methods for the visualization and manipulation of disease-related membrane proteins could provide important information in the context of cell biology, disease theranostics, and drug discovery. [24] Aptamers with excellent molecular recognition ability have been widely applied for the specific imaging of membrane proteins. [25] For example, Tan et al. synthesized an sgc8 probe labeled with a fluorescein isothiocyanate dye to map the cellular density and distribution of protein tyrosine kinase-7, which is regarded as a potential biomarker of cancer cells. [26] To image target membrane proteins at the single-molecule level, Albertazzi et al. constructed a DNA-based point accumulation approach for imaging by nanoscale topography (DNA-PAINT). [27] Using aptamers with tunable affinity, they achieved single-molecule tracking of epidermal growth factor receptor (EGFR) on the cell surface (Figure 2A). They also showed that PAINT could be exploited for mapping the distribution and density of EGFR in different cancer cell lines without genetic and/or chemical modification. In addition, to realize the sensitive detection of low-abundance membrane proteins, Tang et al. developed an aptamer-based imaging strategy in combination with rolling circle amplification (RCA), which allowed the signal-amplified visualization of low-abundance EpCAM on MCF-7 cells during the epithelial-mesenchymal transition. [28] Due to the extraordinary heterogeneity and complexity of cells, it is rather difficult to achieve accurate cellular identification via single-marker detection. [29] To address this issue, multiplex analysis of membrane proteins holds great promise for improving the accuracy of cell recognition.
For example, a probe functionalized with several different aptamers (TD05, Sgc8, and Sgd5) enabled cancer cells to be precisely classified according to their specific molecular signature patterns. [30b] To further realize the intelligent typing of cancer cells, aptamer-based DNA circuits, which enable computation over multiple membrane proteins, have also been developed. [31] As a typical example, You et al. developed an aptamer-encoded Boolean logic circuit that allowed programmable and higher-order profiling of multiple co-existing cell-surface markers based on AND, OR, and NOT logic gates. [32] Besides, to further improve the accuracy of DNA computation, Chang et al. introduced a hybridization chain reaction (HCR) to develop multiple aptamer-based AND logic circuits and achieved sensitive detection of CCRF-CEM cells. [33] In addition to the expression level of cell membrane proteins, their oligomerization state is also related to cellular function and behavior. [34] Dynamically monitoring the oligomerization of surface proteins would therefore be beneficial for understanding their biological roles. [35] By combining aptamer-based molecular recognition and proximity-induced DNA assembly, Liang et al. proposed an aptamer-based method for dynamically imaging the dimerization process of the mesenchymal-epithelial transition (Met) receptor. [36] To further improve the reliability and efficiency of protein monitoring, tetrahedral DNA nanostructures were introduced to prepare a proximity-induced fluorescence resonance energy transfer (FRET) nanoplatform for visualizing the dimerization of Met (Figure 2B). [37] With enhanced biostability, this DNA tetrahedron-based nanoprobe enabled imaging of Met receptor dimers in nude mice.
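The logic-gated marker profiling that such DNA circuits implement can be sketched in software. The marker names and the particular AND/NOT gate below are illustrative, not taken from the cited studies:

```python
def logic_gate_classify(markers):
    """Toy AND/NOT circuit over surface-marker presence.

    Fires (returns True) only for cells displaying both hypothetical
    tumor markers A and B while lacking healthy-cell marker C.
    """
    return (markers.get("A", False)
            and markers.get("B", False)
            and not markers.get("C", False))

# Hypothetical marker profiles of three cells.
tumor_like = {"A": True, "B": True, "C": False}
healthy_like = {"A": True, "B": True, "C": True}
partial = {"A": True, "B": False, "C": False}
```

The point of such a gate is that no single marker decides the call: only the full pattern (A AND B AND NOT C) triggers the output, which is what makes multi-aptamer circuits more accurate than single-marker probes.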
Manipulating the behavior of membrane receptors in living cells offers a valuable strategy for studying their biological function. By combining the advantages of aptamer-based molecular recognition and DNA-based receptor assembly, Yang et al. recently reported a non-genetic approach to realize the dynamic dimerization of Met receptors and thus the modulation of corresponding signal transduction events (Figure 2C). [38] They designed aptamers as robotic arms that captured target receptors (c-Met and CD71), and DNA logic components as computer processors to handle multiple inputs. Based on DNA assembly, c-Met and CD71 were brought into close proximity, interfering with the ligand-receptor interactions of c-Met and thus inhibiting the related biofunctions. Besides, Han et al. developed a bispecific aptamer chimera and proved its good performance for triggering the lysosomal degradation of therapeutically relevant proteins (e.g., the Met receptor) through specific interaction with the lysosome-shuttling receptor IGF-IIR (Figure 2D), indicating an attractive strategy for disease therapy. [39]
Aptamer-based probes for bioimaging at subcellular organelles
Eukaryotic cells contain many organelles, and these are compartmentalized through individual membranes. [40] Since many biochemical processes occur in specific organelles, their disorganization is associated with many diseases, such as neurological diseases, diabetes, and cancer. [41] Molecular imaging at the subcellular level would be beneficial for the study of complex biological systems and accurate disease diagnosis. [42] Lysosomes, as one of the key cellular organelles, contain approximately 50 different degradative enzymes that are active at acidic pH levels, and abnormal acidification is closely associated with reduced digestion ability, autophagy blockage, and storage disorders. [43] To realize dynamic imaging of the lysosomal environment, Du et al. designed a lysosome-targeting framework nucleic acid (FNA) nanodevice by incorporating an i-motif and an adenosine triphosphate (ATP)-binding aptamer (ABA) into a DNA triangular prism ( Figure 3A). [44] After entering into the lysosomal compartment of cancer cells, this nanodevice would undergo a conformational switch to generate fluorescence output for indicating the abnormal level of pH and ATP.
The nucleus is one of the most important cellular organelles and plays a critical role in maintaining the integrity and expression of genes. [45] Aptamer AS1411 can specifically bind nucleolin, which is generally overexpressed on the membrane of cancer cells and can translocate between the cell membrane and the nucleus. [46] For example, Li et al. integrated AS1411 with poly-cytosine as a scaffold to synthesize AS1411-functionalized silver nanoclusters (Ag NCs) for nuclear staining. [47] In addition, Shen et al. used the Cell-SELEX technology to select a nucleus-targeted aptamer, Ch4-1 (Figure 3B). [48] They demonstrated that this aptamer probe exhibited a high affinity (Kd = 6.65 ± 3.40 nM) to nucleoproteins and could be applied for distinguishing dead cells from live cells.
Aptamer-based fluorescent probes have also been applied to monitor other subcellular microenvironments. For example, to synchronously map the activity of nitric oxide synthase 3 (NOS3) at the plasma membrane and in the trans-Golgi network (TGN), Jani et al. designed a fluorescent DNA probe by conjugating a nitric oxide (NO)-sensitive fluorophore with aptamers specific for either the plasma membrane or the Golgi apparatus. [49] Because of their subcellular targetability and sensitivity to NO, these probes allowed simultaneous measurement of NOS3 activity in these two organelles. They also used this imaging platform to investigate the selective regulators of NOS3 in different compartments. To image mitochondrial ATP in living cells, Wang et al. developed a ratiometric fluorescent DNA nanostructure (RFDN) based on the hybridization chain reaction (HCR) and split aptamers (Figure 3C). [50] The RFDN was easily designed and assembled by HCR, showed good biocompatibility, and responded reliably to mitochondrial ATP.
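The ratiometric principle behind probes such as the RFDN is that analyte level is read from the ratio of two emission channels, which is largely insensitive to probe concentration. A minimal readout sketch follows; the calibration points (ratio versus mM ATP) are hypothetical, not measured values:

```python
def ratiometric_estimate(i_acceptor, i_donor, calib):
    """Estimate analyte concentration from an acceptor/donor intensity
    ratio by linear interpolation on a (ratio, concentration) table."""
    r = i_acceptor / i_donor
    pts = sorted(calib)
    # Clamp readings outside the calibrated range.
    if r <= pts[0][0]:
        return pts[0][1]
    if r >= pts[-1][0]:
        return pts[-1][1]
    for (r0, c0), (r1, c1) in zip(pts, pts[1:]):
        if r0 <= r <= r1:
            return c0 + (c1 - c0) * (r - r0) / (r1 - r0)

calib = [(0.2, 0.0), (0.6, 1.0), (1.0, 2.0)]  # ratio -> mM ATP (hypothetical)
est = ratiometric_estimate(80.0, 100.0, calib)  # ratio 0.8 -> 1.5 mM
```

Because both channels scale with the amount of probe delivered to the mitochondria, their ratio depends only on the analyte, which is the design rationale for ratiometric sensing.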
Aptamer-based Cancer Therapies
In addition to accurate diagnosis, effective treatment of cancer cells is another critical issue in the area of cancer research. The development of precise cancer treatments with minimal side effects is urgently required. In recent decades, aptamers have been widely exploited for developing cancer-targeted therapeutic systems. [51]
Aptamer-small molecule drug conjugates
It is well known that traditional chemotherapeutic drugs can cause serious side effects owing to their low selectivity. To overcome this problem, cancer-targeted drug delivery systems have been proposed [52] and have achieved significant progress in chemotherapy. [53] For example, by conjugating doxorubicin (Dox) with aptamer sgc8c, Huang et al. developed the first aptamer-drug conjugate (ApDC). [54] They demonstrated that this ApDC could specifically kill target acute lymphoblastic leukemia CCRF-CEM cells with limited influence on non-targeted cells. In addition, the effective release of molecular drugs from conjugates was expected to improve their therapeutic efficacy. In this context, Yang et al. used 4-nitrophenyl-4-(2-pyridyldithio)benzyl carbonate (NPDBC) and 4-nitrophenyl-2-(2-pyridyldithio)ethyl carbonate (NPDEC) as tumor microenvironment-responsive linkers to bridge an aptamer with mitomycin C (MMC) (Figure 4A). [55] They demonstrated that this ApDC could specifically accumulate in the tumor region, where the MMC module could be effectively released owing to the reductive microenvironment of tumors, thereby leading to an enhanced drug activity compared with that of the stable conjugation design. Meanwhile, to address the heterogeneity issue of cancer cells, Zhu et al. developed a bispecific aptamer probe by connecting aptamers sgc8c and sgd5a. [56] They demonstrated that this bispecific aptamer possessed a broader recognition capability for cancer cells than the individual monovalent probes. In addition, to enhance the biostability and drug loading capability, Deng et al. proposed a polymer-based approach by conjugating cell-targeting aptamers with a water-soluble polymer prodrug that contained a reductive environment-sensitive prodrug and a biocompatible brush skeleton. [57] This high-payload aptamer poly(prodrug) conjugate (ApPDC) showed prolonged circulation time and enhanced therapeutic efficacy. In another design (Figure 4B), [58] a drug module was automatically and efficiently conjugated with an aptamer at predesigned positions through solid-phase synthesis technology. Biological studies showed that these ApDCs could maintain specific recognition ability and induce cytotoxicity against tumor cells.
Aptamer-conjugated biomacromolecular therapeutics
In addition to chemotherapeutic drugs, nucleic acids and peptides have emerged as biomacromolecular therapeutics with high specificity. [11c] As a typical example, interfering RNAs (iRNAs), which can specifically silence disease-related genes, have attracted increasing interest in cancer therapy. [59] Meanwhile, the lack of cell specificity has impeded the full potential of iRNA-based therapy in the clinic. To address this issue, McNamara et al. developed aptamer-conjugated siRNAs that enabled the specific delivery of siRNA therapeutics to target cancer cells. [60] By using an anti-PSMA aptamer as the targeting ligand, the designed siRNA could effectively reduce tumor volume in prostate cancer xenograft models with limited systemic side effects. Besides, to improve the cellular internalization efficiency, Yoo et al. developed multivalent comb-type aptamer-siRNA conjugates (Comb-Apt-siR) by combining chemical coupling and DNA hybridization. [61] Their results proved that the cellular internalization efficiency of Comb-Apt-siR could be improved through the clustering effect.
Peptides, too, are attractive therapeutic biomacromolecules. [13b,62] As a typical example, Tan et al. designed anti-MUC1 aptamer-conjugated peptides (ApPCs), which specifically target heat shock protein 70 (HSP70), as targeted chemical sensitizers to overcome the poor cellular permeability of peptides; the ApPCs showed significant protein inhibition and chemical sensitization (Figure 4C). [63] They also found that Dox could be loaded onto the ApPCs, which then acted as both a targeted sensitizer and an anticancer agent for the treatment of drug-resistant breast cancer cells. This technology enabled targeted peptides and Dox to be delivered to MCF-7/ADR drug-resistant breast cancer cells, thereby enhancing tumor growth inhibition in vivo and significantly reducing side effects.
Aptamer-conjugated nanotherapeutics
Nanoparticles have shown the potential to encapsulate and deliver anticancer drugs to tumors with enhanced efficiency. [64] Even with the enhanced permeability and retention (EPR) effect in the tumor region, the functionalization of nanoparticles with targeting ligands against cancer cells is still an intensively investigated topic. [65] To date, many aptamer-functionalized nanoparticles have been developed. [66] For example, Cao et al. proposed aptamer-conjugated liposomes encapsulating cisplatin and successfully applied them for effective cancer treatment. [67] In addition, Kim et al. developed a doxorubicin (Dox)-encapsulated liposome functionalized with two types of aptamers separately targeting mucin 1 (MUC1) and the CD44 antigen. [68] This dual-aptamer-conjugated liposome achieved higher cytotoxicity against cancer stem cells than liposomes without or with only a single aptamer (Figure 5A).
In addition to soft nanomaterials, rigid inorganic nanoparticles possess unique material- and size-dependent physical properties, such as optical, electrical, and magnetic properties, [69] which make them promising candidates as nanocarriers for targeted drug delivery. Jo et al. developed aptamer-modified gold nanostars (AuNSs) and proved their potential for the effective eradication of prostate cancer cells based on the photothermal effect. [70] In addition, Jamileh et al. constructed anti-MUC1 aptamer-modified PEG-AuNPs loaded with paclitaxel (PTX) for synergistic therapy (Figure 5B). [71] The PTX-loaded PEG-AuNPs@antiMUC1 system could specifically bind to MUC1-positive cancer cells, achieving effective cell killing in combination with NIR irradiation.
Despite these advantages, NPs themselves cannot be used for intelligent regulation and precise drug delivery. As an alternative, structural DNA nanotechnology with programmable self-assembly and spatial addressing capabilities has shown great potential for precise cancer therapy. [72] For example, Wu et al. constructed an acrydite-modified DNA nanoassembly loaded with multi-drug-resistance antisense (MDR1-AS) oligonucleotides and functionalized with different aptamers (Sgc8 and KK1B10) to target leukemia cells. [73] The assembly was capable of targeted drug delivery and of silencing the drug-resistance P-gp protein in CCRF-CEM and K562/D cells. Moreover, to explore novel activatable theranostic agents with robust in vivo applicability, Lei et al. designed a multivalent activatable aptamer probe (NTri-SAAP) on a robust Dox-functionalized DNA nanotriangle scaffold, which combined the advantages of programmable self-assembly, the multivalent effect, and a target-activatable architecture. [74] This NTri-SAAP showed good in vitro and in vivo performance for the treatment of leukemia.
In order to achieve precise and effective treatment, Wang et al. constructed a second-order DNA logic-gated nanorobot (DLGN) that could be anchored on the cell membranes to load multiple aptamers and therapeutics ( Figure 5C). [75] This DLGN allowed accurate differentiation among five different cell lines, and then triggered synergistic killing of target cancer cells.
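Conceptually, this kind of multi-aptamer discrimination amounts to matching a cell's binding pattern against reference signatures, one per cell line. A toy sketch with made-up signatures (the aptamer panel and line names are illustrative, not from the cited work):

```python
def classify_by_signature(profile, signatures):
    """Assign a cell to the reference line whose aptamer-binding
    signature it matches best (fewest mismatches)."""
    def mismatches(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(signatures, key=lambda name: mismatches(profile, signatures[name]))

# Binding pattern of three hypothetical aptamers (1 = binds, 0 = does not).
signatures = {
    "line1": (1, 0, 0),
    "line2": (1, 1, 0),
    "line3": (0, 1, 1),
    "line4": (1, 0, 1),
    "line5": (0, 0, 1),
}
label = classify_by_signature((1, 1, 0), signatures)  # matches line2 exactly
```

With three binary markers, up to eight distinct patterns are available, which is why a small aptamer panel suffices to separate five cell lines as long as their signatures differ.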
Aptamer-based cancer immunotherapy
Cancer immunotherapy attacks cancer cells by boosting the immune system, which can avoid the side effects associated with damage to normal cells. [76] In recent decades, immunotherapy has attracted increasing attention and has gradually become an extensive research topic in the area of cancer treatment. [77] Many immunotherapeutic strategies, such as immune checkpoint blockade (ICB) and adoptive T cell immunotherapy, have been designed and applied to inhibit tumor growth. For example, Gao et al. successfully isolated an anti-PD-1 aptamer, PD4S, using the Cell-SELEX procedure (Figure 6A) and verified its good performance in rescuing PD-1/PD-L1-induced T cell exhaustion. [78] Moreover, Du et al. prepared a highly stable multifunctional aptamer that could block the CTLA-4/B7 and PD-1/PD-L1 signaling pathways, achieving enhanced anti-tumor immunity against liver tumors. [79] To reduce the toxic side effects of ICB-based therapy, Yang et al. proposed an aptamer-based logic operation to achieve effective and sustained immune checkpoint blockade treatment by chemically modifying anti-PD-L1 aptamers on the cell surface with limited internalization and detachment (Figure 6B). [80] Their results indicated that the incorporation of DNA logic gates could improve the accuracy and robustness of ICB therapy.
Adoptive T cell immunotherapy, such as chimeric antigen receptor (CAR) T cell therapy, has proven highly effective in the treatment of hematologic malignancies. In a related direction, an aptamer-based "recognition-then-activation" strategy was reported, in which naive T cells could be specifically recruited by cancer cells and then activated in situ to effectively kill the cancer cells (Figure 6C). [81] This strategy was a universal and economical approach, providing a prototype of off-the-shelf T cells for cancer immunotherapy. Moreover, Zhang et al. proposed an aptamer-equipped strategy to generate specific, universal, and permeable natural killer (NK) cells for enhanced adoptive immunotherapy in solid tumors (Figure 6D). [82] In this case, the NK cells were chemically equipped with two aptamers for targeting HepG2 cells and membrane PD-L1. These dual aptamer-equipped NK cells exhibited a high specificity toward tumor cells, which led to enhanced cytokine secretion as well as apoptosis/necrosis compared to parental or single aptamer-equipped NK cells.
Conclusion and Outlook
Aptamers, as reliable recognition units, have been incorporated into various emerging devices for biosensing, imaging, and bioregulation due to their high programmability, affinity, and selectivity. In recent years, rapid progress has been made in the fields of aptamer-based devices and aptamer-conjugated drugs for accurate cancer detection and targeted treatment. However, the development and application of aptamers are still in their initial stages, and many challenges remain to be solved.
First, in aptamer-mediated therapy, natural nucleic acids have limited biological stability and a short half-life in vivo. While advances in nucleic acid chemistry and biotechnology have produced major breakthroughs to circumvent these challenges, there is still much room for improvement. Second, the number of available aptamers is still very limited. Thus, new strategies that enable the selection of good-performance aptamers with high efficiency would be desirable. Third, many aptamer-drug conjugates have failed to realize their potential in clinical studies, suggesting that challenges remain in the rational design of such conjugates, including improving drug delivery capacity and efficiency in targeting established oncogenes. Overall, while aptamers have shown great promise in the areas of bioanalysis and cancer therapy, intensive efforts are still required in this exciting research field, and future studies should focus on the design of novel configuration-activated aptamer probes for recognition in complex environments.
Data Availability Statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Linear series with $\rho<0$ via thrifty lego-building
The moduli space $\mathcal{G}^r_{g,d} \to \mathcal{M}_g$ parameterizing algebraic curves with a linear series of degree $d$ and rank $r$ has expected relative dimension $\rho = g - (r+1)(g-d+r)$. Classical Brill-Noether theory concerns the case $\rho \geq 0$; we consider the non-surjective case $\rho<0$. We prove the existence of components of this moduli space with the expected relative dimension when $0>\rho \geq -g+3$, or $0>\rho \geq -C_r g + \mathcal{O}(g^{5/6})$, where $C_r$ is a constant depending on the rank of the linear series such that $C_r \to 3$ as $r \to \infty$. These results are proved via a two-marked-point generalization suitable for inductive arguments, and the regeneration theorem for limit linear series.
Introduction
Brill-Noether theory of algebraic curves can be understood, borrowing an analogy from [Har09], as "representation theory for curves:" given an abstract curve $C$ and integers $r, d$, how can the curve $C$ be mapped to $\mathbb{P}^r$ by a degree $d$ map? The first question one might ask is: how plentiful are these maps? That is, what is the dimension of their parameter space? The central objects of Brill-Noether theory are two parameter spaces: $G^r_d(C)$, which parameterizes linear series $(L, V)$ such that $\deg L = d$ and $\dim \mathbb{P}V = r$, and $W^r_d(C)$, the image of $G^r_d(C) \to \operatorname{Pic}^d(C)$. Let $g$ be the genus of $C$. Given the numbers $g, r, d$, the Brill-Noether number is $\rho = \rho(g, r, d) = g - (r+1)(g-d+r)$.
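As a quick numerical companion to this definition (my own illustration, not part of the paper), the following sketch computes $\rho$; it recovers, for instance, $\rho(9,3,8) = -7$, the value appearing in the discussion of octic space curves in Section 1.2.

```python
def rho(g, r, d):
    """Brill-Noether number rho(g, r, d) = g - (r+1)(g-d+r)."""
    return g - (r + 1) * (g - d + r)

# A general genus-g curve carries a g^r_d iff rho >= 0; e.g. every
# genus-4 curve admits a degree-3 map to P^1 (rho = 0), while octic
# space curves of genus 9 lie far below the surface (rho = -7).
print(rho(4, 1, 3))  # 0
print(rho(9, 3, 8))  # -7
```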
The Brill-Noether theorem answers the "how plentiful" question when $C$ is a general curve: $\dim G^r_d(C) = \rho$ unless $\rho < 0$, in which case $G^r_d(C)$ is empty. As one would hope, the same can be said of $W^r_d(C)$, except that when $\rho > g$, i.e. $g-d+r < 0$, $W^r_d(C)$ is all of $\operatorname{Pic}^d(C)$. This may also be phrased globally: these parameter spaces globalize to moduli spaces $G^r_{g,d} \to \mathcal{M}_g$ and $W^r_{g,d} \to \mathcal{M}_g$, and the Brill-Noether theorem says that $G^r_{g,d}$ has an irreducible component (in fact, a unique one, by another theorem) that surjects onto $\mathcal{M}_g$ if and only if $\rho \geq 0$, and that component has relative dimension exactly $\rho$. This paper proves the following extensions to $\rho < 0$.
Theorem A. Let $g, r, d$ be nonnegative integers, with $r+1, g-d+r \geq 2$. If $\rho(g,r,d) \geq -g+3$, then $G^r_{g,d}$ has a component of relative dimension $\rho$ and generic fiber dimension $\max\{0, \rho\}$ over $\mathcal{M}_g$.
Lemma 9.1 gives the explicit bound underlying Theorem B without asymptotic notation. We work over an algebraically closed field $F$. Except in Sections 3 through 6, and in particular in the main theorems, we assume $\operatorname{char} F = 0$. In Sections 3 through 6, statements assuming $\operatorname{char} F = 0$ are labeled. A "curve" is always assumed to be reduced, connected and proper. A "scheme" is always assumed to be finite-type over $F$, and by a "point" of a scheme we mean an $F$-point.
1.1. Some remarks on the main theorems. In both theorems, when $\rho \leq 0$ the desired components are finite over $\mathcal{M}_g$, from which it follows that a generic element is a complete linear series and $G^r_{g,d} \to W^r_{g,d}$ is a local isomorphism (at least set-theoretically). So we can and will focus our attention on $G^r_{g,d}$ in this paper, except on a few occasions. A virtue of $W^r_{g,d}$ is that we have an isomorphism $W^r_{g,d} \xrightarrow{\sim} W^{g-d+r-1}_{g,2g-2-d}$ from Serre duality. This shows that we may swap the roles of $r+1$ and $g-d+r$ if we wish, and in particular assume without loss of generality that $r+1 \leq g-d+r$, i.e. $d \leq g-1$, in our arguments. In particular, the hypothesis $d \leq g-1$ in Theorem B is harmless; it merely simplifies the constant in the bound. The bound $r+1, g-d+r \geq 2$ excludes the trivial case $r = 0$ and the dual case $g-d+r-1 = 0$.
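The duality just described can be checked numerically: under the substitution $r' = g-d+r-1$, $d' = 2g-2-d$, the Brill-Noether number is unchanged, since the factors $r+1$ and $g-d+r$ simply trade places. A throwaway script (my addition) confirming this on random inputs:

```python
import random

def rho(g, r, d):
    return g - (r + 1) * (g - d + r)

# Serre duality swaps (r+1, g-d+r) -> (g-d+r, r+1):
# W^r_{g,d} is isomorphic to W^{g-d+r-1}_{g, 2g-2-d}.
for _ in range(1000):
    g = random.randint(2, 50)
    d = random.randint(0, g - 1)
    r = random.randint(1, 5)
    r2, d2 = g - d + r - 1, 2 * g - 2 - d
    assert rho(g, r, d) == rho(g, r2, d2)
print("rho is invariant under Serre duality")
```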
Note that $\varepsilon_{r+1} \to 0$ as $r \to \infty$, and $\varepsilon_{r+1} \leq 1$ for $r \geq 5$, so Theorem B proves the existence of linear series with $r \geq 5$, $d \leq g-1$, and $\rho \geq -2g + o(g)$. For $r \equiv 3 \pmod{4}$, $\varepsilon_{r+1} = 0$ and the bound is $\rho \geq -3g + o(g)$, which is asymptotically optimal since $\dim \mathcal{M}_g = 3g-3$. Evidently Theorem B is much stronger than Theorem A for large $g$, but the error bound is formidable enough in low genus that Theorem A is stronger for $g \leq 1875$; see Remark 9.2.
These moduli spaces $G^r_{g,d}$, $W^r_{g,d}$ are stacks, but the morphisms are representable by schemes. Concretely, this means that for any family $\mathcal{C} \to S$ of smooth curves over a base scheme $S$, we obtain schemes $G^r_d(\mathcal{C}) \to S$ and $W^r_d(\mathcal{C}) \to S$, which we will call relative Brill-Noether schemes, that are compatible with base change; see [ACG11, §XXI]. We use the language of stacks mainly for linguistic convenience; everything we say can be formulated in scheme-theoretic terms via relative Brill-Noether schemes, and in fact we will be able to work almost exclusively set-theoretically since we are concerned with dimension statements.
The Brill-Noether number $\rho$ is a lower bound on the local relative dimension of $G^r_d(\mathcal{C}) \to S$ at every point. Here, by local relative dimension for a point $x$ in an $S$-scheme $f: G \to S$ we simply mean $\dim_x G - \dim_{f(x)} S$. We do not assume $f$ is surjective, so negative values are meaningful. Local relative dimension is preserved by smooth (in particular, étale) base change, so we can also define local relative dimension for $G^r_{g,d} \to \mathcal{M}_g$ and $W^r_{g,d} \to \mathcal{M}_g$ by choosing any versal deformation. By the relative dimension of an irreducible component, we mean the generic local relative dimension.
Local relative dimension can increase under base change from one smooth base scheme to another, but it cannot decrease. Any deformation $\mathcal{C} \to S$ is étale-locally a pullback from a versal deformation, so if $\mathcal{L}$ is a linear series at which the local relative dimension of $G^r_d(\mathcal{C}) \to S$ is $\rho$ and $S$ is smooth, then the same is true for a versal deformation, and therefore for $G^r_{g,d} \to \mathcal{M}_g$. So in practice we will verify that $G^r_{g,d}$ has local relative dimension $\rho$ at a given point by checking that the local relative dimension is $\rho$ in a relative Brill-Noether scheme for a conveniently chosen not-necessarily-versal deformation over a smooth base. The same remark holds mutatis mutandis when finding local relative dimension in $W^r_{g,d}$ or the other moduli spaces considered in this paper.

1.2. Background. Brill-Noether theory for $\rho < 0$, which I like to call "underwater Brill-Noether theory" (because the flora and fauna become more mysterious and hard to observe the deeper $\rho$ goes), encompasses several related questions. Theorems A and B address Question 1.1.

Question 1.1. For which $g, r, d$ such that $\rho < 0$ does $G^r_{g,d}$ (or equivalently, $W^r_{g,d}$) have an irreducible component of relative dimension exactly $\rho$ that is generically finite over $\mathcal{M}_g$?
Of course, one may ask a more demanding question.

Question 1.2. For which $g, r, d$ such that $\rho < 0$ is $G^r_{g,d}$ (or equivalently, $W^r_{g,d}$) equidimensional of relative dimension $\rho$ and generically finite over $\mathcal{M}_g$? When is it irreducible?
Unfortunately, the methods of this paper are not applicable to irreducibility questions, since they are local in nature. Very little is known about Question 1.2. It is known that $G^r_{g,d}$ is irreducible when $\rho = -1$ [EH89], and equidimensional when $\rho = -2$ [Edi93].
Call a linear series very ample if the map $C \to \mathbb{P}V^\vee$ is an embedding. One may wish to consider only very ample linear series, and work in the Hilbert scheme of smooth, non-degenerate curves of degree $d$ and genus $g$ in $\mathbb{P}^r$. Denote this by $H^r_{g,d}$; it has expected dimension (cf. [HM98, §1E]) $h(g,r,d) = \rho(g,r,d) + \dim \mathcal{M}_g + \dim \operatorname{Aut} \mathbb{P}^r = (r+1)d - (r-3)(g-1)$.
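The closed form for $h(g,r,d)$ follows by expanding $\rho + (3g-3) + (r^2+2r)$; a small script (my addition, not the paper's) confirming that the two expressions agree, and recovering $h(9,3,8)=32$ as quoted below:

```python
def rho(g, r, d):
    return g - (r + 1) * (g - d + r)

def h_expected(g, r, d):
    """Expected dimension of the Hilbert scheme H^r_{g,d}."""
    return (r + 1) * d - (r - 3) * (g - 1)

# h = rho + dim M_g + dim Aut P^r, with dim M_g = 3g - 3 and
# dim Aut P^r = dim PGL_{r+1} = r^2 + 2r.
for g in range(2, 30):
    for r in range(1, 8):
        for d in range(0, 2 * g):
            assert h_expected(g, r, d) == rho(g, r, d) + (3 * g - 3) + (r * r + 2 * r)
print(h_expected(9, 3, 8))  # 32
```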
Following [Par89], a component of $H^r_{g,d}$ is called regular if $h^1(C, N_C) = 0$ for a general curve $C$ from it, and is said to have the expected number of moduli if the image in $\mathcal{M}_g$ has codimension $\max\{0, -\rho(g,r,d)\}$. A general curve in a regular component of $H^r_{g,d}$ [...]. A nice, if dated, discussion of these and similar questions, with examples, can be found in [Har82]; see also [HM98, §1E]. In 1982, Harris described the situation for $\rho < 0$ as "truly uncharted waters" [Har82, p. 71], observing that there is such a wide range of observed behavior, and such a paucity of general principles, that the situation defies conjecture. However, one pattern that does seem to emerge is that the wildest, most alien creatures in these mysterious waters live deep below the surface, where $\rho < -g + C$ for some constant $C$ (or perhaps some function $C(g) = o(g)$).
Indeed, the simplest case where Question 1.3 has a negative answer is $H^3_{9,8}$. An octic curve of genus 9 in $\mathbb{P}^3$ is necessarily a complete intersection of a quadric and a quartic surface, and an elementary dimension count shows that $H^3_{9,8}$ is irreducible with $\dim H^3_{9,8} = 33$; this is just larger than the expected $h(9,3,8) = 32$. In this case, $\rho = -7 = -g+2$. So at the depth $\rho = -g+2$ we encounter a slightly over-sized octicpus. This does not immediately imply that $G^3_{9,8}$ has no components of expected relative dimension, since there might be components consisting entirely of linear series giving maps to $\mathbb{P}^3$ factoring through a lower-degree curve. Nonetheless, the fact that Theorem A reaches $\rho \geq -g+3$ and this exceptional behavior is found at $\rho = -g+2$ feels eerily compelling. Theorem B reaches far beyond this depth; one could imagine that this means one of two things: either the dimensionally proper components it identifies live in their deep waters alongside wild and mysterious components of far larger dimension, or the threshold where the wild behavior occurs descends below $\rho \approx -g$ as $g$ increases.
Remark 1.4. The example $H^3_{9,8}$ above is a simple case of a Hilbert scheme of Castelnuovo curves, which provide a large class of dimensionally improper linear series. For any $r \geq 2$, $d \geq 2r-1$, define $m = \lfloor \frac{d-1}{r-1} \rfloor$, $\varepsilon = d-1-m(r-1)$, and $g = \binom{m}{2}(r-1) + m\varepsilon$. This is the maximum genus of a smooth, non-degenerate curve of degree $d$ in $\mathbb{P}^r$, and such curves are called Castelnuovo curves. By the dimension count in e.g. [Cil87], $H^r_{g,d}$ has a component of dimension $g + 2m + \varepsilon + d - r - 3 + \dim \operatorname{Aut} \mathbb{P}^r$, whose image is a component in $G^r_{g,d}$ of dimension $g + 2m + \varepsilon + d - r - 3$. If we fix $r$ and let $d$ tend to infinity, we may write $m = O(\sqrt{g})$, $d = O(\sqrt{g})$, so this component has dimension $g + O(\sqrt{g})$, i.e. relative dimension $-2g + O(\sqrt{g})$ over $\mathcal{M}_g$, while $\rho(g,r,d) = -rg + O(\sqrt{g})$. So for $r \geq 3$, these components are dimensionally improper.
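The Castelnuovo numerology can be checked by hand in the octic example: for $r=3$, $d=8$ one gets $m=3$, $\varepsilon=1$, maximum genus $9$, and a component of $H^3_{9,8}$ of dimension $33$, matching the count above. A small helper (my naming, not the paper's) performing this arithmetic:

```python
from math import comb

def castelnuovo(r, d):
    """Castelnuovo numerology for degree-d curves in P^r (r >= 2, d >= 2r-1)."""
    m = (d - 1) // (r - 1)
    eps = d - 1 - m * (r - 1)
    g = comb(m, 2) * (r - 1) + m * eps  # maximum genus
    # Hilbert scheme component dimension: g + 2m + eps + d - r - 3 + dim Aut P^r
    hilb_dim = g + 2 * m + eps + d - r - 3 + (r * r + 2 * r)
    return m, eps, g, hilb_dim

# The octicpus: octic space curves (r=3, d=8) have maximum genus 9,
# and the Castelnuovo component of H^3_{9,8} has dimension 33.
print(castelnuovo(3, 8))  # (3, 1, 9, 33)
```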
Remark 1.5. A large collection of examples of non-dimensionally-proper components of $G^r_{g,d}$ can also be gleaned from recent developments in Hurwitz-Brill-Noether theory. These developments have many connections to the present work, and suggest a Hurwitz space analog of Question 1.1; we summarize this in Appendix A.
Several authors have charted more of these waters, particularly Question 1.3 (which then gives some answers to Question 1.1 as well). The cases $r = 1$ and $r = 2$ are completely understood via the Hurwitz scheme and Severi variety, and results of Segre, Arbarello-Cornalba, Sernesi, and Harris. In the case $r = 3$, Pareschi [Par89] answered Question 1.3 in an asymptotically optimal way; his result provides the "thriftiest lego bricks" we could hope for and is the essential input in our proof of the asymptotic Theorem B. Strong results for $r \geq 4$ were later provided by Lopez [Lop91, Lop99]. These are summarized in Section 2. Recently, Ballico [Bal21] has proved the existence of regular components with the expected number of moduli in a further range of cases. Other results on similar lines include [BE88].
As far as I am aware, however, an argument was never published.
In an unpublished preprint [Pfl13], I proved results on Question 1.1 in a range somewhat smaller than what is proved in Theorem A. I must sheepishly admit that I never published it because I intended all these years to strengthen its results before resubmitting it. This intent has only now been realized by the present paper. I urge any graduate students or young researchers reading this paper to not follow this example when preparing your thesis work for publication. Since that preprint has received some citations, I have decided to leave it "in amber" on the arXiv rather than updating it with the new content of this paper.
Work on similar problems to those discussed above includes [Far03] on regular components of moduli of stable maps to products of projective spaces, [BBF14] on generalizations to nodal curves, and [KKL19] on the problem of curves rigid in moduli.
1.3. Dimensionally proper points and threshold genera. We will see that Question 1.1 is conveniently studied by obtaining bounds on threshold genera, which we now define.
Definition 1.6. A linear series $\mathcal{L}$ of degree $d$ and rank $r$ on a smooth curve $C$ of genus $g$ is called dimensionally proper if $G^r_{g,d}$ has local relative dimension $\rho$ at $(C, \mathcal{L})$ and $G^r_d(C)$ has local dimension $\max\{0, \rho\}$ at $\mathcal{L}$. A line bundle $L$ on a curve $C$ is called dimensionally proper if its complete linear series is dimensionally proper in the sense above.
Definition 1.7. Let $a, b$ be positive integers. The threshold genus of the $a \times b$ rectangle, denoted $\operatorname{tg}(a \times b)$, is the minimum genus $g_0$ such that, for all genera $g \geq g_0$, there exists a dimensionally proper line bundle $L$ on a genus $g$ curve $C$ such that $h^0(C, L) = a$ and $h^1(C, L) = b$. Equivalently, $\operatorname{tg}(a \times b)$ is the minimum $g_0$ such that $G^{a-1}_{g,g+a-b-1}$ has dimensionally proper points for all $g \geq g_0$.
The Brill-Noether theorem guarantees that $\operatorname{tg}(a \times b)$ is well defined and $\operatorname{tg}(a \times b) \leq ab$. The function $\operatorname{tg}$ has a symmetry property due to Serre duality: $\operatorname{tg}(a \times b) = \operatorname{tg}(b \times a)$.

Figure 1. An illustration of the proof of the subadditivity theorem. A point of $G^{\beta/\alpha}_{g_1}$ and a point of $G^{\gamma/\beta}_{g_2}$ are used to construct a limit linear series on a nodal curve of genus $g_1 + g_2$. This curve is smoothed to obtain a point of $G^{\gamma/\alpha}_{g_1+g_2}$.
Results addressing Question 1.1 are often conveniently formulated as bounds on tg(a × b). Indeed, Theorems A and B, respectively, will be proved by first proving the following inequalities.
Therefore the rest of the paper will investigate Question 1.1 via bounds on threshold genera tg(a×b), and a flexible generalization of them to skew Young diagrams.
1.4. Subadditivity and thrifty lego-building. The engine driving this paper is a subadditivity theorem derived from the theory of limit linear series and proved in Section 5. This subadditivity allows us to leverage existing results from many different sources to obtain new bounds. The first form of subadditivity concerns the threshold genus of rectangles, where it may be stated as $\operatorname{tg}(a \times (b_1+b_2)) \leq \operatorname{tg}(a \times b_1) + \operatorname{tg}(a \times b_2)$. More generally, the notion of threshold genera generalizes in a natural way to fixed-height skew shapes $\beta/\alpha$, as defined in Section 3, and we prove subadditivity in this more general context, as an inequality $\operatorname{tg}(\gamma/\alpha) \leq \operatorname{tg}(\gamma/\beta) + \operatorname{tg}(\beta/\alpha)$ (Theorem 5.1). An illustration of the proof of subadditivity is shown in Figure 1; the notation in the caption and the proof itself are in Section 5. The reader may understand this via a cheesy analogy: the threshold genus of a fixed-height skew shape tells the cost of building an object constructed out of little square bricks (the boxes of the skew Young diagram), and subadditivity indicates that you can assemble this object from multiple pieces, paying for the parts in each piece separately. Our goal, then, in proving Theorems A and B, is to identify particularly inexpensive ways to construct rectangles; we refer to this process as "thrifty lego-building." See e.g. Figure 6 for the way we thriftily build rectangles for Theorem A.
In the proofs of both theorems, we will use existing results to obtain bounds on certain smaller skew shapes that are assembled into rectangles. To stretch the lego analogy a bit further, we visit the toy store and find good deals on a few boxed sets that will then be put together to cheaply construct rectangles. The proof of Theorem A involves much more intricate lego-building, using results of Komeda [Kom91] on Weierstrass points and a careful analysis of skew shapes of threshold genus 1. The proof of Theorem B, by contrast, uses only a very simple sort of lego-building: many thin rectangles are stacked up into a thicker one. The great deal used in that proof is provided by Pareschi [Par89], which gives an asymptotically optimal bound on $\operatorname{tg}(4 \times b)$. The disadvantage of this much simpler lego-building is of course the somewhat complicated error bounds. Subadditivity of threshold genus implies that the limit $\lim_{b \to \infty} \operatorname{tg}(a \times b)/b$ exists, and is equal to $\inf_{b \geq 1} \frac{\operatorname{tg}(a \times b)}{b}$.
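The existence of this limit is an instance of Fekete's subadditive lemma. As an illustration (my own, using the closed form $\operatorname{tg}(2 \times b) = b+1$ established in Section 2.2), the ratio $\operatorname{tg}(2 \times b)/b$ decreases monotonically toward its infimum 1:

```python
# Fekete's lemma: if f(b1 + b2) <= f(b1) + f(b2) for all b1, b2,
# then f(b)/b converges to inf_b f(b)/b.
# Illustrated here with f(b) = tg(2 x b) = b + 1.
def tg_2xb(b):
    return b + 1

# subadditivity check
for b1 in range(1, 50):
    for b2 in range(1, 50):
        assert tg_2xb(b1 + b2) <= tg_2xb(b1) + tg_2xb(b2)

ratios = [tg_2xb(b) / b for b in range(1, 1000)]
assert all(x >= y for x, y in zip(ratios, ratios[1:]))  # decreasing
print(ratios[-1])  # close to the infimum, which is 1
```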
Equation (3) and the fact that $\rho(g,r,d) \geq -\dim \mathcal{M}_g$ whenever $G^r_{g,d}$ has dimensionally proper components imply bounds on these limits. Conjecturally, the bound $\rho \geq -\dim \mathcal{M}_g$ never holds with equality for $g > 0$; this is a folklore conjecture sometimes called the rigid curves conjecture, and it has recently been verified for $r = 3$ and in a restricted range of cases with $r \geq 4$ by Keem, Kim, and Lopez [KKL19]. A natural question, in light of this conjecture, is: how close can you get? Finally, we note that one unsatisfying aspect of our main results is that it is not clear whether the components produced consist of very ample linear series.
Question 1.10. Do the irreducible components produced in the proofs of Theorems A and B correspond to regular components of $H^r_{g,d}$ with the expected number of moduli? In other words, do they provide answers to Question 1.3 as well?
Our method does not provide information about this question, because our subadditivity theorem does not say anything about whether the resulting linear series are very ample. It is possible that the subadditivity theorem can be strengthened to include this information after adding some additional hypotheses, but new ideas are required.
1.6. A note on the characteristic 0 hypothesis. The hypothesis $\operatorname{char} F = 0$ is used in two essential ways. The first is that we build on results with that hypothesis: the proof of Theorem A uses the results of [Kom91], and the proof of Theorem B uses results of [Par89], both of which assume $\operatorname{char} F = 0$. I do not know for certain whether those results, or at least the parts needed for our application, can be extended to characteristic $p$. Secondly, we use the fact that, in characteristic 0, any linear series has finitely many ramification points (see e.g. [ACGH85, p. 39]).
I strongly suspect that a very slightly weakened version of Theorem A can be proved in characteristic $p$ by a similar strategy, replacing the use of Komeda's theorem with a different bound valid in all characteristics. The proof of Theorem B, however, cannot easily be extended to arbitrary characteristic, since it depends indispensably on [Par89].
Sections 3 through 6 do not assume that $\operatorname{char} F = 0$ except where stated otherwise, so that the results therein can be used in any future work addressing the situation in characteristic $p$. For such applications, it is probably necessary to modify the definition of $\operatorname{tg}(a \times b)$ to consider only the open locus of linear series that have unramified points. Unfortunately, the symmetry of $\operatorname{tg}(a \times b)$ in $a$ and $b$ is then lost, since this property need not be preserved under Serre duality. Our notion of threshold genus of skew shapes, however, requires no modification in positive characteristic.
Threshold genera for a ≤ 4
This section surveys some previously known results about Question 1.1, mostly via work on Question 1.3, formulated as bounds on threshold genera. In light of the symmetry in a, b, we will assume a ≤ b in this discussion. We begin by surveying results from the literature that show the following asymptotic facts, explained in the next several subsections.
In particular, this confirms that the first four answers to Question 1.8 are "1."

2.1. The case a = 1.
Proof. Suppose $\operatorname{tg}(a \times b) \leq g \leq ab$, and $L$ is a dimensionally proper line bundle on a genus-$g$ curve $C$ with $h^0(C,L) = a$ and $h^1(C,L) = b$. Clifford's theorem and Riemann-Roch imply the claimed bound.

2.2. The case a = 2. The case $a = 2$ concerns the geometry of $G^1_{g,d}$. This case can be analyzed via the Hurwitz space $\mathcal{H}_{d,g}$ of degree-$d$ branched covers $f: C \to \mathbb{P}^1$ from a genus $g$ curve, which can be identified (set-theoretically) with the open subspace in $G^1_{g,d}$ of basepoint-free series. The forgetful map $\mathcal{H}_{d,g} \to \mathcal{M}_g$ is generically finite; this was proved by Segre [Seg28], and a modern treatment and strengthening is in [AC81]. The space $\mathcal{H}_{d,g}$ is nonempty if and only if $d \geq 2$, which is equivalent to $\rho(g,1,d) \geq -g+2$, and if so it is irreducible of dimension $2g+2d-5 = \rho(g,1,d) + \dim \mathcal{M}_g$. So the closure of the basepoint-free locus gives a dimensionally proper component of $G^1_{g,d}$ provided that $\rho(g,1,d) \geq -g+2$. Thus $\operatorname{tg}(2 \times b) \leq b+1$. Lemma 2.1 gives the reverse inequality, hence $\operatorname{tg}(2 \times b) = b+1$.

2.3. The case a = 3. The case $a = 3$ concerns $G^2_{g,d}$, which may be studied using the geometry of Severi varieties, and in particular using a theorem of Sernesi [Ser84]. We continue to assume $a \leq b$, i.e. $b \geq 3$. Sernesi considered the Severi variety $V_{d,g}$ of irreducible nodal plane curves of degree $d$ and geometric genus $g$, and proved that it is nonempty and irreducible of the expected dimension when $d \geq 5$ and $d-2 \leq g \leq \binom{d-1}{2}$. Up to the action of $\operatorname{PGL}_3$, $V_{d,g}$ can be identified with the open locus in $G^2_{g,d}$ of linear series giving immersions in $\mathbb{P}^2$ with nodal images, so this shows that the closure of this locus is an irreducible component of the expected dimension. Therefore this is a dimensionally proper component of $G^2_{g,d}$ when $d \geq 5$ and $d-2 \leq g \leq \binom{d-1}{2}$. Translated into our terms, suppose that we have fixed $a = 3 \leq b$, and let $r = 2$, $d = g+a-b-1 = g+2-b$. We want to know the minimum $g$ such that the bounds $g \geq b+3$ and $g-b \leq g \leq \binom{g+1-b}{2}$ hold. By elementary algebra, that minimum is $b + \lceil \frac{1}{2} + \sqrt{2b+\frac{1}{4}} \rceil$, hence $\operatorname{tg}(3 \times b) \leq b + \lceil \frac{1}{2} + \sqrt{2b+\frac{1}{4}} \rceil$. Further algebra shows that $\frac{1}{2} + \sqrt{2b+\frac{1}{4}} \leq \frac{1}{2}b + \frac{3}{2}$ for all $b \geq 3$; equality holds for $b = 3$.
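The elementary algebra in the $a = 3$ case can be verified by brute force: the condition $g \leq \binom{g+1-b}{2}$ unwinds, with $x = g-b$, to $x^2 - x \geq 2b$. A throwaway check (my helper names) that the closed form matches a direct search, and that the stated inequality holds with equality exactly at $b = 3$:

```python
import math

def min_genus_bruteforce(b):
    """Smallest g with g >= b + 3 and g <= C(g+1-b, 2),
    i.e. the bound on tg(3 x b) extracted from Sernesi's theorem."""
    g = b + 3
    while 2 * g > (g + 1 - b) * (g - b):
        g += 1
    return g

def min_genus_formula(b):
    # b + ceil(1/2 + sqrt(2b + 1/4)): smallest integer x with x^2 - x >= 2b
    x = 1
    while x * x - x < 2 * b:
        x += 1
    return b + x

for b in range(3, 200):
    assert min_genus_bruteforce(b) == min_genus_formula(b)
    # 1/2 + sqrt(2b + 1/4) <= b/2 + 3/2, with equality at b = 3
    assert 0.5 + math.sqrt(2 * b + 0.25) <= 0.5 * b + 1.5 + 1e-9

print(min_genus_formula(3))  # 6
```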
Since $\frac{1}{2}b + \frac{3}{2} \leq \frac{1}{2}b + 2$, Inequality (9) implies Inequality (2) in the case $a = 3$.

2.4. The case a = 4. In the case $a = 4$, an asymptotically precise description of $\operatorname{tg}(4 \times b)$ follows from work of Pareschi [Par89]. Pareschi considers the Hilbert scheme $H^3_{g,d}$, i.e. it addresses Question 1.3. Pareschi's theorem says that $H^3_{g,d}$ has a regular component with the expected number of moduli, and therefore gives a bound on $\operatorname{tg}(4 \times b)$.

Remark 2.2. Our bound from Theorem A for this case is $\operatorname{tg}(4 \times b) \leq 2b+2$, which is obviously much weaker than Equation (13) for large $b$. Nonetheless, Theorem A is still stronger in reasonably high genus: the explicit bound in Equation (12) is greater than $2b+2$ for all $b \leq 50$, and is equal to $2b+2$ for $51 \leq b \leq 53$. So even in genus $g = 102$, Theorem A gives a stronger bound on $\operatorname{tg}(4 \times b)$.
2.5. The cases a ≥ 5. In the cases $a \geq 5$, i.e. $r \geq 4$, strong asymptotic results about Question 1.3 have been obtained by Lopez [Lop99], building on [Lop91]. Some of these results are summarized in asymptotic form in the table below; see [Lop99] for precise bounds.

[Table: rank | sufficient bound on $\rho$ | bound on threshold genus, for $a \geq 8$]

Note in particular that Lopez's result for $r = 4$ gives the same asymptotic as Theorem B, while a stronger asymptotic is obtained in Theorem B for $r \geq 5$. Like the work of Pareschi, these results concern regular components of the Hilbert scheme with the expected number of moduli, so they are stronger than merely bounds on $\operatorname{tg}(a \times b)$, and remain the strongest results on Question 1.3.
Rational linear series on twice-marked curves
Our main theorems, though stated for smooth curves without marked points, will be proved by inductive arguments about curves with two marked points that are chained together and smoothed. This section develops preliminary material about Brill-Noether theory of curves with two marked points. This section contains no original mathematics, but develops some slightly nonstandard terminology about "rational linear series" that will prove to be very convenient for our purpose, and formulates some known results in these terms. In this section, and every section up to and including Section 6, we do not assume that $\operatorname{char} F = 0$ unless stated otherwise.

Figure 2. The Young diagram of the fixed-height skew shape $(0,2,2,2,3,7,7)/(0,0,1,2,3,4,4)$.
If $\alpha$ is a ramification sequence, the associated vanishing sequence is the strictly increasing sequence $a = (a_0, \cdots, a_r)$ where $a_i = \alpha_i + i$. We follow the convention that a vanishing sequence is denoted by the lowercase roman letter corresponding to the greek letter of the ramification sequence.
We write $\alpha \leq \beta$ to mean that $\alpha, \beta$ have the same rank $r$ and $\alpha_i \leq \beta_i$ for all $0 \leq i \leq r$. The notation $\beta/\alpha$ denotes an ordered pair of two ramification sequences, in which we assume $\alpha \leq \beta$. The ordered pair $\beta/\alpha$ is usefully visualized as a skew Young diagram, as illustrated in Figure 2, though this visualization is not logically necessary in our discussion. For this reason we will call such an ordered pair a fixed-height skew shape of rank $r$. We emphasize the phrase "fixed-height" here: $r$ is part of the data of $\beta/\alpha$, as opposed to the standard definition of a skew shape as an ordered pair of partitions; see Remark 6.6.
3.2. Ramification of linear series. Let $C$ be a smooth curve of genus $g$, and let $\mathcal{L} = (L, V)$ be a linear series of rank $r$ and degree $d$ on $C$. For any $p \in C$, the vanishing sequence of $\mathcal{L}$ at $p$ is the increasing sequence $a(\mathcal{L}, p) = (a_0(\mathcal{L},p), \cdots, a_r(\mathcal{L},p))$ of orders of vanishing of sections of $V$ at $p$. The ramification sequence of $\mathcal{L}$ at $p$ is $\alpha(\mathcal{L},p)$, where $\alpha_i(\mathcal{L},p) = a_i(\mathcal{L},p) - i$. If $|\alpha(\mathcal{L},p)| > 0$ we call $p$ a ramification point of $\mathcal{L}$. Otherwise, it is called unramified.
3.3. Brill-Noether theory of twice-marked curves. Let $(C, p, q)$ be a smooth twice-marked curve, fix integers $r, d \geq 0$, and let $\alpha, \beta$ be nonnegative ramification sequences of rank $r$. Define the scheme of linear series of rank $r$ and degree $d$ on $C$ with ramification at least $\alpha$ at $p$ and at least $\beta$ at $q$, and denote by $G^{r,\alpha,\beta}_d(C, p, q)$ the open subscheme where $\alpha(\mathcal{L}, p) = \alpha$ and $\alpha(\mathcal{L}, q) = \beta$. This construction globalizes to a morphism of stacks $G^{r,\alpha,\beta}_{g,d} \to \mathcal{M}_{g,2}$, representable by schemes. For a family $f: \mathcal{C} \to S$ with two disjoint sections $p_S, q_S$, this gives an $S$-scheme $G^{r,\alpha,\beta}_d(\mathcal{C}, p_S, q_S)$, which we call a relative Brill-Noether scheme for twice-marked curves. By imposing each ramification condition locally via the inverse image in $G^r_{g,d}$ of a Schubert variety, it follows that the local relative dimension of $G^{r,\alpha,\beta}_{g,d} \to \mathcal{M}_{g,2}$ is bounded below locally at all points by the adjusted Brill-Noether number $\rho(g,r,d,\alpha,\beta) = \rho(g,r,d) - |\alpha| - |\beta|$.

Definition 3.1. A linear series $\mathcal{L}$ on a twice-marked curve $(C,p,q)$, with ramification at least $\alpha$ at $p$ and $\beta$ at $q$, is called dimensionally proper in $G^{r,\alpha,\beta}_{g,d}$ if the local dimension of $G^{r,\alpha,\beta}_d(C,p,q)$ is $\max\{0,\rho\}$ and the local relative dimension of $G^{r,\alpha,\beta}_{g,d} \to \mathcal{M}_{g,2}$ is $\rho$.
Theorem 3.2 (Brill-Noether theorem for two marked points; see [EH86] or [Oss14]). Suppose $g \geq 0$, and let $(C,p,q)$ be a general twice-marked smooth curve of genus $g$. For nonnegative ramification sequences $\alpha, \beta$ of rank $r$, $G^{r,\alpha,\beta}_d(C,p,q)$ is nonempty if and only if the number $\widehat{\rho}$ is nonnegative.

Remark 3.3. The number $\widehat{\rho}$ has a geometric interpretation: it is the expected dimension not of $G^{r,\alpha,\beta}_d(C,p,q)$ itself, but of its image in $\operatorname{Pic}^d(C)$.
3.4. Extending to ramification sequences with negative entries. Several arguments in this paper are simplified if we relax the assumption that α, β are nonnegative in the discussion above, and allow "linear series" whose sections have poles of bounded order at the marked points.
Definition 3.4. A rational linear series of rank $r$ and degree $d$ on $(C,p,q)$ is a pair $\mathcal{L} = (L,V)$ of a degree $d$ line bundle on $C$ and an $(r+1)$-dimensional vector space $V \subseteq H^0(C \setminus \{p,q\}, L)$ of rational sections of $L$ that are regular except possibly at $p, q$. So $\mathcal{L}$ encodes an $r$-dimensional family of divisors that are effective away from $p$ and $q$, but may have negative multiplicity at $p$ or $q$.
If $\mathcal{L}$ is a rational linear series on $(C,p,q)$, define ramification sequences $\alpha(\mathcal{L},p)$ and $\alpha(\mathcal{L},q)$ in the same way as before, regarding the vanishing order of a section with a pole as negative.
If $\mathcal{L}$ is a rational linear series on $(C,p,q)$ of degree $d$, then for any $m,n \in \mathbb{Z}$ we may define a rational linear series $\mathcal{L} + mp + nq$ of degree $d+m+n$, given by twisting by $\mathcal{O}_C(mp+nq)$. For $m, n$ sufficiently large we obtain a regular linear series, with ramification $\alpha(\mathcal{L},p) + m$ at $p$ and $\alpha(\mathcal{L},q) + n$ at $q$. Via this identification, we may define $G^{r,\alpha,\beta}_{g,d}(C,p,q)$ for all choices of rank-$r$ ramification sequences $\alpha, \beta$, nonnegative or otherwise; these various spaces are linked by isomorphisms

(14)  $\text{``}+mp+nq\text{''}: G^{r,\alpha,\beta}_{g,d} \xrightarrow{\sim} G^{r,m+\alpha,n+\beta}_{g,d+m+n}$.

Note also that the numbers $\rho, \widehat{\rho}$ are unaffected by these twists. Therefore we may define "dimensionally proper points" in exactly the same way, and $\mathcal{L}$ is dimensionally proper in $G^{r,\alpha,\beta}_{g,d}$ if and only if $\mathcal{L} + mp + nq$ is dimensionally proper in $G^{r,m+\alpha,n+\beta}_{g,d+m+n}$. The following theorem is immediate.
Theorem 3.5. Theorem 3.2 is true exactly as written, but without assuming that the ramification sequences are nonnegative, for rational linear series.
3.5. The notation $G^{\beta/\alpha}_g$. As a reminder that we are working with rational linear series, and to simplify certain statements, we introduce the following alternate notation.
Definition 3.6. Let $\beta/\alpha$ be a fixed-height skew shape, where $\alpha, \beta$ have rank $r$. Define $G^{\beta/\alpha}_g = G^{r,-\alpha,\beta}_{g,r+g}$. The implicit assumption $\alpha \leq \beta$ is equivalent to saying $\widehat{\rho} = \rho$ in Theorem 3.2. The choice of degree $d = r+g$ is the "expected" degree of a line bundle whose complete linear series has rank $r$. While this choice appears restrictive, Equation (14) shows us that it is not. This choice of degree has the pleasant consequence that both $r$ and $d$ disappear from the formula for $\rho$, leaving an expression in $g, \alpha, \beta$ alone: $\rho(g, r, r+g, -\alpha, \beta) = \widehat{\rho}(g, r, r+g, -\alpha, \beta) = g - |\beta/\alpha|$.
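The identity $\rho = g - |\beta/\alpha|$ can be checked directly, assuming the standard adjusted Brill-Noether number $\rho(g,r,d,\alpha,\beta) = \rho(g,r,d) - |\alpha| - |\beta|$: with $d = r+g$ and ramification sequences $-\alpha$ and $\beta$, the plain $\rho$ equals $g$ and the corrections contribute $|\alpha| - |\beta| = -|\beta/\alpha|$. A sketch (my own) using the skew shape of Figure 2:

```python
def rho(g, r, d):
    return g - (r + 1) * (g - d + r)

def rho_adj(g, r, d, alpha, beta):
    # adjusted Brill-Noether number: rho minus total imposed ramification
    return rho(g, r, d) - sum(alpha) - sum(beta)

# The skew shape of Figure 2: beta/alpha with rank r = 6.
alpha = (0, 0, 1, 2, 3, 4, 4)
beta = (0, 2, 2, 2, 3, 7, 7)
r = len(alpha) - 1
size = sum(beta) - sum(alpha)  # |beta/alpha| = number of boxes

for g in range(1, 40):
    # degree d = r + g, ramification -alpha at p and beta at q
    neg_alpha = tuple(-x for x in alpha)
    assert rho_adj(g, r, r + g, neg_alpha, beta) == g - size
print(size)  # 9
```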
Threshold genera of fixed-height skew shapes
The existence of dimensionally proper points of G^{β/α}_g provides a sort of distance, or cost, associated to fixed-height skew shapes β/α. We continue to allow char F = p in this section.
Definition 4.1. Let β/α be a fixed-height skew shape. The threshold genus tg(β/α) is the minimum integer g 0 such that for all genera g ≥ g 0 , G β/α g has dimensionally proper points.
Proposition 4.2. Threshold genera of fixed-height skew shapes have the following properties.
Rectangular skew shapes recover the same threshold genera defined in the introduction. Proof. By Equation (14), the latter is isomorphic to G^{r,0^{r+1},0^{r+1}}_{g,d}.
In characteristic 0, any linear series has only finitely many ramification points, so G^{r,0^{r+1},0^{r+1}}_{g,d} simply parameterizes points of G^r_{g,d} together with two unconstrained marked points on the curve, and it follows that it has dimensionally proper points if and only if G^r_{g,d} does.
The subadditivity theorem
The engine driving this whole paper is the following subadditivity theorem, which justifies our method of bounding threshold genus by "lego-building," and shows that the threshold genus of skew shapes behaves like a distance function. In this theorem and section, we allow char F = p.
Theorem 5.1. For any fixed-height ramification sequences α ≤ β ≤ γ,

tg(γ/α) ≤ tg(β/α) + tg(γ/β).

The idea behind the proof of Theorem 5.1 is illustrated in Figure 1: from dimensionally proper points of G^{β/α}_{g_1} and G^{γ/β}_{g_2}, we obtain a nodal curve by gluing along marked points, and construct a limit linear series on this nodal curve, which can then be smoothed. The result which will provide the necessary smoothing is the following, from which we will deduce Theorem 5.1.

Theorem 5.2. Let g = g_1 + g_2. If G^{r,α,β}_{g_1,d} and G^{r,d−r−β,γ}_{g_2,d} have dimensionally proper points, then so does G^{r,α,γ}_{g,d}.
5.1. Regeneration of limit linear series. Theorem 5.2 is not novel; it is a straightforward application of the theory of limit linear series, as developed in [EH86], and especially the Smoothing Theorem 3.4, which is also called the "Regeneration Theorem" in the exposition [HM98, Theorem 5.41]. See also the development of Osserman [Oss06], which requires no assumptions on characteristic, and the more modern treatment discussed in [LO19] and the references therein. A nearly identical statement is [Pfl18, Lemma 3.8], which is proved based on the development in [Oss06], though that paper uses a weaker notion of "dimensionally proper" that does not demand that fiber dimension be minimal. We briefly sketch the ideas here, but the reader should consult these references for details and background. Let X be a curve of compact type, i.e. a nodal curve in which every node is disconnecting. Let C_1, · · · , C_n be the components of X. A limit linear series of degree d and rank r on X is an n-tuple L = (L_1, · · · , L_n), where L_i is a rank r, degree d linear series on C_i, subject to the compatibility condition that if p ∈ X is a node lying on components C_i and C_j, then α(L_i, p)_k ≥ d − r − α(L_j, p)_{r−k} for all 0 ≤ k ≤ r. Call L refined if equality holds at every node. For a smooth point q ∈ X, define the ramification sequence of L at q to be α(L, q) = α(L_i, q), where C_i is the component on which q lies. The set of limit linear series forms a subvariety G^r_d(X) ⊆ ∏_{i=1}^{n} G^r_d(C_i), in which the refined series form an open locus, and one may define a subvariety G^{r,α,β}_d(X, p, q) imposing ramification at two smooth points. The same bounds ρ(g, r, d) and ρ(g, r, d, α, β) are valid lower bounds on dimension for limit linear series as well.
The construction of limit linear series may be relativized to deformations of a curve of compact type, and the Brill–Noether number ρ (with or without imposed ramification) remains a lower bound on relative dimension, but one must be careful: current techniques only construct these relative spaces of limit linear series for certain deformations called smoothing families. One may need to pass to an étale neighborhood to obtain a smoothing family. This difficulty meant that for many years, applications of limit linear series required some boilerplate language about passing from the family you really care about to a smoothing family. This did not prevent the theory from having useful applications, but it had the frustrating consequence that there was no global moduli space of limit linear series, and also stymied applications to non-algebraically closed fields. These difficulties were surmounted by Lieblich and Osserman in [LO19] using their formalism of descent of moduli spaces; we now have a global moduli space G^r_{g,d} → M^{ct}_g over the moduli of curves of compact type, with the catch that this morphism is not known to be representable by schemes, but only by algebraic spaces. This global moduli space obeys the bound ρ on local relative dimension. Imposing ramification conditions, we obtain a moduli space G^{r,α,β}_{g,d} → M^{ct}_{g,2} obeying the local relative dimension bound ρ(g, r, d, α, β). These tools quickly prove Theorem 5.2.
Proof of Theorem 5.2. Let L_1, L_2 be linear series on twice-marked curves (C_1, p_1, q_1), (C_2, p_2, q_2) that are dimensionally proper in G^{r,α,β}_{g_1,d} and G^{r,d−r−β,γ}_{g_2,d}, respectively. Let g = g_1 + g_2, and let (X, p_1, q_2) be the twice-marked nodal curve of genus g obtained by gluing q_1 to p_2; then L = (L_1, L_2) is a refined limit linear series on (X, p_1, q_2), with ramification exactly α, γ at p_1, q_2. It is isolated in G^{r,α,γ}_d(X, p_1, q_2), since L_1, L_2 are isolated in G^{r,α,β}_d(C_1, p_1, q_1) and G^{r,d−r−β,γ}_d(C_2, p_2, q_2). Now, denote by ∂G^{r,α,γ}_{g,d} the part of G^{r,α,γ}_{g,d} lying over the boundary of M^{ct}_{g,2}. Then we have a map G^{r,α,β}_{g_1,d} × G^{r,d−r−β,γ}_{g_2,d} ↪ ∂G^{r,α,γ}_{g,d}, whose image provides a neighborhood of L. Since L_1, L_2 are dimensionally proper, the local relative dimension (relative to the boundary of M^{ct}_{g,2}) at L is ρ(g_1, r, d, α, β) + ρ(g_2, r, d, d−r−β, γ) = ρ(g, r, d, α, γ). Since the local relative dimension of L in ∂G^{r,α,γ}_{g,d} cannot be smaller than the local relative dimension in G^{r,α,γ}_{g,d}, it follows that the same is true in G^{r,α,γ}_{g,d}. But this means that G^{r,α,γ}_{g,d} cannot be supported over the boundary; in a neighborhood of L there must be linear series over smooth curves. By semicontinuity, all such series in a small enough neighborhood have ramification exactly α, γ at the marked points, are isolated in their fibers, and have local relative dimension ρ, i.e. they are dimensionally proper points of G^{r,α,γ}_{g,d}.
5.2. From regeneration to subadditivity. We now deduce the subadditivity theorem.
Displacement difficulty of fixed-height skew shapes
A crucial source of "cheap lego bricks" in the proof of Theorem A is fixed-height skew shapes β/α with threshold genus 1. We will characterize these via a curious combinatorial construction called displacement. Such skew shapes can be assembled like lego bricks, yielding an upper bound on threshold genera that is completely combinatorial, and computable in principle. In this section, we continue to allow char F = p.
The key ideas in this section were inspired by [EH87]; this work began by observing that the method in that paper could be applied not just to the ramification of the canonical series, but to linear series in general.
6.1. Displacement of ramification sequences. We begin with some terminology.
Definition 6.1. An arithmetic progression, for purposes of this paper, is a proper subset Λ ⊊ Z that is either empty or of the form n + mZ for m, n ∈ Z with m = 0 or m ≥ 2. In the latter case, m is called the modulus of Λ. In particular, we allow the empty set, we allow m = 0 and view single-element sets as arithmetic progressions, but we do not allow m = 1.
A useful way to regard the operations disp^+_Λ, disp^−_Λ is: whenever n ∈ Λ and {n−1, n} ∩ S has exactly one element, that element is "loose;" disp^+_Λ slides loose elements up (attracting to Λ), while disp^−_Λ slides loose elements down (repelling away from Λ). The following properties are immediate. Lemma 6.3. For any arithmetic progression Λ, the operations disp^+_Λ and disp^−_Λ […]. A ramification sequence is uniquely determined by the set of elements in its associated vanishing sequence. Using this correspondence, we extend disp^+_Λ, disp^−_Λ from sets to ramification sequences. Definition 6.4. Let α be a ramification sequence. Call an entry α_i increasable (resp. decreasable) if α is still a nondecreasing sequence after α_i is increased (resp. decreased) by 1.
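The sliding description can be made concrete. Below is a minimal Python sketch of the two displacement operations on finite sets; the function names and the restriction to finite sets and finite windows of Λ are my own conventions, and the reading taken is that disp^+_Λ moves a loose element n−1 up into Λ, while disp^−_Λ moves a loose element n ∈ Λ down out of it:

```python
def disp_up(S, Lam):
    """disp^+_Lam: slide each loose element n-1 up into Lam."""
    out = set(S)
    for n in Lam:
        if (n - 1) in S and n not in S:   # n-1 is loose: slide it up to n in Lam
            out.discard(n - 1)
            out.add(n)
    return out

def disp_down(S, Lam):
    """disp^-_Lam: slide each loose element n in Lam down to n-1."""
    out = set(S)
    for n in Lam:
        if n in S and (n - 1) not in S:   # n is loose: slide it down out of Lam
            out.discard(n)
            out.add(n - 1)
    return out

# Example with Lam a finite window of the progression 4 + 6Z (modulus 6).
Lam = {4 + 6 * t for t in range(-2, 3)}
S = {0, 1, 2, 3, 5, 6, 8, 9, 11, 12}
print(sorted(disp_up(S, Lam)))    # 3 slides up to 4, and 9 slides up to 10
```

Because the modulus is at least 2, distinct elements of Λ trigger disjoint slides, so the order of the loop does not matter.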
Remark 6.6. This construction is closely related to, but subtly different from, the displacement operations on partitions defined in [Pfl17b] and the unpublished preprint [Pfl13]. A partition can be represented as a nonincreasing sequence λ = (λ_0, λ_1, · · · ) of integers, almost all 0, and disp^+_Λ(λ), disp^−_Λ(λ) can be defined via displacement of the infinite set {λ_n − n − 1 : n ≥ 0}. A nonnegative ramification sequence α = (α_0, · · · , α_r) of course determines a partition λ = (α_r, α_{r−1}, · · · , α_0, 0, 0, · · · ), and displacements of this partition almost correspond to displacements of the ramification sequence, with two crucial differences: the first 0 entry in a partition must be considered increasable, and the last entry of a ramification sequence must be considered decreasable, even when it is 0. This causes subtle but important differences when analyzing the two situations. This distinction is the reason this paper emphasizes the phrase "fixed-height" when discussing skew shapes.
Proof. If these two conditions hold, then |β/α| = 2, and it follows from the definitions that disp^−_Λ(β) = α and disp^+_Λ(α) = β, so β/α is a 2-link. Conversely, if β/α is a 2-link then condition (1) follows from the assumption |β/α| = 2 and the fact that upward displacement cannot increase any elements by more than 1. So there exists some arithmetic progression Λ′ linking α to β. This Λ′ must contain Λ, and Λ′ can only meet the loose set in the two specified values, so the same is true of Λ.
Lemma 6.13. Let α < β be ramification sequences, and Λ an arithmetic progression. The following are equivalent.
We prove this theorem over the course of this subsection. The core of the argument is the following lemma. This lemma is essentially a rephrasing of [EH87, Proposition 5.2], but we include a proof because there is a subtle error in the statement of that Proposition that was immaterial in that paper but would cause confusion in our present context.
Proof. We will construct V(γ) explicitly. Let c = (c_0, · · · , c_r) be the vanishing orders corresponding to the ramification sequence γ. The vanishing sequence corresponding to the ramification sequence −γ is (r − c_r, r − c_{r−1}, · · · , r − c_0). The basic observation is that for any rational linear series (L, V) with ramification at least −γ at p and γ at q, and any 0 ≤ i ≤ j ≤ r, the subspace of V consisting of sections vanishing to order at least r − c_j at p and order at least c_i at q has dimension at least (r + 1) − (r − j) − i = j − i + 1. Furthermore, this subspace must also be a subspace of the image of H^0(E, L(−(r − c_j)p − c_i q)) ↪ H^0(E∖{p, q}, L), which has dimension c_j − c_i + 1 by Riemann–Roch. So whenever j − i = c_j − c_i, or equivalently γ_i = γ_j, this subseries is uniquely determined.
Partition {0, · · · , r} into disjoint subsets such that i, j are grouped together if γ_i = γ_j. Call these subsets blocks. If {i, i+1, · · · , j} is a block, then c_i, · · · , c_j are consecutive, so c_j − c_i = j − i, and neither c_i − 1 nor c_j + 1 occurs in the vanishing sequence c. For such a block, define V_{i:j} to be the image of the natural inclusion H^0(E, L(−(r − c_j)p − c_i q)) ↪ H^0(E∖{p, q}, L). As observed in the previous paragraph, dim V_{i:j} is equal to the size of the block, and V_{i:j} ⊆ V for any (L, V) with the desired ramification.
It follows from Riemann–Roch that the vanishing orders of V_{i:j} at q are c_i, · · · , c_{j−1}, c′_j, where c′_j = c_j + 1 if c_j + 1 ∈ Λ and c′_j = c_j otherwise. Since c_i, · · · , c_j are consecutive and c_j + 1 is not in c, these numbers are precisely disp^+_Λ(c)_i, · · · , disp^+_Λ(c)_j. Similarly, the fact that c_i − 1 is not in the vanishing sequence implies that the vanishing orders of V_{i:j} at p are r − disp^−_Λ(c)_j, · · · , r − disp^−_Λ(c)_i. Define V(γ) to be the sum, taken over all blocks {i, i+1, · · · , j} ⊆ {0, · · · , r}, of V_{i:j}. The vanishing orders at p of sections in these various subspaces are all disjoint, so this sum is a direct sum, dim V(γ) = r + 1, and the vanishing sequences of V(γ) at p and q are precisely r − disp^−_Λ(c) and disp^+_Λ(c), respectively. In other words, its ramification sequences at p, q are −disp^−_Λ(γ), disp^+_Λ(γ), and (L, V(γ)) is a rational linear series of the desired form. Since each summand V_{i:j} must be a subspace of V for any rational linear series (L, V) with the desired ramification, V(γ) is the only possible such subspace.
Proposition 6.16. Let (E, p, q) be a smooth twice-marked genus 1 curve. Let m be the order of p − q in the Jacobian, where m = 0 if p − q is nontorsion. For any fixed-height skew shape β/α: (1) If α = β, then G^{β/α}(E, p, q) has dimension 1. (2) If α < β, then G^{β/α}(E, p, q) has a single point if α, β are linked by an arithmetic progression of modulus m, and it is empty otherwise.
Proof. Consider the fibers of the forgetful map G^{β/α}(E, p, q) → Pic^{r+1}(E), where r is the rank of α and β. For any [L] ∈ Pic^{r+1}(E), any point L = (L, V) in the fiber is a rational linear series with ramification exactly −α at p and β at q; since β ≥ α it follows from Lemma 6.15 that L must be L(α). So every fiber is either empty or a single point, and it is nonempty if and only if disp^−_Λ(α) = α and disp^+_Λ(α) = β, where Λ = {n ∈ Z : L ≅ O_E((r + 1 − n)p + nq)}. This last condition is equivalent to α, β being linked by Λ, by Lemma 6.13. As L varies, every possible arithmetic progression Λ with modulus m occurs exactly once, and for all other choices of L, the progression Λ is empty. If α = β, this means that the fiber is nonempty for all choices of L except a finite set corresponding to the decreasable and increasable elements of α; part (1) follows. Now assume α < β. If α, β are not linked by any arithmetic progression with modulus m, then all fibers are empty. On the other hand, if they are linked by such a progression, then they are linked by a unique such progression, namely α_i + i + 1 + mZ, where i is any index for which α_i < β_i, so there is a single nonempty fiber and G^{β/α}(E, p, q) is a single point.
Proof. Proposition 6.16 shows that G^{β/α}_1 is empty when β/α is not a link, so we may assume that β/α is an n-link for some n. If n ∈ {0, 1}, the result follows from Corollary 3.7.
Consider the case n = 2. Choose an integer m such that α is linked to β by an arithmetic progression with modulus m, and let (E, p, q) have torsion order m. Consider a family (E, p, q) of twice-marked genus 1 curves over a base scheme S that includes (E, p, q), but for which a general member has torsion order 0. For example, one may take S = E∖{p} and consider a family in which p is fixed and q moves to all other points on E. There is a finite set of integers m′ for which α, β are linked by a progression modulo m′, and for each one the locus in S where this torsion order occurs has codimension 1. So Proposition 6.16 implies that the image of G^{β/α}(E, p, q) → S has codimension 1, over which all fibers are 0-dimensional. It follows that G^{β/α}(E, p, q) has a component of dimension dim S − 1 = dim S + 1 − |β/α|, all points of which are isolated in their fibers; hence this component is dimensionally proper. So G^{β/α}_1 has dimensionally proper points, by the discussion in Section 1.1. Finally, suppose n ≥ 3. For any family of twice-marked genus 1 curves (E, p, q) over a base scheme S and any point of G^{β/α}(E, p, q) over x ∈ S, there is a codimension-1 locus in S where the same torsion order occurs, and therefore the local dimension of G^{β/α}(E, p, q) is at least dim S − 1 at this point. Since dim S − 1 > dim S + 1 − |β/α|, this point cannot be dimensionally proper. So G^{β/α}_1 cannot have dimensionally proper points in this case.
The chain threshold may be regarded as a version of threshold genus in which we consider limit linear series on chains of elliptic curves, rather than linear series on smooth curves. An immediate consequence of subadditivity is the following. Corollary 6.19. For any fixed-height skew shape β/α, tg(β/α) ≤ ct(β/α) and gδ(β/α) ≤ cδ(β/α).
Proof. This follows from Proposition 4.2, Theorem 5.1, and definitions.
6.4. Some useful 2-links. The number cδ can be computed algorithmically. The main impetus for this project is the following observation which, while vague, bears emphasis.
This subsection provides constructions of 2-links, and therefore difficulty-0 skew shapes, useful in the proof of Theorem A.
Remark 6.22. Unfortunately, while it is very common that displacement difficulty is 0, there are enough insidious exceptions that it is often maddeningly delicate to make general constructions. While working on this project, I tried several dozen constructions, many of which were found by computer search, before arriving at the choices below; the reader may be forgiven for not considering the specific choices below natural or obvious (but I hope they seem somewhat natural with the benefit of hindsight). The key observation was that it is useful to find ramification sequences α with the "periodicity" property cδ((n + α)/α) = 0 for some n ≥ 1; such sequences were then found by computer searches, and are stated in Corollary 6.27 below. I mention this only because, when reading others' papers, I often find myself perseverating on how they could have naturally arrived at certain clever constructions, and try to guess at the intuition concealed behind them. In this case, very little intuition was present in the author's mind; only patience and many discarded alternatives.
Definition 6.23. For integers n ≥ a ≥ b ≥ c, let τ^n_{a,b,c} denote the rank n − 1 ramification sequence τ^n_{a,b,c} = (0^{n−a} 1^{a−b} 2^{b−c} 3^c). Visually, this ramification sequence has Young diagram consisting of three columns of heights a, b, c. The loose set is contained in

(15)  {0, n−a, n−a+1, n−b+1, n−b+2, n−c+2, n−c+3, n+3}.
The loose set need not include all of these values; it does so only when a, b, c, n are all distinct.
(1) Suppose c < b and a < n. If neither n − a nor c + 1 is divisible by a − c + 2, then τ^n_{a+1,b,c+1}/τ^n_{a,b,c} is a 2-link.
(2) Suppose b < a < n. If none of n − a, b − c + 1, b − c + 2, b + 2 are divisible by a − b + 1, then τ^n_{a+1,b+1,c}/τ^n_{a,b,c} is a 2-link.

Proof. We apply Lemma 6.11. In the first part, the arithmetic progression Λ is generated by n − a and n − c + 2 and has common difference a − c + 2, and it suffices to check that n − a and n − c + 2 are the only two elements of Λ in list (15). The three values n − a + 1, n − b + 1, n − b + 2 cannot be in Λ since they lie strictly between the adjacent elements n − a and n − c + 2, and n − c + 3 cannot be in Λ since the common difference is at least 2. So it suffices that neither 0 nor n + 3 is in Λ, which amounts to the stated divisibility conditions. For the second part, the arithmetic progression is generated by n − a and n − b + 1; its common difference is a − b + 1. Since this common difference is at least 2, neither n − a + 1 nor n − b + 2 can be present, and it suffices to check that the other four elements in list (15) are not in Λ, which amounts to the stated divisibility conditions.
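As a sanity check, the first part can be verified computationally on small examples. The sketch below is my own code (not from the paper): it builds the vanishing set of τ^n_{a,b,c}, applies an inline copy of upward displacement along a finite window of Λ = (n−a) + (a−c+2)Z, and checks that the result is τ^n_{a+1,b,c+1}:

```python
def tau_vanishing(n, a, b, c):
    """Vanishing set {alpha_i + i} of the ramification sequence 0^{n-a} 1^{a-b} 2^{b-c} 3^c."""
    alpha = [0] * (n - a) + [1] * (a - b) + [2] * (b - c) + [3] * c
    return {alpha[i] + i for i in range(n)}

def disp_up(S, Lam):
    """disp^+_Lam on a finite set S: slide loose elements up into Lam."""
    out = set(S)
    for m in Lam:
        if (m - 1) in S and m not in S:
            out.discard(m - 1)
            out.add(m)
    return out

# n=10, a=6, b=4, c=2: the common difference a-c+2 = 6 divides neither n-a = 4 nor c+1 = 3.
n, a, b, c = 10, 6, 4, 2
Lam = {(n - a) + (a - c + 2) * t for t in range(-3, 4)}   # finite window of (n-a) + 6Z
assert disp_up(tau_vanishing(n, a, b, c), Lam) == tau_vanishing(n, a + 1, b, c + 1)
print("tau^10_{7,4,3} / tau^10_{6,4,2} is realized by a single upward displacement")
```

Here only the two Λ-elements 4 = n−a and 10 = n−c+2 trigger slides, matching the proof.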
The following two corollaries are illustrated in Figure 4.
See Figure 5 for an illustration.
Weierstrass points and twists of the canonical series
This section explains how to use a theorem of Komeda on dimensionally proper Weierstrass points to compute certain threshold genera. We return in this section to assuming char F = 0.
A numerical semigroup is a cofinite subset S ⊆ Z_{≥0} containing 0 and closed under addition. The elements of Z_{≥0}∖S are called the gaps of S, and the number of gaps is called the genus of S. Denoting the gaps by t_1 < t_2 < · · · < t_g, the weight of S is wt(S) = Σ_{n=1}^{g} (t_n − n). A numerical semigroup is called primitive if, denoting by s_1 the first positive element of S, 2s_1 > t_g. Primitivity has the useful consequence that decreasing the gaps preserves closure under addition. That is, if 0 < t′_1 < · · · < t′_g is any other increasing sequence of positive integers with t′_i ≤ t_i for all 1 ≤ i ≤ g, then S′ = Z_{≥0}∖{t′_1, · · · , t′_g} is also a primitive numerical semigroup. A smooth once-marked curve (C, q) of genus g determines a numerical semigroup S(C, q), called the Weierstrass semigroup, which is the set of all pole orders at q of regular functions on C∖{q}. Equivalently, the elements of S(C, q) are s_0 < s_1 < · · · , where s_n = min{s ∈ Z : h^0(C, O(sq)) ≥ n + 1}. Since s_0 = 0 and s_n = n + g for n ≫ 0, there are exactly g gaps t_1, · · · , t_g.
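These invariants are easy to compute. The following sketch is my own helper code (not from the paper); it represents a candidate semigroup by its finite gap set, checks closure under addition, and returns the genus, weight, and primitivity:

```python
def semigroup_invariants(gaps):
    """Given the gap set of a candidate numerical semigroup S = Z>=0 minus gaps,
    verify closure under addition and return (genus, weight, primitive?)."""
    gaps = sorted(gaps)
    top = gaps[-1] if gaps else 0
    # Elements of S up to 2*top+1; any closure violation x+y in gaps occurs in this range.
    S = [s for s in range(2 * top + 2) if s not in gaps]
    assert all(x + y not in gaps for x in S for y in S), "not closed under addition"
    genus = len(gaps)
    weight = sum(t - (i + 1) for i, t in enumerate(gaps))
    s1 = next(s for s in S if s > 0)
    primitive = 2 * s1 > top
    return genus, weight, primitive

# S = Z>=0 minus {1, 2, 3, 5} = {0, 4, 6, 7, 8, ...}: genus 4, weight 1, primitive
print(semigroup_invariants({1, 2, 3, 5}))   # (4, 1, True)
```

For comparison, the semigroup generated by 3 and 5 has gaps {1, 2, 4, 7}, hence genus 4 and weight 4, but it is not primitive since 2·3 < 7.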
Proof. Riemann-Roch shows that m ≥ 0 is a vanishing order of |ω C (nq)| if and only if m + 1 − n is not in the Weierstrass semigroup.
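This criterion translates directly into a computation: the vanishing sequence of |ω_C(nq)| at q can be read off from the gap set. A small illustrative sketch (my own code, assuming the gap set is known):

```python
def vanishing_orders(gaps, n, count):
    """First `count` vanishing orders m >= 0 of |omega_C(nq)| at q:
    m is a vanishing order iff m + 1 - n is NOT in the semigroup S = Z>=0 minus gaps."""
    def in_S(x):
        return x >= 0 and x not in gaps
    return [m for m in range(count + max(gaps) + n + 2) if not in_S(m + 1 - n)][:count]

# For the canonical series (n = 0), the vanishing orders are exactly gap - 1 for each gap.
gaps = [1, 2, 3, 5]                   # the genus 4 semigroup Z>=0 minus {1, 2, 3, 5}
print(vanishing_orders(gaps, 0, 4))   # [0, 1, 2, 4]
```

As a consistency check, for n ≥ 1 this produces g + n − 1 vanishing orders, matching h^0(ω_C(nq)) = g + n − 1.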
Every numerical semigroup S defines a locally closed locus (possibly empty) M^S_{g,1} ⊆ M_{g,1}, consisting of all marked curves (C, q) such that S(C, q) = S. The local codimension of M^S_{g,1} at any point is at most wt(S), and a point is called dimensionally proper if equality holds.
Theorem 7.2 ( [Kom91]). If S is a primitive numerical semigroup of genus g, and wt(S) ≤ g − 1, then M S g,1 has dimensionally proper points.
Remark 7.3. Theorem 7.2 was originally proved with the bound wt(S) ≤ g − 2 by Eisenbud and Harris [EH87], using limit linear series methods that inspired the techniques of this paper. There is a generalization allowing non-primitive semigroups in [Pfl18], which works in characteristic p. Eisenbud and Harris made an argument by induction on g, but the inductive step failed for some specific semigroups of weight g − 1. It was these that were analyzed by Komeda, and in fact it is precisely the weight g − 1 semigroups analyzed by Komeda that we use in our proof of Theorem A.
The theorem now follows from the claim: for all integers h satisfying g ≤ h ≤ |β/0^{r+1}| = wt(S) + g, G^{β/0^{r+1}}_h has dimensionally proper points. The case h = g follows from Proposition 7.4 and Komeda's Theorem 7.2. To deduce the result for h > g, we induct on the difference h − g, and use our analysis of 1-links from the previous section.
Suppose that g < h ≤ wt(S) + g, and that the claim holds for smaller values of h − g. Let t be the largest gap of S such that t − 1 ∈ S. Since wt(S) > 0, t > 1. Let S′ = S ∪ {t}∖{t − 1}. Since S is a primitive semigroup, it follows that S′ is also a primitive semigroup. The weight of S′ is wt(S′) = wt(S) − 1. Let β′ be the rank-r ramification sequence associated to S′. So by the inductive hypothesis, G^{β′/0^{r+1}}_{h−1} has dimensionally proper points. Lemma 6.9 shows that β/β′ is a 1-link. Proposition 6.17 implies that G^{β/β′}_1 has dimensionally proper points, and subadditivity implies that G^{β/0^{r+1}}_h does as well. This completes the induction.
Proof of Theorem A
We will deduce Theorem A from the following bound, claimed in Equation (2).
The proof of Theorem 8.1 occupies the rest of this section. First, note that if a = 2 or a = 3, the result follows from previously known results in Section 2, namely Equations (7) and (10); by symmetry of tg(a × b) the result also follows if b = 2 or b = 3. So it suffices to consider a, b ≥ 4. For the rest of the section, suppose we have fixed a, b ≥ 4. Recall by Corollary 4.4 that tg(a × b) = tg(b^a/0^a); our strategy is to decompose the fixed-height skew shape b^a/0^a into four skew shapes, each of which has geometric difficulty at most 1.
Define the following notation: let k = ⌊(a−1)/2⌋, so that either a = 2k + 1 if a is odd, or a = 2k + 2 if a is even. Choose any integer ℓ satisfying 1 ≤ ℓ ≤ b − 3. In this notation, we break the skew shape b^a/0^a into the four skew shapes given by the successive differences of the following five ramification sequences: 0^a, τ^a_{2k+1,k,k}, ℓ + τ^a_{⌈a/2⌉,⌊a/2⌋,0}, b − τ^a_{2k+1,k,k}, b^a. This sequence is illustrated in Figure 6. We consider each of the four skew shapes in turn.
Remark 9.2. Although we are primarily interested in the asymptotics of this bound, the precise bound is easy to compute; it is just messy to write down by hand. The explicit formula also makes it possible to compare directly to the bound tg(a × b) ≤ ab/2 + 2 obtained in Theorem 8.1. For example, consider a = b = 61. In this case, ab/2 + 2 = 1862.5, so Theorem 8.1 gives tg(61 × 61) ≤ 1862. However, the bound in Lemma 9.1 is 1876. Therefore, for curves with genus as high as 1875, there are choices of g, r, d for which Theorem A guarantees existence of dimensionally proper points of G^r_{g,d} but the explicit bound underlying Theorem B does not.
In the theorem below, recall the notation ε_a = (−a) mod 4.

[…] worked with chains). They may also be used for an analogous dimension calculation on a chain of genus 1 tropical curves, as in [Pfl17b]. The resulting dimension depends on the orders of torsion between the attachment points in the chain, as well as the marked points at the ends of the chain. The proof of Theorem A in the present paper makes use of chains with varying torsion order, chosen carefully to make regeneration possible. In contrast, many recent papers on Hurwitz–Brill–Noether theory [Pfl17a, JR21, CPJ22a, CPJ22b, Lar21a, LLV20] use chains in which all orders of torsion are k. This is because these chains arise as specializations of k-gonal curves, with two marked points of total ramification. For example, in [Pfl17a], the expected dimension ρ_k(g, r, d) was derived in terms of displacement as follows. Call the operation disp^+_Λ, where Λ is an arithmetic progression of common difference k, a displacement modulo k. The expected codimension u = g − ρ_k(g, r, d) is equal to the minimum number of displacements modulo k needed to obtain, starting from the empty partition, a partition containing the (r + 1) × (g − d + r) rectangle. This minimum may be found by finding the minimum number of symbols in a type of tableau called a k-uniform displacement tableau in [Pfl17a] or a k-regular tableau in [LLV20] and elsewhere.
A.3. The challenge of regeneration. A crucial difficulty in Hurwitz–Brill–Noether theory is the issue of regeneration of linear series from a chain of k-torsion genus 1 curves to a smooth genus g curve. Such regeneration was not attempted in [Pfl17a], which is why only an upper bound on dimension could be proved there. The smoothing techniques used in this paper (Section 5.1) are not applicable, precisely because the linear series to be constructed are not dimensionally proper. The basic issue is that the naive dimension estimate used in the regeneration theorem of limit linear series does not account for additional structure imposed by the existence of a line bundle in G^1_k(C). This difficulty has now been solved in three distinct ways: using logarithmic deformation theory of tropical scrolls [JR21], using deformation theory of splitting loci [Lar21a], and using the notion of e-nested linear series [LLV20]. Variants of these techniques may be useful in addressing Question A.4, formulated at the end of this appendix.
The reason why the usual regeneration theorem for limit linear series is not useful for Hurwitz-Brill-Noether theory is neatly encapsulated by a concept from combinatorics. In the regeneration theorem, one can measure ramification of a linear series with a partition or ramification sequence λ, and the size |λ| serves as a codimension estimate. In a certain sense, each box of the Young diagram of λ contributes one equation. But in Hurwitz-Brill-Noether theory, the partitions that arise are special partitions called k-core partitions, many of the equations become redundant, and a lower codimension estimate should be used.
A.4. The role of k-core partitions. In fact, the partitions that can be obtained by a sequence of displacements modulo k are well-studied and have many deep combinatorial properties. They are called k-core partitions; the combinatorics of such partitions was later used to obtain much more precise results via tropical methods in [CPJ22a, CPJ22b], and plays a crucial role in the regeneration techniques developed in [LLV20]. For example, the multiple components of G^r_d(C), for C a general k-gonal curve, are classified by the minimal k-core partitions that contain a given rectangle.
A k-core partition may be defined as a partition with no hooks of length k, or equivalently no hook lengths divisible by k. A crucial fact about k-core partitions, as compared with arbitrary partitions, is that if a k-core partition λ is obtained from the empty partition by a minimal sequence of displacements modulo k, then the length of this minimal sequence does not depend on the sequence chosen. We will call this number the length of the k-core, and denote it |λ|_k. It is the proper replacement for the more naive |λ| that occurs in the codimension estimate from limit linear series.
The length |λ|_k has another convenient description: it is the number of boxes in the Young diagram of the partition whose hook length is less than k [LM05, Lemma 31]. The relationship between length and displacement is neatly encapsulated by the equation (which follows, in different notation, from [LM05, Proposition 22]) […] where Λ is any arithmetic progression with common difference k, and λ is a k-core.
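Both characterizations are straightforward to check by machine. The sketch below is my own code (not from the paper): it computes all hook lengths of a partition, tests the k-core condition, and counts the boxes of hook length less than k to obtain |λ|_k:

```python
def hook_lengths(lam):
    """Hook lengths of all boxes of the partition lam (a weakly decreasing tuple)."""
    conj = [sum(1 for p in lam if p > i) for i in range(max(lam))] if lam else []
    return [lam[r] - c + conj[c] - r - 1        # arm + leg + 1
            for r in range(len(lam)) for c in range(lam[r])]

def is_k_core(lam, k):
    """True if no hook length of lam is divisible by k."""
    return all(h % k != 0 for h in hook_lengths(lam))

def k_core_length(lam, k):
    """|lam|_k: number of boxes with hook length < k (meaningful when lam is a k-core)."""
    return sum(1 for h in hook_lengths(lam) if h < k)

lam = (4, 2)                      # hook lengths: 5, 4, 2, 1 in row 0 and 2, 1 in row 1
print(is_k_core(lam, 3))          # True: no hook length is divisible by 3
print(k_core_length(lam, 3))      # 4 boxes have hook length less than 3
```

By contrast, (2, 1) has a hook of length 3, so it is not a 3-core.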
Remark A.2. The reason k-cores are the partitions that arise naturally in Hurwitz–Brill–Noether theory, and why the length of a k-core is more natural than the size of the partition when making codimension estimates, is most transparent in the following special situation. Let f : C → P^1 be a degree k, genus g cover, and suppose that f has a point p ∈ C of total ramification. Assume that f(p) = ∞, so that f is a rational function with pole divisor kp. The assumption that the gonality pencil has one or two points of total ramification is quite convenient, and is used extensively in [LLV20]. Fortunately, covers with two points of total ramification appear to be "general enough" to satisfy the main theorems of Hurwitz–Brill–Noether theory. We claim that for any line bundle L, the ramification sequence α = (α_0, · · · , α_r) at p of the complete linear series L = (L, H^0(C, L)) is a k-core partition. Here we abuse notation slightly and regard a "partition" as a multiset of nonnegative integers, and regard two partitions as "the same" if one is obtained by adding some number of 0s to the other.
Here is a sketch of the proof; the reader is encouraged to draw some pictures to convince themselves (see for example the pictures in [LLV20, §5.1]; their "vertical and horizontal line segments" correspond to integers that are, or are not, vanishing orders). The boxes of the Young diagram of α are in bijection with pairs (a, b) of nonnegative integers, where a is in the vanishing sequence, b is not in the vanishing sequence, and a > b. The hook length of this box is the difference a − b.
Now, the fact that α is a k-core follows from the observation that, if a ≥ k is a vanishing order, then a − k is also a vanishing order; this is because we can multiply sections by f. So in every pair (a, b) discussed above, a − b is not divisible by k. This also hints at why the k-core length is a good measure of complexity in the Hurwitz–Brill–Noether setting: it ignores the "redundant" pairs (a, b) whose existence is implied by multiplication with powers of f, which in turn would correspond to redundant equations in the codimension estimate.
A.5. Refined Hurwitz–Brill–Noether theory: splitting loci. The main question of this paper, Question 1.1, has a natural analog in Hurwitz–Brill–Noether theory, which to my knowledge has not been studied in detail. To state it, we use the terminology of splitting type loci, which we briefly summarize now. Splitting type loci provide the right vocabulary for the "refined" form of Hurwitz–Brill–Noether theory that cleanly accounts for the reducibility and non-equidimensionality of G^r_d(C) for C a general k-gonal curve. For a fixed degree-k branched cover f : C → P^1 from a genus g smooth curve, and a nondecreasing sequence e = (e_1, · · · , e_k) of integers called the splitting type, let W^e(f) denote the locus of line bundles L such that f_*L ≅ O(e_1) ⊕ · · · ⊕ O(e_k). Since h^0(C, L) = Σ_i max{0, e_i + 1}, the stratification into splitting types refines the stratification into Brill–Noether loci. The expected codimension of W^e(f), from Larson's theory of splitting type loci [Lar21b], is u(e) = h^1(P^1, End(f_*L)) = Σ_{i,j} max{0, e_i − e_j − 1}.
Informally, the more generic splitting types are those that are more "balanced." In fact, this expected codimension can be described in terms of displacement. To every splitting type e, we associate a k-core partition Γ(e) [LLV20, Definition 4.7], and we have [LLV20, Proposition 5.6]: (20) u(e) = |Γ(e)|_k.
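As a quick sanity check on this formula, one can compute u(e) directly and confirm that more balanced splitting types have smaller expected codimension (illustrative code only, with the splitting type given as a tuple of integers):

```python
def u(e):
    """Expected codimension of the splitting type locus W^e(f):
    the sum over all ordered pairs (i, j) of max(0, e_i - e_j - 1)."""
    return sum(max(0, ei - ej - 1) for ei in e for ej in e)
```

Balanced types such as (0, 0, 0) or (0, 1) give u = 0, while spreading the same degree out, e.g. (−1, 1) versus (−2, 2), increases u from 1 to 3.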
Remark A.3. Remark A.2 suggests a succinct description of Γ(e) in the case that f : C → P^1 has a point p of total ramification: for n ≫ 0, it is the partition obtained from the ramification sequence at p of the complete linear series of L(np). Here n must be large enough that h^1(C, L(np)) = 0, because that means that incrementing n simply adds one more 0 to the ramification sequence.
For f a general point in Hurwitz space, this expected codimension u(e) is correct: this was proved independently in [Lar21a] and [CPJ22a, CPJ22b]; it is the analog over Hurwitz space of the Brill-Noether theorem. Furthermore, W^e(f) is irreducible [LLV20]. From this, one can classify the irreducible components of G^r_d(C) for a general k-gonal curve C, by first classifying all maximal splitting types corresponding to d and r; see [Lar21a, Corollary 1.3] or [CPJ22a, §2.2].
A.6. An analog of Question 1.1. This construction can be relativized, to obtain a moduli space W^e_g → H_{k,g}, where H_{k,g} is the Hurwitz space. We may now ask the analog of Question 1.1 for Hurwitz space.
Question A.4. For which g, e does W^e_g → H_{k,g} have a component of relative dimension g − u(e) and generic fiber dimension max{0, g − u(e)}?
The main theorems of Hurwitz-Brill-Noether theory imply that, in all cases where g > u(e), there is a unique dimensionally proper component. This leaves the non-surjective case g < u(e).
I believe it may be productive to bring the tools of this paper to bear on Question A.4, but there are a number of details that will require care. In particular, I believe that the machinery of threshold genera of skew shapes may be a useful tool, but that it should be necessary to restrict all partitions to be k-core partitions. Hopefully, a subadditivity theorem, analogous to Theorem 5.1, holds for k-core partitions; the proof of such a theorem might be possible by adapting one of the smoothing techniques developed so far [JR21, Lar21a, LLV20]. Finally, I will remark that elliptic chains are unlikely to be useful in approaching Question A.4. This is because the elliptic chains considered in [LLV20] and elsewhere already have k-torsion on each curve in the chain, so they cannot specialize further, yet they already behave like general points in Hurwitz space. Novel ideas are needed.
"year": 2022,
"sha1": "bceaf07eda02f4d9999986654868ce21e09588a6",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "bceaf07eda02f4d9999986654868ce21e09588a6",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
246497806 | pes2o/s2orc | v3-fos-license | Benchmark dataset for multi depot vehicle routing problem with road capacity and damage road consideration for humanitarian operation in critical supply delivery
The dataset for the Multi Depot Dynamic Vehicle Routing Problem with Stochastic Road Capacity (MDDVRPSRC) is presented in this paper. The data consist of 10 independent designs of evolving road networks ranging from 14 to 49 nodes. Accompanying the road networks are the Damage Files (DFs) for each corresponding road network. A DF simulates the damage level of the roads within a network due to a disaster source, thus affecting travel time and road capacity. We applied this data to test our proposed algorithm and validate our proposed model. This dataset serves as an addition to the Vehicle Routing Problem (VRP) datasets that specifically addresses the road capacity problem during a disaster from an epicentre, and it can be used for other applications involving chaotic events and compromised road networks.
Specifications Table

Subject: Applied Mathematics
Specific subject area: Vehicle Routing Problem in Operations Research
Type of data: Table, Image, Network Figure
How data were acquired: All test instances presented are simulated, and the respective Damage Files (DFs) are generated based on these instances. This simulated dataset is inspired by and derived from the 2015 Nepal Earthquake, with the information gathered from news reports, independent reports, and scholarly articles. Additionally, the geographical map of Nepal and the epicentres of the earthquakes are referred to when generating a concept instance. From the concept instance, other instances are developed with varying degrees of complexity to allow for sensitivity analysis. A related research work [10] served as a general guide in developing the test instances. Other relevant information regarding humanitarian operations in regard to VRP is also referred to ([1]).
Data format: Raw
Parameters for data collection: Parameters such as node placements and the number of special nodes, as well as the road network, are purposely varied in ways that allow for sensitivity analysis. Some parameters are also derived from assumptions made for the model of the problem in deriving costs and travel times ([2]).
Description of data collection: A simulated road network in each instance is designed and developed based on observing the challenges reported during the event. The development of the networks is driven by the objective of highlighting these challenges, such that different scenarios can be simulated by ranging the instances from simple to highly complex in terms of computational effort. Assumptions were made when placing the nodes and the edges/roads and when determining the road capacities. Following graph theory, the road networks are designed as undirected, incomplete, connected graphs to represent more realistic road networks. Furthermore, the earthquake tremor is assumed to disperse radially, with the radii chosen based on the Fibonacci sequence. The damage level of an edge is assumed to be correlated with the number of intersections observed between the radial tremor lines and the edge.
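The damage rule described above (damage correlated with the number of intersections between radial tremor circles and an edge) can be sketched in a few lines of Python. This is an illustrative reconstruction of the idea, not the authors' exact Algorithm 1 from [2]; the radii follow the Fibonacci sequence as stated:

```python
import math

def circle_segment_intersections(cx, cy, r, p, q):
    """Count intersection points of the circle (center (cx, cy), radius r)
    with the closed segment p-q, by solving |p + t(q - p) - c|^2 = r^2."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    fx, fy = p[0] - cx, p[1] - cy
    a = dx * dx + dy * dy
    if a == 0:           # degenerate segment
        return 0
    b = 2 * (fx * dx + fy * dy)
    c = fx * fx + fy * fy - r * r
    disc = b * b - 4 * a * c
    if disc < 0:
        return 0
    roots = {(-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)}
    return sum(1 for t in roots if 0 <= t <= 1)

def edge_damage(epicentre, p, q, n_rings=5):
    """Damage level of edge p-q: total intersections with concentric tremor
    circles around the epicentre, radii taken from the Fibonacci sequence
    (starting 1, 2, 3, 5, ... to avoid a duplicated ring of radius 1)."""
    radii, x, y = [], 1, 2
    for _ in range(n_rings):
        radii.append(x)
        x, y = y, x + y
    return sum(circle_segment_intersections(*epicentre, r, p, q) for r in radii)
```

An edge lying farther from the epicentre crosses fewer tremor rings and therefore accumulates less damage.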
Value of the Data
• The presented data comprises the first problem instances that assign road damages specifically to each road in the road network, based on simulated earthquake tremor lines from an epicentre, together with a road capacity in each problem instance. Through this data, unlimited sets of simulated data can be generated due to the random road capacity, as observed in the work [2]. With minimal focus on road capacity and road damage in VRP research, a general and standardized test dataset that addresses this problem is vital in order to compare and benchmark a proposed solution algorithm.
• Interested researchers who look into VRP, specifically the road capacity and road damage problem in times of disaster, can benefit from this data when developing a model with multiple depots and multiple customers/shelters/connecting nodes in a chaotic environment.
• The presented data is fully reusable and customisable for further insights and developments at every stage of the experiment, through which different sets of simulated data can be generated:
1. The test instances could be configured further for deeper insights by expanding selected networks into more complex versions of the originals, or by reducing the networks to observe the basic operation of the intended model. The vehicle number of the heterogeneous or homogeneous fleet, as well as the respective vehicle capacities, could be manually varied to create different test instances based on the applied vehicle number.
2. Unlike typical test instances, the damage level for each road/edge is included in the data (DF), computed by the method described in [2]. This DF is generated by a Python program, as explained in [2].
3. The dataset is presented in a way that allows flexibility in adjusting node coordinates and road capacities for sensitivity analysis. Furthermore, one test instance is adequate as a reference for others to design their own road network. Moreover, the test data can be used independently, without the earthquake-tremor "Damage File", if so needed.
4. The data is developed for the application of delivering medical supplies during a post-earthquake disaster event. However, this data could also be used for other scenarios where road capacity and damaged road conditions are concerned due to an event triggered at a single coordinate. For example, scenarios such as a bomb evacuation, a mass outbreak during a pandemic, or a huge concert that leads to congestion affecting road capacities could be simulated while computing efficient vehicle routing.
5. As opposed to test instances derived from real geographical locations and real road networks, this dataset could serve as a basic dataset allowing freedom in designing networks that highlight difficult aspects of a specific VRP, to ensure a more robust solution algorithm and model development.
Data Description
This dataset, first applied for MDDVRPSRC ([2]), provides the following road network characteristics:
• Multiple depot nodes represent multi-depot problems.
• Multiple shelter nodes represent demand locations with different demands.
• Connecting nodes represent junction points within the road networks.
• Edges represent roads within the road networks, each with its respective capacity. The roads are divided into three types: (1) highways with the highest road capacity, (2) normal roads with a medium road capacity, and (3) city roads with the least road capacity.
• An epicentre of an earthquake that spreads the tremor outward radially, affecting road conditions in terms of road capacity and travel time.
Among the parameters listed are the maximum capacity of vehicles after replenishment at a depot and the road network in the form of a graph G. For the stochastic and dynamic road capacity addressed by [2], this data provides the initial road capacities of the network and the damage units each road sustained due to the tremor of the earthquake.
This presented data applies basic parameters of MDDVRPSRC detailed in Table 1 .
The presented data, accessible in the repository mentioned in the table (Data Specifications) above, consists of four main files together with a standard open-software license ("LICENSE.txt"). These files are listed below:
"dataC"
• Here, the coordinate of node i (given as (i_x, i_y)) in the simulated road network is specified on the Euclidean map.
• Furthermore, the demand w_i for each node is also specified.
3. "RoadCap"
• In this Excel sheet, the deterministic road capacity r_{i,j} is mapped, where the nodes i and j are represented by the rows and columns, respectively, such that row i_c = i + 1 and column j_c = j + 1 in the matrix.
4. "DemandData"
• Lists the demand of each node in increasing order of nodes.
5. "network"
• Attaches the road network as displayed on the Python canvas generated in the work [2].
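The row/column offset convention of the "RoadCap" sheet (row i_c = i + 1, column j_c = j + 1, with node ids in the header row and column) can be parsed in a few lines of Python once the sheet is exported as a list of lists. This is an illustrative sketch based on the description above, not the dataset's own loader:

```python
def road_capacities(sheet):
    """sheet[0] holds the column headers (node ids); sheet[r][0] holds the
    row headers. Cell (i_c, j_c) = (i + 1, j + 1) stores the capacity
    r_{i,j}. Zero or empty cells mean 'no edge'; the graph is undirected,
    so each capacity is stored under both orientations of the pair."""
    caps = {}
    for i_c in range(1, len(sheet)):
        for j_c in range(1, len(sheet[i_c])):
            cap = sheet[i_c][j_c]
            if cap:
                i, j = sheet[i_c][0], sheet[0][j_c]
                caps[(i, j)] = caps[(j, i)] = cap
    return caps
```

For a three-node example with edges (1, 2) and (2, 3), the resulting dictionary contains both orientations of each edge and omits the missing pair (1, 3).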
Moreover, each test instance is accompanied by its respective DF, which can be found within a folder denoted "LOAD_DAMAGE NETWORK" by extracting the second file in the repository (LOAD_DAMAGE NETWORK.zip). Each DF Excel file consists of two sheets:
1. "damages"
• This Excel sheet maps all edges (i, j) ∈ E from the respective test instance to the associated damage p_{i,j} sustained by each edge.
2. "epicentre"
• Here, the coordinate of the epicentre is given, from which the simulated earthquake tremor lines are generated, thus causing damage to the roads/edges as specified.
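How the damage units p_{i,j} feed back into the routing inputs is specified in [2] (its Equation 13 covers travel time); as a purely illustrative stand-in, the sketch below applies a damage value to a capacity and a travel time with a linear penalty. Both the floor-at-zero capacity rule and the slowdown factor are assumptions of this sketch, not the model's actual equations:

```python
def apply_damage(cap, time, damage, slowdown_per_unit=0.25):
    """Illustrative only: reduce the road capacity by the damage units
    sustained (floored at zero) and inflate the travel time per damage
    unit. The real MDDVRPSRC formulas are given in [2]; this just shows
    the data flow from the 'damages' sheet of a DF into routing inputs."""
    eff_cap = max(0, cap - damage)
    eff_time = time * (1 + slowdown_per_unit * damage)
    return eff_cap, eff_time
```

A heavily damaged low-capacity road can thus drop to zero effective capacity while its traversal time keeps growing, which is the qualitative behaviour the DF is meant to induce.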
Meanwhile, the data from a test instance and its respective DF are loaded when the Python file "LOAD_INTS_DAMG_V5.py" is executed. This loading step can form the initial part of developing VRP models in Python that address problems involving multiple depots, road capacity, and road damage. The flowchart of "LOAD_INTS_DAMG_V5.py" for extracting and processing data from the selected test instance and its respective DF is illustrated in Fig. 1.
Finally, "README TEST_INSTANCE" is the first file to be downloaded and opened from the repository. This file consists of:
1. An appreciation note.
2. An introduction to the test instances and DFs.
3. Authors and contributors, and acknowledgement requests when utilizing this data.
4. A brief overview of the test instances.
5. A brief overview of the DFs.
6. A brief overview of the "LOAD_INTS_DAMG_V5.py" Python file and the output it produces, including the cost and time matrices.
7. Requirements to process the data downloaded from the repository.
8. Instructions on downloading the files and adapting the "LOAD_INTS_DAMG_V5.py" Python file to the local user machine.
Design
The proposed data is derived from information reported during the 2015 Nepal earthquake. Among these reports are the bottleneck problem ([8]), urgent medical supply demands ([11] and [9]), the set-up of temporary shelters and field hospitals due to compromised buildings ([4] and [11]), damage to the road network ([13]), as well as the limited vehicles available for urgent medical supply delivery ([5]).
From these pieces of information, the proposed data in the form of test instances is derived by experimenting with specific approaches that could alleviate the problems. Such approaches include introducing:
• multiple depots to address the bottleneck.
• split delivery to tackle the problem of limited vehicles, together with routing that considers stochastic road capacity and delayed travel times on compromised roads.
• temporary emergency shelters set up near the event (epicentre) of the disaster, where the affected victims are.
• emergency medical supply delivery routing that considers stochastic road capacity and the damage effect on the roads.
The task of dataset development is divided into two parts:
1. Development of the road networks such that the solution approach from the proposed model and solution algorithm [2] can be tested.
2. Incorporation of a damage unit for each road within each road network, due to simulated earthquake tremors, to complete the dataset.
In the earlier phase of development, the road network of Nepal in [6] and the earthquake epicentres in [7] are referred to. From these sources, it is observed that the highways are mainly constructed near the country's border, thus at the outermost part of the road network. The test instances are therefore designed by incorporating:
• city roads, which are located in the innermost part of the road network.
• normal roads, which are located roughly between the city roads and the highways.
• multiple depots, which should be scattered at the outermost part of the road network, where the highways are located.
Furthermore, based on [7], it is observed that the earthquake epicentres are located in the inner part of Nepal. The test instances are then further designed with the following specifications:
• All emergency shelters should be located within the inner part of the road network.
• Python with essential libraries such as Tkinter ([12]) and Networkx ([3]) is used to draw and verify the networks.
Method
Based on the specifications selected in the Design subsection, the following steps are performed to produce the test instances:
1. A basic road network consisting of 3 depot nodes, 3 emergency shelter nodes, and 8 connecting nodes is first designed, with the nodes positioned on a Euclidean map. Their coordinates, along with their numbers and node assignments (depot, shelter, and connecting), are saved in a test instance file. In the test instance, a fixed vehicle number is given, although this is easily reconfigured for manual adjustment when developing the model. The numbers of depot, shelter, and connecting nodes, on the other hand, are fixed according to the specified test instance.
2. Edges are then drawn and listed in the same test instance file with the following considerations:
(a) no direct connection among depots;
(b) the road network is based on an undirected incomplete graph in graph theory, such that the nodes are not fully connected amongst each other and the edges are bidirectional;
(c) no direct connection is allowed between depots and shelters, such that a connecting node must be visited at least once.
3. Highway, normal road, or city road is next assigned to each edge, following the specifications mentioned in the Design subsection.
4. For each type of road, a deterministic road capacity is assigned, and the matrix of road capacities for all pairs of nodes forming an edge is then added to the instance file.
5. Demand for each node is also assigned and added to the test instance. The demand is assigned such that:
(a) for the minimum number of vehicles specified in the experiment (|M| = 4), more than one trip is required;
(b) the minimum demand of a shelter must be more than the fixed capacity of a vehicle (50 units in the experiment) to allow experiments with the split delivery operation;
(c) both depots and connecting nodes have zero demand.
6. The resulting test instance is then applied to the work [2] in Python:
(a) Data extraction from the test instance is inspected for potential errors; the test instance or the Python code is modified accordingly if any error is found.
(b) The cost matrix is automatically computed with the Euclidean distance formula from the node coordinates given in the test instance.
(c) The travel time matrix is computed based on the assumption of a constant speed of 90 km/hour.
(d) The road network data extracted from the test instance is used with the Networkx Python library to recreate the road network in Python based on graph theory:
• the road network is an undirected, incomplete, connected graph G;
• i and j represent nodes that form the edge (i, j) ∈ E;
• H is the set of all nodes in G;
• node s ∈ S ⊂ H is an emergency shelter node, while nodes d ∈ D ⊂ H and n ∈ N ⊂ H represent depot and connecting nodes, respectively.
(e) The network represented by the graph G is then visualized using the Python Tkinter library and compared with the network designed in Excel for any potential errors.
(f) The applicability of the test instance to the work [2] is observed, and any required modification of the network is noted.
7. The process of improving the test instance and applying the improved version in step 6 is repeated until the test instance functions as desired. The resulting test instance is illustrated in Fig. 2.
8. Once the test instance is ready, the corresponding DF is developed:
(a) It is assumed that the earthquake tremors damage some of the roads, thus affecting the road capacity as well as the travel time along the road (for the deterministic, dynamic, and stochastic road capacity problems).
(b) The damage unit values are obtained through the simulated earthquake tremor lines described in Algorithm 1 in [2]. The road capacities computed from these values and the initial road capacities of the test instance in the work [2] can be observed in Fig. 3 at the centre of each respective edge.
(c) Similarly, the travel time along an edge should be longer not only because of the distance (length of the edge) but also because of the damage sustained by the edge. Therefore, the travel time matrix computed in step 6(c) is recomputed to incorporate the damage units sustained by the corresponding edges (Equation 13 in [2]).
(d) The proposed mechanism is advantageous when there is more than one epicentre within the road network for evaluating the condition of the roads.
9. Once the basic road network (DataD3N8S3) is validated, increasingly complex road networks are developed by adding more edges and nodes to the basic road network. The 10 core test instances representing different road networks are listed in Table 2. In the table, the road capacity tuple (6, 7, 8) represents the road capacity of city roads, normal roads, and highways, respectively. Each parameter listed in the table, including the road capacity tuple, could be changed to develop a new test instance.
10. The test instances could be further expanded from the 10 core instances by varying the fixed vehicle numbers for the emergency delivery operation, as is done in [2].
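Steps 6(b) and 6(c) are straightforward to reproduce: the cost matrix uses the Euclidean distance between node coordinates, and the travel time matrix assumes a constant speed of 90 km/hour. A minimal standard-library sketch (not the dataset's own code):

```python
import math

def cost_and_time(coords, speed_kmh=90.0):
    """coords: {node: (x, y)} in km. Returns the Euclidean cost matrix and
    the travel time matrix (in hours) at a constant speed, as dictionaries
    keyed by ordered node pairs, mirroring steps 6(b) and 6(c)."""
    cost, time = {}, {}
    for i, (xi, yi) in coords.items():
        for j, (xj, yj) in coords.items():
            d = math.hypot(xi - xj, yi - yj)
            cost[(i, j)] = d
            time[(i, j)] = d / speed_kmh
    return cost, time
```

Damage-adjusted travel times (Equation 13 in [2]) would then be layered on top of the `time` matrix produced here.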
The test instances derived by following steps 1-10 could also be emulated if raw data is obtained. For example, a case study of an earthquake disaster in a known location, with known numbers of delivery points (shelters), junctions (connecting nodes), and depots/warehouses along with their coordinates, as well as a comprehensive network consisting of different types of roads, could be adapted to the test instances instead of designing simulated networks. In this case, steps 1-5 could be applied directly by replacing the hypothetical numbers with the raw data at hand.
Step 6 can be applied to validate the real test instance, excluding steps 6(b) and 6(c) if the raw data already includes cost and travel time data for each edge or road. Additionally, if the raw data also includes damage measurements for each road in the network, then step 8(b) could be excluded when designing the test instance based on the raw data. Furthermore, the epicentre coordinate is then not needed, as it is only required to simulate the earthquake tremor lines.
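The structural rules from steps 2 and 5 are mechanical enough to check automatically when validating an instance, whether simulated or built from raw data. The helper below is hypothetical (its names are not part of the dataset's loader) and simply encodes those rules:

```python
def validate_instance(depots, shelters, edges, demand, vehicle_cap=50):
    """Check the structural rules used when designing a test instance:
    no depot-depot or depot-shelter edges, zero demand at depots and
    connecting nodes, and every shelter demand above the vehicle capacity
    (so split delivery is exercised). Returns a list of violations."""
    depots, shelters = set(depots), set(shelters)
    errors = []
    for i, j in edges:
        if i in depots and j in depots:
            errors.append(f"depot-depot edge {(i, j)}")
        if (i in depots and j in shelters) or (j in depots and i in shelters):
            errors.append(f"direct depot-shelter edge {(i, j)}")
    for node, w in demand.items():
        if node in shelters:
            if w <= vehicle_cap:
                errors.append(f"shelter {node} demand {w} <= vehicle capacity")
        elif w != 0:
            errors.append(f"non-shelter node {node} has demand {w}")
    return errors
```

A well-formed mini-instance (depot 1, connecting node 2, shelter 3 with demand 120) passes, while a direct depot-shelter edge or a shelter demand below 50 is flagged.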
Once the test instance and the corresponding DF are produced, more complex hypothetical test instances could be developed by altering the raw data. The resulting test data would then have the advantage of being based on raw data from an existing topography. Furthermore, projected plans, such as building future depots, could be applied on top of these raw data to simulate practical hypothetical scenarios.
Despite the benefit of incorporating raw data when developing test instances and DFs, the methodology provided in this section allows more freedom in designing any networks required for experimentation, which could be very useful for education and planning. The theoretical mathematical model of a VRP such as MDDVRPSRC, and the solution approach to the problem, could be validated at any degree of setup for further insights and developments. | 2022-02-04T16:03:05.052Z | 2022-02-01T00:00:00.000 | {
"year": 2022,
"sha1": "03250a66e09dac2166551cd54d08c3ae011844d4",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.dib.2022.107901",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0ef82b648a30d595b31141a19b23efbea5604528",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
265422544 | pes2o/s2orc | v3-fos-license | A Case Report on Senear-Usher Syndrome
Pemphigus erythematosus is an uncommon autoimmune bullous skin disorder with clinical, histological, and serological characteristics that overlap with lupus erythematosus and pemphigus foliaceus. The autoantigens are desmoglein 3, desmoglein 1, and desmosomal adhesion proteins in keratinocytes. When these bonds are disrupted, it causes acantholysis of keratinocytes, leading to fluid collection between layers. Hence, the patient presents clinically with small flaccid bullae with crusting and scaling, mainly on the seborrheic areas. We report the case of a 21-year-old female presenting to us with multiple hyperkeratotic plaques, mainly on the seborrheic areas, including the face, chest, and elbows. The patient was evaluated further, and based on clinical and laboratory investigations, the diagnosis of pemphigus erythematosus associated with anti-double-stranded deoxyribonucleic acid (anti-dsDNA) and anti-nuclear antibody (ANA) positivity was made. The patient was then managed using immunosuppressant therapy, and the entire course has been detailed in this case report.
Introduction
Pemphigus has four different clinical subtypes: pemphigus foliaceus, pemphigus vegetans, pemphigus vulgaris, and pemphigus erythematosus [1]. Cutaneous-limited lupus erythematosus (LE), intermediate LE, and systemic LE (SLE) are the three subtypes of LE according to symptom-based diagnostic classification. Localized (involving the malar regions) and generalized (relating to the broad morbilliform eruption) are terms used to characterize acute cutaneous LE lesions [2].
A study on "An Unusual Type of Pemphigus" was published in 1926 by Senear and Usher. Pemphigus erythematosus is an uncommon autoimmune bullous skin disorder that overlaps with LE and pemphigus foliaceus. The pemphigus lesions are flaccid bullae that rapidly rupture and turn into regions of crusted, oozing dermatitis or inflammatory papules with a thick, greasy, or even keratotic scale and crust over the trunk, mainly in the seborrheic areas. It is characterized histologically by acantholysis. Usually, the lesions involute on their own and leave behind dark-coloured patches [3].
Exposure to sunlight may worsen the condition. Pemphigus erythematosus typically has positive direct immunofluorescence (DIF) along the dermal-epidermal junction (DEJ), reminiscent of lupus. It may have positive circulating anti-nuclear antibodies (ANA) in 30-80% of patients, even though the histopathological and clinical findings are similar to those of pemphigus foliaceus. Pemphigus erythematosus cases have been associated with anti-double-stranded deoxyribonucleic acid (anti-dsDNA), anti-Smith, anti-Ro, anti-Sjögren's-syndrome-related antigen A (anti-SSA), and anti-ribonucleoprotein (anti-RNP) antibodies. Very few cases of pemphigus erythematosus have been related to anti-dsDNA antibodies, a highly specific SLE marker [4]. Treatment may include steroids such as prednisolone, starting from a dose of 1 mg/kg/day and then tapered gradually, azathioprine, and other immunosuppressants [5]. There are many newer agents available, including biologicals. Rituximab, a monoclonal, chimeric anti-CD20 antibody, has been proven to have excellent efficacy, which helps in avoiding the side effects of corticosteroids. Tumor necrosis factor alpha (TNF-α) inhibitors can be used in recalcitrant pemphigus cases [6].
Case Presentation
The case involves a 21-year-old woman residing in a rural area of Maharashtra, India. The patient came with chief complaints of multiple painful oral ulcers associated with difficulty in eating and chewing. One month later, she also developed many crusted plaques with an erythematous base, first appearing over the malar region of the face and the ears, followed by similar lesions on the elbows, back, palms, and soles (Figure 1). There was occasional blood-stained discharge from the lesions. There was evidence of crusting on the eyelids as well. She had a low-grade fever and significant joint pain, limiting her daily activities. Cutaneous examination showed multiple hyperkeratotic plaques and crusts over an erythematous base on the malar region, eyelids, and ears, and multiple hyperpigmented plaques over the bilateral elbows and back, with involvement of the palms and soles (Figure 2). Her complete blood count was within normal limits except for low hemoglobin (8.5 g/dl). Her routine biochemistry testing, including liver and renal function tests, was within normal limits. The erythrocyte sedimentation rate (ESR) was raised (60 mm/hr), and ANA (6.311) and anti-dsDNA (1.28) by enzyme-linked immunosorbent assay (ELISA) were positive. DIF could not be done due to a lack of facilities. After admitting the patient, she was started on intravenous dexamethasone 8 mg once daily for seven days, later replaced by oral prednisolone at a dose of 1 mg/kg/day, which was then tapered to 0.5 mg/kg/day. The parenteral antibiotic amoxicillin-clavulanic acid 1.2 g twice daily was instituted for five days. The antimalarial drug hydroxychloroquine sulfate 300 mg was also prescribed. Dressing with normal saline-soaked gauze was done twice daily. For the discharge on the eyelids, dressing along with antibiotic eye drops was advised. After 15 days, the patient showed significant symptomatic improvement and was
discharged on treatment and regularly followed up. Daily dressing and crust removal showed underlying ulcer formation, eventually healing with scarring and hyperpigmentation (Figure 4).
Discussion
Amerian and Ahmed reported three male and one female pemphigus erythematosus cases. All four individuals had immunoglobulin and/or complement deposition at the DEJ on DIF examination of the perilesional skin. ANA testing was positive for all four patients. Different combinations of oral corticosteroids, topical corticosteroids, dapsone, and oral treatment were used to treat the patients. After four to 10 months of follow-up (mean, six months), three patients were in complete remission, and one was in incomplete remission. He concluded that for complete remission, patients with pemphigus erythematosus need considerably lower dosages of systemic corticosteroids [1].
Scheinfeld et al. discussed the case of a 44-year-old African-American female who presented with generalized skin eruptions. She was treated with oral prednisolone 80 mg once daily along with weekly intramuscular injections of gold 50 mg and showed dramatic improvement. Chief cutaneous findings of pemphigus erythematosus, along with positive deposits of IgG, IgM, and C3 in an intercellular pattern and along the DEJ, were suggestive of the diagnosis. Histologically, suprabasal acantholysis may be present (as it is in pemphigus foliaceus) [4].
Hobbs et al. proposed that pemphigus erythematosus is an autoimmune bullous disease having overlapping features of pemphigus foliaceus and LE. The classic presentation of pemphigus erythematosus is characterized by the overlapping features of SLE (DIF) and pemphigus foliaceus, including the deposition of multiple immunoreactants in a granular pattern (analogous to that seen in the lupus band) and the typical sub-corneal acantholysis present in pemphigus foliaceus [5].
Dick and Werth described various treatment options in patients with pemphigus. Although corticosteroids have significantly reduced mortality, morbidity is still present with corticosteroid treatment. Newer treatment options like biological and steroid-sparing therapies appear promising. Biologic agents like rituximab, a chimeric monoclonal anti-CD20 antibody targeting pre-B cells and mature B cells, prevent the formation of antibody-producing plasma cells. Infliximab, adalimumab, and etanercept (TNF-α inhibitors) have also been tried, but limited data is available. Patients receiving biologicals could taper their steroid dose and immunosuppressive agents [6].
Lyde et al. discussed in detail a case of pemphigus erythematosus in a 5-year-old female who presented, two days after a dental procedure, with blistering eruptions over her face, trunk, and extremities, associated with generalized erythema with scaling, erosions, and intact bullae over the abdomen and extremities. After a positive ANA and thorough investigations, she was treated with prednisolone 1 mg/kg/day and dapsone 50 mg/day and showed significant improvement [7].
According to Malik and Ahmed, eight patients with dual diagnoses showed no discernible SLE organ system involvement pattern. All had an immunologic disorder and a positive ANA; 75% had a haematological disease; 50% had renal involvement, photosensitivity, or a malar rash; and 38% exhibited signs of a neurological disorder [8]. Melchionda and Harman discussed an overview of cases of pemphigus, suggesting that tissue biopsy for histopathology and DIF remains the gold standard for diagnosis. A biopsy should be taken from a fresh, intact blister, including the edge. The role of indirect immunofluorescence (IIF) and ELISA is complementary [9]. Kasperkiewicz et al. pointed out a novel B-cell-depleting agent, veltuzumab. It is a monoclonal humanized anti-CD20 antibody that has shown promising results in patients refractory to other modalities. A single dose of subcutaneous veltuzumab resulted in complete clinical remission. Integrated treatment measures, including quality-of-life assessment in the clinical evaluation of patients, help in achieving better treatment outcomes and modalities [10].
A descriptive table listing various cases of pemphigus erythematosus is given below (Table 1).
Conclusions
Pemphigus erythematosus is a clinical overlap of pemphigus foliaceus and LE. The diagnosis of pemphigus erythematosus can be challenging, and it involves a combination of various diagnostic methods. In addition to histopathological examination, various serological investigations, including routine blood tests and specific tests like ANA and anti-dsDNA, can aid in the diagnosis. However, a positive ANA is only detected in 30-80% of patients. It is imperative to reach a diagnosis in the case of pemphigus erythematosus, as pemphigus foliaceus affects only the skin, whereas LE has systemic involvement. Every dermatologist must be alert when they come across a patient with pemphigus foliaceus lesions on the malar area to diagnose pemphigus erythematosus, since SLE, having systemic involvement, can be potentially life-threatening.
FIGURE 1: Multiple erosions present on seborrheic areas of the face topped with crusts
FIGURE 2: Multiple erythematous to violaceous macules and patches are present over bilateral palms. Few lesions are topped with scaling and crust
FIGURE 3: H&E staining under 10x magnification shows intraepidermal cleft (green arrow) with epidermal thinning with epidermal necrosis (black arrow) and vacuolar degeneration of the basal layer (blue circle) with congestion of the superficial dermal vasculature. H&E: hematoxylin and eosin
FIGURE 4: Lesions heal with ulcer formation at four weeks (Figure 4A) which eventually healed with scarring at eight weeks and hyperpigmentation (Figure 4B)
Suppression of a single pair of mushroom body output neurons in Drosophila triggers aversive associations
Memory includes the processes of acquisition, consolidation and retrieval. In the study of aversive olfactory memory in Drosophila melanogaster, flies are first exposed to an odor (conditioned stimulus, CS+) that is associated with an electric shock (unconditioned stimulus, US), then to another odor (CS−) without the US, before allowing the flies to choose to avoid one of the two odors. The center for memory formation is the mushroom body which consists of Kenyon cells (KCs), dopaminergic neurons (DANs) and mushroom body output neurons (MBONs). However, the roles of individual neurons are not fully understood. We focused on the role of a single pair of GABAergic neurons (MBON‐γ1pedc) and found that it could inhibit the effects of DANs, resulting in the suppression of aversive memory acquisition during the CS− odor presentation, but not during the CS+ odor presentation. We propose that MBON‐γ1pedc suppresses the DAN‐dependent effect that can convey the aversive US during the CS− odor presentation, and thereby prevents an insignificant stimulus from becoming an aversive US.
Pavlovian classical conditioning, in which the conditioned stimulus (CS) is associated with the unconditioned stimulus (US), serves as a simple model for learning and memory. There are many kinds of stimuli in the environment, and organisms have evolved to select the stimuli that are used as the US in conditioning paradigms, enabling them to survive and thrive. The olfactory aversive memory of Drosophila melanogaster serves as a good example of Pavlovian classical conditioning [1,2], and several distinct stimuli can be used as the US in Drosophila [1][2][3][4][5][6]. However, the mechanisms by which Drosophila select the US or tune the threshold for accepting a stimulus as the US remain largely unknown.
The neuropil called the mushroom body (MB) has been extensively studied anatomically [7][8][9] and functionally as the center for the olfactory aversive memory [10][11][12][13]. The MB consists of ~2000 intrinsic neurons called Kenyon cells (KCs), which are the third-order olfactory neurons in each hemisphere [8]. Subsets of KCs sparsely represent odor information [14][15][16][17], and the information is modified by aversive stimuli conveyed by dopaminergic neurons (DANs) upon conditioning [12,[18][19][20]. The modified information then converges on MB output neurons (MBONs) [9,21]. Cellular identification of MBONs has been an intriguing result of recent studies of brain anatomy [22,23]. It has been revealed that odor information encoded in ~2000 KCs converges on only 34 MBONs composed of 21 anatomically distinct cell types [9]. This finding permits the study of neuronal mechanisms underlying odor coding and olfactory memory formation in the reduced dimension at the level of fourth-order olfactory neurons. Specifically, it allows us not only to identify each output neuron at a cellular resolution but also to manipulate the functions of each output neuron using split-Gal4 drivers [9,24]. We have already started to witness the progress in understanding the roles of MBONs in the process of memory formation [9,[21][22][23]25,26]. In addition, the DAN activity has been shown to be dynamically changed by external stimuli or internal physiological states [27][28][29], and the output from MBONs is also known to affect the DAN activity [28], suggesting that the circuits consisting of KCs, DANs and MBONs form dynamic neuronal networks including multiple layers of feedforward and feedback regulation.
We chose to study the role of MBON-γ1pedc because it has been reported to play a pivotal role in aversive memory [24,26] and because a memory trace is detected in its responses to odors associated with electric shocks or activation of DANs [26,30]. In addition, MBON-γ1pedc reflects internal and physiological states of flies and inhibits activities of other MBONs [26]. These results prompted us to explore the possibility that MBON-γ1pedc plays multiple roles in memory formation, and we found that MBON-γ1pedc was required for the acquisition of memory. Furthermore, during memory formation, MBON-γ1pedc suppresses the acquisition of aversive memory for CS− but not for CS+.
Setting for behavioral experiments
Groups of ~50 flies (2-5 days old) raised under a 12 hr:12 hr light-dark cycle were used for one trial in behavioral experiments. Before behavior experiments, flies were kept in vials with Kimwipes soaked with sucrose solution. The training and test apparatus were the same as described previously [2], and protocols were slightly modified. Flies were exposed to 60 s of a CS+ odor (MCH or OCT) with twelve 90 V electric shocks at a 5 s interstimulus interval, then 30 s of clean air, followed by the CS− odor (OCT or MCH) without electric shocks. After the training stage, flies were allowed to select the CS+ odor or the CS− odor in a T-maze at the test stage. Odors were in a glass 'odor cup' (8 mm in diameter for OCT and 10 mm for MCH) sitting in the middle of an odor stream. The flow velocities of air or odors were 0.75 L·min−1 in each stage.
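The timing of the training protocol above can be summarized as a simple schedule. The following is an illustrative Python sketch only; the stage names and variable names are ours, paraphrased from the text, not taken from any published analysis code.

```python
# Conditioning timeline paraphrased from the protocol above (hypothetical
# representation): 60 s CS+ with twelve 90 V shocks at a 5 s inter-stimulus
# interval, 30 s of clean air, then 60 s CS- without shocks.
N_SHOCKS = 12            # twelve 90 V electric shocks
SHOCK_INTERVAL_S = 5     # 5 s inter-stimulus interval

training_schedule = [
    ("CS+ odor with electric shocks", 60),
    ("clean air", 30),
    ("CS- odor, no shocks", 60),
]

total_training_s = sum(duration for _, duration in training_schedule)
print(total_training_s)  # 150
```

The shock train (12 shocks at 5 s intervals) fits exactly within the 60 s CS+ presentation, after which flies proceed to the T-maze choice at the test stage.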
Temperature shifting
To shift temperature between the permissive temperature (22°C) and the restrictive temperature (33°C), we used two climate boxes set to 22°C or 33°C, and all of the training tubes and T-mazes were preheated and fixed at the indicated temperatures. Temperature shifts were performed immediately. After the transfer, flies were left in a tube with airflow at the indicated temperature.
Temperature shifting between training and test
Flies were preheated at 33°C for 30 min and then trained and tested at 33°C (Fig. 1D). Flies were trained and tested at 22°C (Fig. 1E).
Temperature shift during CS+ and CS− presentation
Flies were trained with the CS+ presentation at 22°C and immediately transferred to 33°C, followed by 2 min air flow, and the CS− was presented at 33°C, then immediately retransferred to 22°C, followed by 2 min air flow and testing at 22°C (…). Flies were trained with the CS+ presentation at 22°C and immediately transferred to 33°C, followed by 3 min air flow at 33°C, and they were then immediately retransferred to 22°C, followed by 2 min air flow, the CS− presentation at 22°C and testing at 22°C (Fig. 2C). Flies were preheated at 33°C for 30 min, trained with the CS− presentation at 33°C, and then immediately transferred to 22°C, followed by 2 min air flow and the CS+ presentation at 22°C, with testing at 22°C (Fig. 2D).
Blockade of MBON-γ1pedc-induced aversive memory (BGAM) training and test
Flies were exposed to odor 1 for 60 s at 22°C and immediately transferred to 33°C, followed by 2 min air flow, exposure to odor 2 for 60 s at 33°C, and then immediate retransfer to 22°C, followed by 2 min air flow and testing at 22°C (Figs 3B, 4F, 5A and 6B,C).
Test stage
Flies were loaded into the T-maze and allowed to choose between MCH and OCT for 1.5 min. The performance index was calculated as the number of flies avoiding the CS+ odor (or the odor presented at 33°C for BGAM) minus the number of flies on the other side, divided by the total number of flies. Flies were reciprocally trained with MCH or OCT. Control odors (OCT or MCH) were also presented, and two performance indices were calculated, one for MCH and one for OCT. The final performance index was calculated by averaging the two performance indices for MCH and for OCT.
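The performance index described above reduces to a short computation. Here is a minimal Python sketch of that formula (the function names and example counts are ours, for illustration):

```python
def performance_index(n_avoiding_cs_plus, n_other_side):
    """PI = (flies avoiding the CS+ arm - flies on the other side) / total flies."""
    total = n_avoiding_cs_plus + n_other_side
    return (n_avoiding_cs_plus - n_other_side) / total

def final_performance_index(pi_mch_as_cs_plus, pi_oct_as_cs_plus):
    """Average of the two reciprocally trained groups (CS+ = MCH and CS+ = OCT)."""
    return (pi_mch_as_cs_plus + pi_oct_as_cs_plus) / 2

# Hypothetical counts: 40 of 50 flies avoid the CS+ arm in one group,
# 35 of 50 in the reciprocal group.
pi_mch = performance_index(40, 10)  # 0.6
pi_oct = performance_index(35, 15)  # 0.4
print(final_performance_index(pi_mch, pi_oct))  # 0.5
```

Averaging over reciprocal odor assignments cancels any innate bias toward MCH or OCT, so the final index reflects learned avoidance rather than odor preference.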
Confocal imaging
Flies were dissected in cold phosphate-buffered saline (PBS) solution and fixed in PBT (PBS containing 0.3% Triton X-100) with 4% formaldehyde for 30 min at room temperature. After PBT washing, PBT was replaced with PBS, and brains were placed between a glass slide and a cover glass with medium (VECTASHIELD Mounting Medium, Vector Laboratories, Burlingame, CA, USA). Images were captured on an LSM 710 confocal microscope (Carl Zeiss, Jena, Germany) and brightness was linearly processed using FIJI software (http://fiji.sc/Fiji).
Statistical analysis
We performed statistical analyses using PRISM 6 (GraphPad, La Jolla, CA, USA). All behavior data were tested
MBON-γ1pedc is required for both acquisition and retrieval of aversive short-term memory
We first examined the role of MBON-γ1pedc in 2 min short-term memory (STM). We used an MBON-γ1pedc-specific split-Gal4 driver, MB112C (Fig. 1B,C) [9,24], to express a temperature-sensitive dominant-negative form of dynamin, Shi^ts [32,35], and block output from MBON-γ1pedc. Flies were exposed to an odor, 4-methylcyclohexanol (MCH) or 3-octanol (OCT), paired with 12 electric shocks for 1 min (CS+), followed by OCT (or MCH) without electric shocks for 1 min (CS−) (Fig. 1A). Two minutes later, flies were allowed to select one of the two odors to avoid. Flies were trained and tested at a restrictive temperature (33°C) (Fig. 1D) or at a permissive temperature (22°C) (Fig. 1E) throughout the experiments. Blocking MBON-γ1pedc severely impaired STM (Fig. 1D), demonstrating that MBON-γ1pedc output is indispensable for STM, as has been reported for 2 h memory [24]. To clarify whether this STM deficit was caused by impairment of memory acquisition or retrieval, we blocked output from MBON-γ1pedc during the acquisition stage or the retrieval stage. STM was impaired by blockade of MBON-γ1pedc either during the training stage (Fig. 1F) or during the test stage (Fig. 1G). These results suggest that MBON-γ1pedc is required for aversive memory acquisition in addition to aversive memory retrieval [26].
MBON-γ1pedc synaptic output is necessary to inhibit aversive memory acquisition for CS−
For further analysis of MBON-γ1pedc in memory acquisition, we blocked MBON-γ1pedc separately during the CS+ or CS− presentation. Interestingly, blocking MBON-γ1pedc during the CS− presentation impaired memory significantly (Fig. 2A), whereas blocking MBON-γ1pedc during the CS+ presentation (Fig. 2B) or immediately after the CS+ presentation (Fig. 2C) did not. CS− presentation at 33°C followed by CS+ presentation at 22°C also showed the memory deficit (Fig. 2D). These results indicate that aversive memory acquisition requires MBON-γ1pedc output during the presentation of the CS− odor (Fig. 2A,D). Blockade of MBON-γ1pedc during the CS− presentation may interfere with aversive memory acquired during the CS+ presentation.
Considering the possibility that blocking MBON-γ1pedc during the CS− presentation alone may form an aversive memory for the CS− odor, and that competition of the aversive memory between the CS+ odor and the CS− odor might cause the memory deficit, flies were exposed to two odors in sequence, followed by the test stage. One odor was presented at the permissive temperature, and the other at the restrictive temperature to block MBON-γ1pedc (Fig. 3A). We found that flies formed aversive memory toward the odors presented without synaptic output from MBON-γ1pedc (Fig. 3B). We named this BGAM, blockade of MBON-γ1pedc-induced aversive memory. In BGAM acquisition, a control odor is presented before the temperature shift. We investigated the possibility that some type of memory could be formed for the control odor by temperature shifting immediately after the presentation of the odor, since timing-dependent behavioral plasticity has been reported [36]. To test this possibility, only a single odor was presented at the restrictive temperature in the training session, and the control odor was not presented. As a result, BGAM was also observed in this training session, regardless of the presentation of the control odor at the permissive temperature (Fig. 3C). These results indicate that the memory deficit evoked by blocking MBON-γ1pedc at the acquisition stage (Fig. 1F) is caused at least in part by competition between the aversive memory for CS+ and the BGAM for CS−. Thus, output from MBON-γ1pedc is necessary to prevent the aversive memory for CS−.

MBON-γ1pedc, but not the other neurons that could also be labeled by the MB112C driver, is responsible for aversive memory acquisition and BGAM

In the above experiments, MB112C was used as the specific driver to label MBON-γ1pedc. Although MBON-γ1pedc seemed to be the only neurons labeled by the MB112C driver, according to the confocal images, a few other neurons may be labeled by MB112C (Fig. 4A,A').
To test if MBON-γ1pedc, but not the other neurons, is responsible for aversive memory acquisition and BGAM, R83A12 was used as another driver to examine the role of MBON-γ1pedc (Fig. 4B,B'). The blockade of the R83A12-positive neurons by Shi^ts impaired STM acquisition (Fig. 4C). Output from the R83A12-positive neurons was necessary during the CS− presentation (Fig. 4D), but not during the CS+ presentation (Fig. 4E). Furthermore, BGAM was also observed by blocking the R83A12-positive neurons during odor presentation (Fig. 4F). These results indicate that the neurons responsible for STM acquisition and BGAM formation were likely to be MBON-γ1pedc, and not the other neurons that could potentially be labeled by the drivers.
DANs are required for the BGAM acquisition
MBONs and DANs constitute microcircuits [7,9], and DANs transmit various kinds of aversive information [12,[37][38][39]. Thus, we tested whether DANs are involved in the BGAM acquisition. To label DANs, we used tyrosine-hydroxylase (TH) Gal4 (TH-Gal4) [31], which is thought to label most DANs that convey aversive information [18,40]. Flies lacking synaptic outputs from both MBON-γ1pedc and DANs showed severe impairment of the BGAM (Fig. 5A), suggesting that the BGAM is made only when MBON-γ1pedc is inactive and DANs are active. Activation of DANs during the CS− presentation might be the cause of the memory deficit observed when blocking MBON-γ1pedc during the acquisition stage (Fig. 1F). To test this possibility, we expressed Shi^ts in MBON-γ1pedc and DANs, and performed the same protocol as in Fig. 2A,B. Blocking MBON-γ1pedc alone during the CS− presentation impaired memory, whereas blocking both MBON-γ1pedc and DANs during the CS− presentation did not produce any memory impairments (Fig. 5B), indicating that blocking MBON-γ1pedc during the CS− presentation caused memory deficits via the output of DANs. On the other hand, blocking DANs during the CS+ presentation caused memory deficits with or without the blockade of MBON-γ1pedc (Fig. 5C). Blocking MBON-γ1pedc during the CS+ presentation did not cause any significant effects on memory compared to the control Gal4 strain, nor did it rescue memory deficits caused by the blockade of DANs. Assuming that DAN activation is necessary at the CS+ presentation and MBON-γ1pedc inhibits the effect of DANs, we investigated whether the activation of MBON-γ1pedc at the CS+ presentation impaired the STM. To activate MBON-γ1pedc artificially, dTrpA1, a temperature-sensitive cation channel [33], was expressed by using the MB112C driver. Neurons expressing dTrpA1 are transiently activated at the restrictive temperature (33°C), and not at the permissive temperature (22°C).
The flies were transferred to the restrictive temperature, and immediately the CS+ odor and ESs were presented for 1 min. The flies were then re-transferred to the permissive temperature, exposed to the CS− odor and tested. This manipulation of MBON-γ1pedc did not impair the aversive STM significantly (Fig. 5D). The artificial activation of MBON-γ1pedc induced by dTrpA1 might be too weak to sufficiently suppress the effect of DANs induced by ESs. Thus, MBON-γ1pedc might suppress only a weak effect of DANs.
DANs are effectively downstream of MBON-γ1pedc in the acquisition of the memory

Taken together, in classical conditioning, the output of DANs is ineffective in the CS− presentation and is required during the CS+ presentation, whereas the MBON-γ1pedc output is required during the CS− presentation, but not during the CS+ presentation. In addition, aversive memory induced by the output of DANs in classical conditioning was not affected by blocking MBON-γ1pedc during the CS+ presentation (Fig. 5C), whereas BGAM induced by blocking MBON-γ1pedc was affected by blocking DANs (Fig. 5A). Thus, DANs are effectively downstream of MBON-γ1pedc in the aversive memory acquisition stage. In addition, MBON-γ1pedc and DANs negatively modify each other's functions, since DANs attenuate input from KCs to MBON-γ1pedc [30], and this study suggests that MBON-γ1pedc inhibits the functions of DANs. For further dissection of the involvement of DANs in BGAM, we used a panel of split-Gal4 drivers [9] and manipulated subsets of TH-Gal4-positive neurons. We first used drivers to label a large population of TH-Gal4-positive neurons in combination with MB112C to block the subsets of DANs and MBON-γ1pedc (Fig. 6B). Compared to the MBON-γ1pedc-blocked flies, flies without synaptic output from MBON-γ1pedc and TH- or MB504B-positive DANs showed significantly lower BGAM. Blockade of DANs labeled using MB060B did not cause a significant decrease in BGAM. These results indicate that DANs labeled by TH or MB504B, but not by MB060B, are important for BGAM formation. Importantly, MB060B and MB504B label similar subsets of DANs, but only MB504B labels PPL1-γ1pedc DANs. We next used the MB438B split-Gal4 driver to manipulate PPL1-γ1pedc DANs and tested if the BGAM was impaired by blocking PPL1-γ1pedc DANs and MBON-γ1pedc, and found that inactivation of PPL1-γ1pedc DANs did not impair the BGAM (Fig. 6C).
Taking into account that MB504B-positive neurons are sufficient to suppress the BGAM, a combination of the PPL1-γ1pedc, -γ2α′1, -α′2α2 and -α3 DANs, or all of them, is required for BGAM. Since the combination of DANs labeled by MB060B, which does not label PPL1-γ1pedc DANs, or MB438B, which does not label PPL1-γ2α′1 DANs, is not sufficient to suppress the BGAM, the PPL1-γ1pedc DANs and PPL1-γ2α′1 DANs are necessary for the BGAM. No drivers labeling the combination of PPL1-γ1pedc, -γ2α′1 and -α3 DANs or the combination of PPL1-γ1pedc, -γ2α′1 and -α′2α2 DANs are available, and thus the necessities for the PPL1-α′2α2 and -α3 DANs are unclear.
Taken together, the BGAM is acquired through a combination of PPL1-DANs labeled by MB504B, which is consistent with the notion that some DANs function coordinately [19,27,28,40,41]. Their anatomical connectivity also suggests the possibility that MBON-γ1pedc modifies the effects of some DANs projecting to the α/β lobes [9,42].
Discussion
We have shown that the synaptic output from MBON-γ1pedc is required for suppressing aversive memory acquisition without electric shocks and that, in the classical olfactory conditioning procedure with electric shocks as the US, MBON-γ1pedc must be active during the CS− presentation, whereas DANs must be active during the CS+ presentation. Given that the memory is formed regardless of the activity of MBON-γ1pedc during the CS+ presentation and that DANs are required for BGAM, DANs are functionally downstream of MBON-γ1pedc in this context. Among the population of DANs, BGAM required PPL1-DANs, which are thought to convey punitive information and cause aversive memory in concert [40,41]. In aversive olfactory memory, electric shocks as the US can be replaced by artificial activation of PPL1-DANs [18,19,43], indicating that DAN activation alone is sufficient for aversive associations with odors. The population of DANs that can replace the US overlaps with the population of DANs required for BGAM. Collectively, the blockade of MBON-γ1pedc allows DANs to replace the aversive US to make the aversive associations.
Assuming that BGAM is acquired by MBON-γ1pedc and DANs, there are two questions about the BGAM formation. One is about the pathway by which MBON-γ1pedc modifies DAN effects, and the other is about the trigger for DAN activation. The pathway by which MBON-γ1pedc modifies the DANs is unknown, although there is anatomical connectivity. According to the previous study on the anatomies of MBONs and DANs, the dendrites of a few DANs are slightly co-localized with the axons of MBON-γ1pedc [9]. This indicates that some DANs may be downstream of MBON-γ1pedc at the level of a neural circuit. However, we could not detect the functional connectivity of MBON-γ1pedc and DANs, since the DAN activity was stochastic and fluctuated at the restrictive temperature used to manipulate MBON-γ1pedc in the functional calcium imaging (data not shown). Thus, other methodologies, such as optogenetics, membrane potential indicators or synaptic output indicators, might be useful to test this possibility. Since MBON-γ1pedc axons and DAN dendrites are only slightly co-localized, this possibility is less likely than the second one. The second possibility is that MBON-γ1pedc affects DAN effects indirectly. MBON-γ1pedc axons project to the crepine (a region surrounding the horizontal and medial lobes) and the core of the α and β lobes [9], and the DAN axons also project to the α and β lobes [9,42]. Thus, DANs and MBON-γ1pedc converge on the lobes, and they may input to KCs or other MBONs coordinately, to modulate their plasticity. Since MBON-γ1pedc is GABAergic, MBON-γ1pedc may inhibit the KC activity, and blocking MBON-γ1pedc may disinhibit the KC activity, leading to hyperactivity of KCs and easy association with weak DAN activity.
It is also unclear why and how the DANs are activated when MBON-γ1pedc is blocked and the odors are presented. One possibility is that DAN activity fluctuates, reflecting inner physiological states [27,28], and that the active state of DANs can stochastically cause aversive memory to a given odor. Another possibility is that the exposure to a given odor activates DANs. We examined the activity of DANs via functional calcium imaging under a two-photon microscope, but we only observed stochastic activity of DANs and failed to detect a significant correlation with the exposure to odors (data not shown).
Without the appropriate activity of MBON-γ1pedc, the probability of aversive associations might be increased even if the environment contains few aversive stimuli. Although aversive associations are important for animals' survival, an appropriate threshold for memory acquisition is necessary to conserve the energy required to acquire an aberrant memory and to highlight the importance of essential memories. MBON-γ1pedc might have such a gating function by antagonizing the activity of DANs.
BGAM is acquired by odor presentation and the blockade of MBON-γ1pedc. MBON-γ1pedc responds to odors robustly, but its response is decreased after associating the odors with ESs or activation of DANs [26,30]. Thus, BGAM acquisition may mimic the situation in which flies sense CS+ odors after associating the odors with ESs. After the classical conditioning, CS+ odor presentation may cause some BGAM in flies.
BGAM was observed as behavioral plasticity, and can be categorized as associative or nonassociative memory, depending on the viewpoint. Since BGAM is acquired solely by an odor presentation, BGAM may be categorized as a nonassociative memory or a particular sensitization. In the T-maze, naïve flies avoid odors (MCH or OCT) as compared to the air (this is called odor avoidance), indicating that odors are aversive stimuli for flies to some extent. In wild-type flies, odorant information may be processed as aversive information, but not stored as aversive memory, because the memory acquisition processes are inhibited. However, the blockade of MBON-γ1pedc may disturb these inhibiting processes, so that the odorant information is stored as an aversive memory. A previous study showed that odor avoidance was enhanced by blocking MBON-γ1pedc [26], and this study indicates that the enhanced odor avoidance lasts as memory after blocking MBON-γ1pedc. The enhancement of responses to pre-exposed stimuli is called sensitization. In Drosophila, behavioral sensitization to odors or neural sensitization to odors around the KCs has not been observed, although odor sensitization in sensory neurons was reported [44]. In contrast, in Caenorhabditis elegans, it was previously reported that behavioral sensitization to odors was regulated by dopamine release to an interneuron [45]. This sensitization mechanism in C. elegans might be similar to the BGAM mechanism, since both behavioral protocols are nonassociative and dopamine-related. BGAM might be a nonassociative memory and a lasting sensitization, and in wild-type flies, MBON-γ1pedc might suppress the sensitization.
However, if BGAM is acquired by associating an odor with a temperature stimulus or another aversive stimulus surrounding the flies, then BGAM may be categorized as associative memory. In order to block synaptic output by using Shi^ts, the flies are kept at a restrictive temperature (33°C), which could be an aversive stimulus [37]. Although the temperature we used might be slightly aversive for flies, the temperature shift to 33°C for 1 min was lower and shorter than the 34°C shift for 2 min used in a previous study [37], and our protocol was apparently insufficient for the control strains to acquire strong aversive memory (Fig. 3B). If the blockade of MBON-γ1pedc lowers the threshold for the temperature as the US, then BGAM could result from the association between the odors and the high temperature. In rats, the aversive US pathway is reportedly inhibited by feedback circuits to calibrate the strength of learning after aversive memory formation [46]. MBON-γ1pedc and DANs may comprise a similar circuit in Drosophila. To investigate whether aversive information is associated with odor in BGAM, novel methodologies to block the synaptic output in a freely moving fly in a precise time window without aversive stimuli, such as temperature shifting, are needed.
Taken together, the blockade of MBON-γ1pedc during odor presentation without the US influences DAN effects directly or indirectly and forms BGAM. We found a novel function of MBON-γ1pedc in BGAM formation at the level of behavior. MBON-γ1pedc functions to suppress memory formation, indicating that memory acquisition can be regulated negatively. Only a few studies have reported the negative regulation (suppression) of memory; a recent study reported that a neural circuit suppresses the US pathway in rats by feedback circuits, to calibrate the strength of learning after aversive memory formation [46]. This is the first evidence that MBON-γ1pedc and DANs may comprise a similar circuit in Drosophila.
LncRNA HOTAIR functions as a competing endogenous RNA to regulate HER2 expression by sponging miR-331-3p in gastric cancer
Background Accumulating evidence indicates that the long non-coding RNA HOTAIR plays a critical role in cancer progression and metastasis. However, the overall biological role and clinical significance of HOTAIR in gastric carcinogenesis remain largely unknown. Methods HOTAIR expression was measured in 78 paired cancerous and noncancerous tissue samples by real-time PCR. The effects of HOTAIR on gastric cancer cells were studied by overexpression and RNA interference approaches in vitro and in vivo. Insights into the mechanism of competing endogenous RNAs (ceRNAs) were gained from bioinformatic analysis, luciferase assays and RNA-binding protein immunoprecipitation (RIP). The positive HOTAIR/HER2 interaction was identified and verified by immunohistochemistry assay and bivariate correlation analysis. Results HOTAIR upregulation was associated with larger tumor size, advanced pathological stage and extensive metastasis, and also correlated with shorter overall survival of gastric cancer patients. Furthermore, HOTAIR overexpression promoted the proliferation, migration and invasion of gastric carcinoma cells, while HOTAIR depletion inhibited both cell invasion and cell viability, and induced growth arrest in vitro and in vivo. In particular, HOTAIR may act as a ceRNA, effectively becoming a sink for miR-331-3p, thereby modulating the derepression of HER2 and imposing an additional level of post-transcriptional regulation. Conclusions HOTAIR overexpression represents a biomarker of poor prognosis in gastric cancer, and may confer a malignant phenotype to tumor cells. Finally, the positive HOTAIR/HER2 correlation was significantly associated with advanced gastric cancers. The ceRNA regulatory network involving HOTAIR and the positive interaction between HOTAIR and HER2 may contribute to a better understanding of gastric cancer pathogenesis and facilitate the development of lncRNA-directed diagnostics and therapeutics against this disease.
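The bivariate correlation analysis mentioned above for the HOTAIR/HER2 interaction amounts to computing a correlation coefficient between the two expression measurements across the tissue samples. The sketch below is purely illustrative: it implements a standard Pearson correlation from scratch on synthetic values, and is not the authors' analysis pipeline (they presumably used a statistics package).

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Synthetic relative-expression values, for illustration only:
hotair = [1.0, 2.1, 3.2, 4.0, 5.5]
her2 = [1.2, 2.0, 3.5, 3.9, 5.6]
r = pearson_r(hotair, her2)
print(round(r, 2))  # a value close to 1 indicates a strong positive correlation
```

In practice the same result can be obtained with `scipy.stats.pearsonr`, which also reports a p-value for the null hypothesis of no correlation.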
Background
Gastric cancer is the second leading cause of cancer-induced death, and is the most common gastrointestinal malignancy in East Asia, Eastern Europe, and parts of Central and South America. In most patients, gastric cancer is diagnosed at an advanced stage and is accompanied by malignant proliferation, extensive invasion and lymphatic metastasis. Successful therapeutic strategies are limited and the mortality is high [1,2]. Long non-coding RNAs (lncRNAs) have recently gained significant attention in delineating the complex mechanisms underlying malignant processes such as carcinogenesis, metastasis and drug resistance. Therefore, if we want to fully understand gastric carcinogenesis, we need to consider this family of regulatory transcripts that add a new layer of complexity to tumor biology.
Although only a small number of functional lncRNAs have been well characterized to date, they have been shown to regulate gene expression at various levels, including chromatin modification, transcription and posttranscriptional processing [3,4]. Recently, a new regulatory mechanism has been identified in which crosstalk between lncRNAs and mRNAs occurs by competing for shared microRNA (miRNA) response elements. In this case, lncRNAs may function as competing endogenous RNAs (ceRNAs) to sponge miRNAs, thereby modulating the derepression of miRNA targets and imposing an additional level of post-transcriptional regulation [5]. A muscle-specific lncRNA, linc-MD1, has been reported to be a ceRNA that protects MyoD messenger RNA (mRNA) from miRNA-mediated degradation [6]. Pluripotency-associated lnc-RoR may function as a key ceRNA to link the network of miRNAs and core transcription factors, e.g., Oct4, Sox2, and Nanog, in human embryonic stem cells. Notably, lncRNA HULC is highly upregulated in liver cancer and plays an important role in tumorigenesis [7]. In particular, HULC may act as an endogenous 'sponge' that down-regulates the activities of a series of miRNAs, including miR-372 [8]. We therefore propose that some lncRNAs may also have roles as ceRNAs, linking miRNAs and the post-transcriptional network in gastric pathogenesis.
HOTAIR (Hox transcript antisense intergenic RNA) is a ~2.2-kb long non-coding RNA transcribed from the HOXC locus, which can repress transcription in trans of HOXD in foreskin fibroblasts [9]. As a novel molecule in the field of tumor biology, HOTAIR initially became well known for its involvement in primary breast tumors and breast cancer metastases, wherein elevation of HOTAIR promoted invasiveness and metastasis [10]. Furthermore, HOTAIR expression positively correlates with malignant processes and poor outcome in colorectal cancer, hepatocellular carcinoma, pancreatic cancer and gastrointestinal stromal tumors [11][12][13][14]. Recent studies reported that HOTAIR was upregulated in gastric cancer [15,16]. Nevertheless, the overall biological role and underlying molecular mechanism of HOTAIR in gastric carcinogenesis remain largely undefined.
In this study, we report that HOTAIR upregulation is a characteristic molecular change in gastric cancer and investigate the biological roles of HOTAIR on the phenotypes of gastric cancer cells in vitro and in vivo. Moreover, mechanistic analysis reveals that HOTAIR may function as a ceRNA to regulate the expression of human epithelial growth factor receptor 2 (HER2) through competition for miR-331-3p, thus playing an oncogenic role in gastric pathogenesis. The present work provides the first evidence for a positive HOTAIR/HER2 correlation and the crosstalk between miR-331-3p, HOTAIR and HER2, shedding new light on the treatment of gastric cancer.
HOTAIR expression is upregulated in human gastric cancer tissues
The level of HOTAIR expression was determined in 78 paired gastric cancer samples and adjacent, histologically normal tissues by qRT-PCR, and normalized to GAPDH. HOTAIR expression was significantly upregulated in cancerous tissues (mean ratio of 14.35-fold, P < 0.01) compared with normal counterparts ( Figure 1A). Examination of the correlation between HOTAIR expression and clinical pathological features showed that HOTAIR upregulation was correlated with larger tumor size, advanced pathological stage, distant metastasis ( Figure 1B and C), lymph node metastasis and tumor cell differentiation (Table 1). However, HOTAIR expression was not associated with tumor position or patient gender (Table 1). With regard to overall survival, patients with high HOTAIR expression had a significantly poorer prognosis than those with low HOTAIR expression (P < 0.001, log-rank test; Figure 1D). These results imply that HOTAIR overexpression may be useful in the development of novel prognostic or progression markers for gastric cancer.
Manipulation of HOTAIR levels in gastric cancer cells
We next performed qRT-PCR analysis to examine the expression levels of HOTAIR in various cancer cell lines, including gastric, non-small cell lung cancer (NSCLC) and breast cancer-derived cells. HOTAIR was detectably expressed across these cell lines ( Figure 2A). Our data suggest that HOTAIR may be frequently upregulated in many tumor cells.
To manipulate HOTAIR levels in gastric cancer cells, a pCDNA/HOTAIR vector was transfected into SGC-7901 cells and si-HOTAIR was transfected into BGC-823 cells, respectively. qRT-PCR analysis of HOTAIR levels was performed at 48 h post-transfection and revealed that HOTAIR expression was increased 98-fold in SGC-7901 cells compared with control cells. In BGC-823 cells, HOTAIR expression was knocked down by approximately 75% with si-HOTAIR2, the most effective of the siRNAs, which was used in all subsequent experiments ( Figure 2B).
Effect of HOTAIR on cell proliferation and apoptosis in vitro
The significant increase of HOTAIR expression in gastric cancer samples prompted us to explore the possible biological significance of HOTAIR in tumorigenesis. MTT assays revealed that cell growth was significantly impaired in BGC-823 cells transfected with si-HOTAIR, while proliferation of SGC-7901 cells was increased in pCDNA/HOTAIR transfected cells compared with respective controls ( Figure 2C). Similarly, colony-formation assays revealed that clonogenic survival was decreased following inhibition of HOTAIR in BGC-823 cells ( Figure 2D).

Figure 1 Relative HOTAIR expression in gastric cancer tissues and its clinical significance. (A) Relative expression of HOTAIR in gastric cancer tissues (n = 78) in comparison with corresponding non-tumor normal tissues (n = 78). HOTAIR expression was examined by qRT-PCR and normalized to GAPDH expression. Data were presented as fold-change in tumor tissues relative to normal tissues. (B) Examination of the correlation between HOTAIR expression and clinical pathological features showed that HOTAIR upregulation correlated with larger tumor size and advanced pathological stage. (C) HOTAIR expression was significantly higher in patients with distant metastasis than in those without distant metastasis. (D) Kaplan-Meier overall survival curves according to HOTAIR expression level. The overall survival of the High-HOTAIR group (n = 39; HOTAIR expression ratio ≥ median ratio) was significantly shorter than that of the Low-HOTAIR group (n = 39; HOTAIR expression ratio ≤ median ratio; P < 0.001, log-rank test). *P < 0.05, **P < 0.01.
To determine whether apoptosis was a contributing factor to cell growth inhibition, we performed Hoechst staining and flow-cytometric analysis of si-HOTAIR-treated BGC-823 cells. The data showed that the number of cells with condensed and fragmented nuclei, indicative of early apoptosis, was significantly higher in si-HOTAIR-treated BGC-823 cells than in si-NC-treated cells ( Figure 3A and B). In addition, we found that inhibition of HOTAIR enhanced caspase-3-dependent apoptosis, as demonstrated by western blot analysis of activated caspase-3 after si-HOTAIR transfection (Additional file 1: Figure S1). Taken together, these results indicate that knockdown of HOTAIR suppresses gastric cancer cell proliferation and induces apoptosis in vitro.
HOTAIR promotes migration and invasion of gastric cancer cells in vitro
Cell invasion is a significant aspect of cancer progression that involves the migration of tumor cells into contiguous tissues and the dissolution of extracellular matrix proteins. Here we evaluated cancer cell invasion through transwell assays. As shown in Figure 3C, the transfection of HOTAIR siRNA impeded the migratory ability of BGC-823 cells by roughly 66%. A corresponding effect on invasiveness was also observed in a parallel invasion assay. Conversely, transfection of SGC-7901 cells with the pCDNA/HOTAIR vector promoted cell migration and invasiveness ~1.9-fold ( Figure 3D). These data indicate that HOTAIR has oncogenic properties that can promote a migratory and invasive phenotype in gastric cancer cells.
HOTAIR promotes tumorigenesis of gastric cancer cells in vivo
To explore whether the level of HOTAIR expression affects tumorigenesis, BGC-823 cells transduced with sh-HOTAIR or the empty pENTR vector (EV) were used in a nude mouse xenograft model. By 16 days after knockdown of HOTAIR, there was a dramatic decrease in tumor volume and weight in the sh-HOTAIR group compared with controls ( Figure 4A, B and C). Next, immunostaining analysis of the proliferation marker PCNA was performed on resected tumor tissues. In comparison with tumors formed from control cells, sh-HOTAIR-derived tumors showed significantly reduced PCNA positivity ( Figure 4D). These results suggest that the level of HOTAIR expression is significantly associated with the in vivo proliferation capacity of gastric cancer cells.
HOTAIR is a target of miR-331-3p and miR-124
Bioinformatic analysis of miRNA recognition sequences on HOTAIR revealed the presence of binding sites for 11 tumor-suppressive miRNAs. The HOTAIR cDNA was cloned downstream of the luciferase gene and named RLuc-HOTAIR ( Figure 5A), then transfected together with various miRNA-coding plasmids; rno-miR-344 acted as a negative control. The results showed that luciferase activity was reduced by 48% and 31% relative to the empty vector control when miR-331-3p and miR-124 were expressed, respectively. These data demonstrate that both miR-331-3p and miR-124 can directly bind to HOTAIR through their respective miRNA recognition sites ( Figure 5B, left panel).
Herein, we chose miR-331-3p as a model miRNA for further studies. To further confirm that the reduction in luciferase activity from the RLuc-HOTAIR-WT vector was due to direct interaction between the miRNA and its putative binding site, we mutated the miR-331-3p binding site by site-directed mutagenesis, resulting in RLuc-HOTAIR-Mut. As expected, suppression of luciferase activity was completely abolished in this mutant construct compared with wild-type vector ( Figure 5B right panel).
HOTAIR and miR-331-3p both bind with Ago2 in gastric cancer cells

miRNAs are known to be present in the cytoplasm in the form of miRNA ribonucleoprotein complexes (miRNPs) that also contain Ago2, the core component of the RNA-induced silencing complex (RISC) [17,18]. To test whether HOTAIR associates with miRNPs, RNA binding protein immunoprecipitation (RIP) experiments were performed on BGC-823 cell extracts using antibodies against Ago2. RNA levels in the immunoprecipitates were determined by qRT-PCR. HOTAIR was preferentially enriched (354-fold) in Ago2-containing miRNPs relative to control immunoglobulin G (IgG) immunoprecipitates. Similarly, miR-331-3p was detected at a level 840-fold greater than that in control anti-IgG precipitates. Successful immunoprecipitation of Ago2-associated RNA was verified by qRT-PCR, using RIP primers against human FOS included in the RIPAb+ Ago2 kit ( Figure 5D, top panel). Moreover, anti-SNRNP70 was used as a positive control for the RIP procedure, and U1 snRNA was also detected at a level 206-fold greater than that of anti-IgG ( Figure 5D, bottom panel). Thus, HOTAIR is present in Ago2-containing miRNPs, likely through association with miR-331-3p, consistent with our bioinformatic analysis and luciferase assays.
HOTAIR controls the miR-331-3p target HER2

Among the many targets of miR-331-3p, we concentrated on HER2 since it encodes a transmembrane protein with a relevant function in carcinogenesis and in resistance to trastuzumab-based therapy [19]. The 3'-UTR of HER2 was fused to the luciferase coding region (RLuc-HER2 3'-UTR) and co-transfected with a plasmid encoding miR-331-3p or an empty plasmid vector; rno-miRNA-344 acted as a negative control. The luciferase assay showed that miR-331-3p significantly inhibited luciferase activity (~41% inhibition) of the RLuc-HER2 3'-UTR reporter, confirming that HER2 is a target of miR-331-3p. The RLuc-HER2 3'-UTR construct was subsequently transfected together with plasmids encoding miR-331-3p and HOTAIR (pCDNA/HOTAIR). Luciferase assays indicated that, in the presence of HOTAIR, the miR-331-3p-mediated repression of RLuc-HER2 3'-UTR was relieved compared with the control group ( Figure 5C). This indicates that HOTAIR acts as an endogenous 'sponge' by binding miR-331-3p, thus abolishing the miRNA-induced repressive activity on the HER2 3'-UTR. Furthermore, the effect of HOTAIR expression on endogenous HER2 protein in combination with modulation of miRNA or lncRNA levels was monitored by the approaches shown in Figure 5E: western blot analysis showed that forced expression of miR-331-3p or knockdown of HOTAIR in BGC-823 cells triggered a significant silencing effect on endogenous HER2 protein expression. Furthermore, HER2 protein expression was markedly upregulated after transfection with HOTAIR in SGC-7901 cells, which have a relatively low endogenous HOTAIR expression level.
Together these data indicate that by binding miR-331-3p, HOTAIR acts as a ceRNA for the target HER2 mRNA, thereby modulating the derepression of HER2 and imposing an additional level of post-transcriptional regulation.
MiR-331-3p and miR-124 suppress gastric cancer cell proliferation

To serve as an endogenous sink for target miRNAs, the abundance of HOTAIR should be comparable to or higher than that of miR-331-3p/miR-124. In our study, qRT-PCR analysis showed that miR-331-3p/miR-124 expression was inversely correlated with HOTAIR expression in 20 pairs of advanced gastric cancers ( Figure 6A). To validate whether miR-331-3p and miR-124 could also inhibit gastric cancer cell proliferation, we forced their expression in BGC-823 cells using miRNA-encoding plasmids. The expression levels of miR-331-3p and miR-124 in transfected BGC-823 cells were significantly increased, by 36-fold and 29-fold, respectively, compared with control cells ( Figure 6B). Next, MTT and colony-formation assays were performed to determine cell viability. The MTT assay and growth curves revealed that cells transfected with miR-331-3p or miR-124 showed significant growth retardation when compared with cells transfected with empty vector ( Figure 6C). These data indicate that overexpression of miR-331-3p or miR-124 can arrest gastric cancer cell proliferation, which is consistent with the results of HOTAIR knockdown in BGC-823 cells.
HER2 is coexpressed with HOTAIR in gastric cancer tissues
The HER2 overexpression rate was reported to be 7-34% in gastric cancer, and was associated with more aggressive disease and poorer survival in gastric cancer [19]. We detected expression of HER2 in 50 advanced gastric cancer tissues (stage III/IV) selected from the previous 78 gastric cancer tissues by immunohistochemistry (IHC) and qRT-PCR analysis. The results of IHC staining showed HER2 protein positivity in 56% of the selected 50 gastric cancer tissues. Eighty-six percent of these HER2-positive samples were from advanced stage III/IV tumors, and 71% displayed high HOTAIR expression ( Figure 6D and Additional file 2: Table S2). Bivariate correlation analysis showed that expression of HER2 was significantly correlated with HOTAIR transcript level in gastric cancer tissues compared with normal counterparts ( Figure 6D). These data indicate that the expression of HER2 is positively associated with upregulated HOTAIR in gastric cancer tissue samples, suggesting that characterization of the HER2/HOTAIR interaction might be biologically significant in human gastric tumorigenesis.

Figure 5 legend (continued): Right: the luciferase reporter plasmid containing wild-type or mutant HOTAIR was co-transfected into HEK-293T cells with miR-331-3p in parallel with an empty plasmid vector. Luciferase activity was determined using the dual luciferase assay and shown as the relative luciferase activity normalized to Renilla activity. Histogram indicates the values of luciferase measured 48 h after transfection. (C) The 3'-UTR of HER2 was fused to the luciferase coding region (RLuc-HER2 3'-UTR) and transfected in HEK293T cells with miR-331-3p to confirm that HER2 is a target of miR-331-3p. RLuc-HER2 3'-UTR and miR-331-3p constructs were co-transfected into HEK293T cells with plasmids expressing HOTAIR (pCDNA/HOTAIR) or with a control vector to verify the ceRNA activity of HOTAIR. rno-miRNA-344 was used as a negative control. Histogram indicates the values of luciferase measured 48 h after transfection. (D) RIP with mouse monoclonal anti-Ago2, preimmune IgG or 10% input from BGC-823 cell extracts. RNA levels in immunoprecipitates were determined by qRT-PCR. Top: levels of HOTAIR, miR-331-3p and FOS RNA are presented as fold enrichment in Ago2 relative to IgG immunoprecipitates. Bottom: relative RNA levels of U1 snRNA in SNRNP70 relative to IgG immunoprecipitates. Numbers are mean ± s.d. (n = 3). (E) Western blot analysis of HER2 protein level following treatment of BGC-823 cells with si-HOTAIR or pCDNA/HOTAIR, and SGC-7901 cells with pCDNA/HOTAIR. GAPDH was used as control. *P < 0.05, **P < 0.01 and N.S. not significant.
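As a side note on the bivariate correlation analysis mentioned above: the paper does not name the exact statistic used, but a Pearson correlation coefficient is a common choice for paired expression levels. The following pure-Python sketch is illustrative only; the function and the example expression values are hypothetical and not taken from the study data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired HOTAIR and HER2 expression levels (made up):
hotair = [1.2, 3.4, 5.1, 7.8, 9.0]
her2 = [0.9, 2.8, 4.9, 8.1, 9.5]
print(round(pearson_r(hotair, her2), 3))
```

A value of r close to +1 would correspond to the positive HOTAIR/HER2 association reported here; in practice the significance (P value) would also be assessed, which this sketch omits.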
Notably, because the tissues examined were selected advanced stage III/IV tumors, the HER2 overexpression rate detected in the present study is higher than the previously reported median range. This increased rate of HER2 overexpression is strongly linked to poor outcomes for patients with metastatic and high-grade localized gastric cancers, thereby highlighting the importance of HER2 in gastric cancer development and metastasis.
Discussion
LncRNAs, which are more than 200 nucleotides in length with limited protein-coding capacity, are often expressed in a disease-, tissue-or developmental stage-specific manner, indicating specific functions for lncRNAs in development and diseases and making these molecules attractive therapeutic targets [20][21][22]. A number of recent papers have revealed that dysregulation of these lncRNAs may also affect the regulation of the eukaryotic genome and provide a cellular growth advantage, resulting in progressive and uncontrolled tumor growth [23][24][25]. Therefore, lncRNAs may provide a missing piece of the otherwise well-known oncogenic and tumor suppressor network puzzle.
In this study, we tested the expression of HOTAIR in gastric carcinoma samples and their surrounding non-tumorous tissues. We also identified the function of HOTAIR in gastric carcinoma cells by applying gain- and loss-of-function approaches. The results demonstrated that HOTAIR was upregulated in gastric carcinoma tissues in comparison with adjacent normal gastric tissues, and that HOTAIR upregulation correlated with larger tumor size, advanced pathological stage and extensive metastasis. Moreover, the overall survival time of patients with lower HOTAIR expression levels was significantly longer than that of patients with moderate or strong HOTAIR expression levels. Furthermore, HOTAIR overexpression promoted the proliferation, migration and invasion of gastric carcinoma cells, while HOTAIR depletion inhibited cell invasion and cell viability, and induced growth arrest both in vitro and in vivo. Additionally, HOTAIR suppression led to the promotion of gastric cell apoptosis. These findings suggest that HOTAIR plays a direct role in the modulation of multiple oncogenic properties and gastric cancer progression, stimulating new research directions and therapeutic options considering HOTAIR as a novel prognostic marker and therapeutic target in gastric cancer.
The importance of lncRNAs in human disease may be associated with their ability to impact cellular functions through various mechanisms. In this study, as far as the mechanism of HOTAIR is concerned, it is worth mentioning that subcellular localization analysis of HOTAIR by RNA fluorescence in situ hybridization demonstrates the localization of HOTAIR to both the nucleus and the cytoplasm [24]. It is evident that nuclear HOTAIR can target polycomb repressive complex 2, altering H3K27 methylation and gene expression patterns across the genome [10,11]. Recent work reported a scaffold function for HOTAIR in the cytoplasm as an inducer of ubiquitin-mediated proteolysis [26]. Nevertheless, the tumorigenic properties and mechanistic heterogeneity of HOTAIR, and particularly those of the cytoplasmic form, are far from being fully elucidated.
Inspired by the 'competitive endogenous RNAs' regulatory network and emerging evidence that suggests that lncRNAs may participate in this regulatory circuitry, we hypothesized that HOTAIR may also serve as a ceRNA and so we searched for potential interactions with miRNAs. In support of this notion, we employed bioinformatic analysis and luciferase assays to validate the direct binding ability of the predicted miRNA response elements on the full-length HOTAIR transcript. As expected, we discovered that miR-331-3p and miR-124 could form complementary base pairing with HOTAIR and induce translational repression of a RLuc-HOTAIR reporter gene. In addition, HOTAIR:miR-331-3p co-immunoprecipitation with anti-Ago2 demonstrated a physical interaction in gastric cancer cells, providing further support for HOTAIR's miRNA-sequestering activity. To serve as an endogenous 'sponge', the abundance of HOTAIR should be comparable to or higher than that of miR-331-3p/miR-124. In our study, qRT-PCR analysis showed that miR-331-3p/miR-124 expression was inversely correlated with HOTAIR expression in advanced gastric cancer. Moreover, ectopic overexpression of miR-331-3p or miR-124 could arrest gastric cancer proliferation, which was consistent with the results of knockdown of HOTAIR expression in gastric cancer cells. Taken together, these data are consistent with our hypothesis and indicate that HOTAIR may interact with miRNAs to link miRNAs and the post-transcriptional network in gastric pathogenesis.
To investigate the miRNA-related functions of HOTAIR in gastric pathogenesis, we chose miR-331-3p as a model miRNA for further studies, with a particular focus on the target gene HER2. In carcinomas, HER2 acts as an oncogene, encoding a 185-kDa transmembrane protein that triggers the activation of cell signaling networks, impacting various malignant cell functions such as proliferation, motility, angiogenesis and apoptosis [27][28][29]. HER2 amplification and/or overexpression have been detected in approximately 20% to 30% of patients with breast and gastric cancer and correlate with poorer clinical outcomes [19,30,31]. The importance of HER2 has been well documented in breast cancer, where HER2 testing is a standard approach for identifying patients who may benefit from HER2-targeted agents such as lapatinib and trastuzumab in metastatic and adjuvant settings [32,33]. In gastric cancer, HER2 overexpression is associated with more aggressive disease and poor survival. Preclinical studies have indicated that trastuzumab can impede the growth of HER2-overexpressing human gastric cancer cells and inhibit tumorigenesis in xenograft models [34][35][36]. Accumulating studies indicate that HER2 overexpression may not be driven by gene amplification alone, but is also likely to be influenced by transcriptional activation and/or post-transcriptional mechanisms in cancers [28,37]. In previous reports, HER2 mRNA and protein overexpression have been directly affected by miRNA-mediated post-transcriptional mechanisms in carcinomas [38,39]. Our study also confirms that HER2 is a direct target of miR-331-3p. Considering the interaction of HOTAIR/miR-331-3p, we therefore hypothesize that HOTAIR may also regulate HER2 expression in gastric cancer, which signifies the role of HOTAIR in the tumorigenesis-regulating network.
In this study, luciferase and RIP assays confirmed the existence of specific crosstalk between the lncRNA HOTAIR and HER2 mRNA through competition for miR-331-3p binding. Consistent with HOTAIR sequestration of miR-331-3p, we found that its depletion reduced the expression level of HER2, while its overexpression restored elevated HER2 protein synthesis. These data are consistent with the hypothesis that ceRNAs are transmodulators of gene expression through competing miRNA binding. Furthermore, IHC and qRT-PCR assays revealed that HER2 was mainly upregulated in advanced stage gastric cancer tissues or those with lymph node metastasis, and associated with high HOTAIR expression. Altogether, the positive correlation between HOTAIR and HER2 expression and the relevance to miRNA expression levels (miR-331-3p/ miR-124) supports our hypothesis that ceRNA can sequester miRNAs, thereby protecting their target RNAs from repression.
Lastly, the findings presented in this study have allowed us to conclude that HOTAIR overexpression represents an excellent biomarker of poor prognosis in gastric cancer, and may confer multiple properties required for tumor progression and metastatic phenotype. More importantly, our study indicates that the ceRNA activity of HOTAIR imparts a miRNA/lncRNA trans-regulatory function to protein-coding mRNAs and the ceRNA network may play an important role in gastric pathogenesis. Finally, our experimental data suggest that targeting the HOTAIR/HER2 interaction may represent a novel therapeutic application, thus contributing to better knowledge of the efficacy and tolerance of trastuzumab-based therapy in HER2-positive gastric cancer patients.
It is worth mentioning that the ceRNA activity of HOTAIR may sequester a handful of miRNAs at once, while one miRNA is also capable of controlling multiple genes. Therefore, the multiple properties of HOTAIR are likely due to simultaneous targeting of multiple targets in gastric cancer. We also hypothesize that there may be many other lncRNAs that function as ceRNAs to regulate expression of key genes in gastric cancer. Thus, the identification of these ceRNAs will undoubtedly enhance our knowledge of how lncRNAs function, allowing us to better understand the pathogenesis and development of gastric cancer and ultimately facilitate the development of lncRNA-directed diagnostics and therapeutics against this deadly disease.
Tissue collection
Fresh-frozen and paraffin-embedded gastric cancer tissues and corresponding adjacent non-tumorous gastric samples were obtained from Chinese patients at Jiangsu Province Hospital between 2006 and 2008. All cases were reviewed by pathologists and histologically confirmed as gastric cancer (stage II, III, IV; AJCC 7th Edition) based on histopathological evaluation. Clinical pathology information was available for all samples (Table 1). No local or systemic treatment was conducted in these patients before the operation. The study was approved by the Research Ethics Committee of Nanjing Medical University, China. Informed consent was obtained from all patients.

Cell culture

Cells were cultured in RPMI 1640 or DMEM (GIBCO-BRL) medium supplemented with 10% fetal bovine serum (FBS), 100 U/ml penicillin, and 100 mg/ml streptomycin (Invitrogen) in humidified air at 37°C with 5% CO2.
RNA extraction and qRT-PCR analyses
Total RNA was extracted from tissues or cultured cells using TRIZOL reagent (Invitrogen, Carlsbad, CA, USA). For qRT-PCR, RNA was reverse transcribed to cDNA using a Reverse Transcription Kit (Takara, Dalian, China). Real-time PCR analyses were performed with Power SYBR Green (Takara, Dalian, China). Results were normalized to the expression of GAPDH. For miR-331-3p and miR-124 expression detection, reverse transcription was performed following the Applied Biosystems TaqMan MicroRNA Assay protocol (Cat. # 4427975 and Cat. # 4427975). U6 snoRNA was validated as the normalizer. The primers are listed in Additional file 3: Table S4. qRT-PCR and data collection were performed on an ABI 7500 system.
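Although the exact calculation is not spelled out here, relative expression values normalized to a reference gene (GAPDH, or U6 for miRNAs) are commonly derived from Ct values with the 2^-ΔΔCt method. The following Python sketch is an illustrative assumption only; the function name and all Ct values are hypothetical:

```python
def ddct_fold_change(ct_target_tumor, ct_ref_tumor,
                     ct_target_normal, ct_ref_normal):
    """Fold change of a target RNA in tumor vs. normal tissue,
    normalized to a reference gene, via the 2^-ddCt method
    (an assumption; the paper does not state its exact formula)."""
    delta_ct_tumor = ct_target_tumor - ct_ref_tumor
    delta_ct_normal = ct_target_normal - ct_ref_normal
    delta_delta_ct = delta_ct_tumor - delta_ct_normal
    return 2 ** (-delta_delta_ct)

# Hypothetical Ct values: the target amplifies 2 cycles earlier in tumor
print(ddct_fold_change(24.0, 18.0, 26.0, 18.0))  # 4.0
```

A result of 4.0 would correspond to a 4-fold upregulation of the target in tumor relative to normal tissue.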
Plasmid constructs
HOTAIR cDNA was cloned into the mammalian expression vector pcDNA3.1 (Invitrogen). To express miRNAs, human microRNA precursors with about 80 bp of flanking sequence on both sides were amplified and cloned into the modified pLL3.7 vector (Invitrogen). To construct luciferase reporter vectors, the HER2 3'-UTR and a HOTAIR cDNA fragment containing the predicted potential microRNA binding sites were amplified by PCR, and then subcloned downstream of the luciferase gene in the pLUC luciferase vector (Ambion, Inc., Austin, TX, USA). Primers for subcloning and plasmid construction are listed in Additional file 4: Table S3. We also designed shRNA sequences targeting HOTAIR as shown in Additional file 3: Table S4. After annealing of the complementary shRNA oligonucleotides, we ligated the annealed oligonucleotides into the pENTR vector (sh-HOTAIR).
Transfection of gastric cancer cells
All plasmid vectors for transfection were extracted with a DNA Midiprep kit (Qiagen, Hilden, Germany). Three individual HOTAIR siRNAs (si-HOTAIR) and a scrambled negative control siRNA (si-NC) were purchased from Invitrogen (Invitrogen, CA, USA). Target sequences for the HOTAIR siRNAs are listed in Additional file 3: Table S4. si-HOTAIR, miR-331-3p or miR-124 was transfected into BGC-823 cells, and pCDNA/HOTAIR was transfected into SGC-7901 cells, using Lipofectamine 2000 (Invitrogen) according to the manufacturer's instructions. At 48 h after transfection, cells were harvested for qRT-PCR or western blot analyses.
Cell proliferation assays
A cell proliferation assay was performed with an MTT kit (Sigma, St. Louis, MO) according to the manufacturer's instructions. For colony-formation assays, cells were placed into 6-well plates and maintained in media containing 10% FBS for 2 weeks. Colonies were fixed with methanol and stained with 0.1% crystal violet (Sigma, St. Louis, MO). Visible colonies were manually counted.
Flow-cytometric analysis of apoptosis

BGC-823 cells transiently transfected with si-NC or si-HOTAIR were harvested 48 h after transfection by trypsinization. After double staining with FITC-Annexin V and propidium iodide (PI), the cells were analyzed with a flow cytometer (FACScan®; BD Biosciences) equipped with CellQuest software (BD Biosciences).
Hoechst staining assay
BGC-823 cells transiently transfected with si-NC or si-HOTAIR were cultured in six-well cell culture plates, and Hoechst 33342 (Sigma, St Louis, MO, USA) was added to the culture medium; changes in nuclear morphology were detected by fluorescence microscopy using a filter for Hoechst 33342 (365 nm). For quantification of Hoechst 33342 staining, the percentage of Hoechst-positive nuclei per optical field (at least 50 fields) was counted.
Cell migration and invasion assays
At 48 h after transfection, cells in serum-free media were placed into the upper chamber of an insert (8-μm pore size, Millipore) for migration assays, or an insert coated with Matrigel (Sigma-Aldrich, USA) for invasion assays. Media containing 10% FBS was added to the lower chamber. After several hours of incubation, the cells that had migrated or invaded through the membrane were fixed with methanol, stained with 0.1% crystal violet, imaged, and counted using an IX71 inverted microscope (Olympus, Tokyo, Japan).
Tumor formation assay in a nude mouse model

Five-week-old female athymic BALB/c mice were purchased from the Model Animal Research Center of Nanjing University. All animal procedures were performed in accordance with the protocols approved by the Institutional Animal Care and Use Committee at Nanjing Medical University. For xenograft models, 5 × 10^6 BGC-823 cells transfected with sh-HOTAIR or the pENTR empty vector (EV) were injected subcutaneously into the right flank of BALB/c nude mice (five mice per group). Tumor volumes were examined every 3 days once the implanted tumors started to grow. After 16 days, the mice were sacrificed and tumors were weighed. Tumor volumes were calculated using the equation V (mm^3) = A × B^2/2, where A is the largest diameter and B is the perpendicular diameter. The primary tumors were excised and tumor tissues were used to perform qRT-PCR analysis of HOTAIR levels and immunostaining analysis of proliferating cell nuclear antigen (PCNA) protein expression.
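For clarity, the stated volume formula V (mm³) = A × B²/2 can be written as a small helper; the function name and the example diameters below are hypothetical:

```python
def tumor_volume_mm3(largest_diameter_mm, perpendicular_diameter_mm):
    """Ellipsoid-approximation tumor volume: V = A * B^2 / 2,
    with A the largest diameter and B the perpendicular diameter (mm)."""
    return largest_diameter_mm * perpendicular_diameter_mm ** 2 / 2

# Hypothetical caliper measurements: A = 10 mm, B = 6 mm
print(tumor_volume_mm3(10.0, 6.0))  # 180.0
```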
Luciferase assay
Human HEK293T cells (2.0 × 10⁴) grown in a 96-well plate were co-transfected with 150 ng of either empty vector or miR-331-3p or miR-124, 50 ng of firefly luciferase reporter comprising the 3'UTR of HER2 or a wild-type or mutant HOTAIR fragment, and 2 ng of pRL-TK (Promega, Madison, WI, USA) using Lipofectamine 2000 (Invitrogen, USA). rno-miRNA-344 served as a negative control. Cells were harvested 48 h after transfection for luciferase assay using a luciferase assay kit (Promega) according to the manufacturer's protocol. Transfection was repeated in triplicate.
RNA Binding Protein Immunoprecipitation (RIP) assay
RNA immunoprecipitation was performed using the EZ-Magna RIP kit (Millipore, Billerica, MA, USA) following the manufacturer's protocol. BGC-823 cells at 80-90% confluency were scraped off and lysed in complete RIP lysis buffer, after which 100 μl of whole-cell extract was incubated with RIP buffer containing magnetic beads conjugated with human anti-Ago2 antibody (Millipore) or negative control normal mouse IgG (Millipore). Anti-SNRNP70 (Millipore) was used as a positive control for the RIP procedure. Samples were incubated with Proteinase K with shaking to digest the protein, and then the immunoprecipitated RNA was isolated. The RNA concentration was measured using a NanoDrop (Thermo Scientific) and the RNA quality was assessed using a bioanalyser (Agilent, Santa Clara, CA, USA). Furthermore, purified RNA was subjected to qRT-PCR analysis to demonstrate the presence of the binding targets using the respective primers.
"year": 2014,
"sha1": "fd41888bcde0eccbe486748fa904903ad3c64f2e",
"oa_license": "CCBY",
"oa_url": "https://molecular-cancer.biomedcentral.com/track/pdf/10.1186/1476-4598-13-92",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fd41888bcde0eccbe486748fa904903ad3c64f2e",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
The BIVEE Project: an overview of methodology and tools
The EU needs an effective exit strategy from the crisis, with special attention to SMEs, which represent 99% of the enterprises active in the European production system. To this end, innovation appears to be a key factor to relaunch the EU industrial system. The BIVEE project proceeded for almost 4 years to develop a rich framework, i.e., a methodology and a cloud-based software environment, that includes business principles, models, and best practices, plus a number of advanced software services, to support and promote production improvement and business innovation in virtual enterprise environments (essentially, enterprise networks).
The overall objective of BIVEE is to strengthen the production and innovation capability of European SMEs, leveraging constant production improvement and continuous business innovation.
In conceiving the project, we were aware that there are a number of barriers that EU SMEs still face and which prevent them from systematically adopting the emerging paradigms of collaborative business innovation. Among these barriers, we may cite the high fragmentation of the SMEs industrial fabric, the inadequate organizational and technological culture, the limited resources (such as financial, management expertise and competencies) available for SMEs, the lack of systematic connections with research institutions, and often their naïve and unfocused approach to innovation. However, there are a number of new business models and ICT solutions for supporting and fostering innovation in SMEs, such as Open Innovation, crowdsourcing, new forms of social networking, that we decided to acquire and transform into usable solutions for the European SMEs.
In BIVEE we assumed a broad, holistic approach to business innovation, tackling the methods, concepts, ideas, inventions, and artefacts aimed at renovating in a sustainable manner the way enterprises conduct their production processes, considering that everything is connected at the business, technical, economic, and social levels. This is in line with the policy of the European Commission (see the Innovation Union Flagship initiative 2 ), and in particular the strategy aimed at portraying the next generation of innovation-driven enterprises: only a synergic combination of new technologies and new business models capable of supporting and respecting our social and political foundations can succeed in making the European enterprise system evolve and in pushing forward the entire socio-economic system.
Innovation largely remains a people-centric, 'brain intensive' activity. In this perspective BIVEE proceeded in building a distributed, collaborative, knowledgeintensive framework, where not only emerging ICT solutions but also innovative business models, novel management methods, and cooperative working styles have been integrated to the benefit of interoperable virtual enterprises.
The methodological framework of BIVEE is based on two interlaced spaces: Value Production and Business Innovation space. These are the concrete spaces, where resources and people cooperate to achieve the business goals, but at the same time they correspond to two knowledge spaces where the reality, inside and outside of the enterprise, is extensively represented in digital form. According to the BIVEE approach, such a rich knowledge base is used in the value production space (VPS) activities to support the monitoring and improvement of highly distributed production processes, and in the business innovation space (BIS) activities, to support the continuous innovation in a VE.
On the technological side, the logical architecture of the BIVEE Software Environment has been structured according to the two mentioned virtual spaces (VPS and BIS), each of which is managed by a software platform: the Mission Control Room (MCR) and the Virtual Innovation Factory (VIF), respectively. Furthermore, to manage the VE knowledge supporting the activities of the two mentioned spaces, a platform for the management of a Production and Innovation Knowledge Repository (PIKR) has been developed. The three platforms stand on top of the Raw Data Management and Service systems that interface with the information systems local to the individual SMEs, providing the necessary data interoperability and service integration for such diverse information sources. Among the offered services, we may list the support to a rich gamut of business activities: from production monitoring to market watch, from innovative think-tank support to technology watch, appraisal and evaluation of production and innovation activities, up to risk assessment and prevention. An important position is given to innovation monitoring based on a number of Key Performance Indicators (KPIs) capable of tracking the progress of an innovation project and, subsequently, assessing the effects of the adopted improvement and innovation solutions. The four main platforms have been based on the Open Software Architecture paradigm and advanced mashup techniques, by extensively reusing resources available in the FLOSS world and leading-edge solutions provided by BIVEE partners.
To validate the BIVEE Environment, the project addressed two trial cases: the first trial concerned the area of distributed co-design, having innovation as a core focus (in Loccioni - General Impianti); the second trial took place on a traditional production/logistic chain (AIDIMA). The involved industry sectors have been carefully selected to validate BIVEE in very different cases: the latter concerns the 'mature' wood and furniture industry, while the former is positioned in the hi-tech sector of robotic and automatic measurement equipment. To achieve a thorough assessment of the BIVEE Framework, the trial cases have been organised in two main phases: in Phase 1, the First Monitoring Campaign, the activities of the two VEs have been monitored in their 'as is' practices, assessing their production and innovation performances before introducing the BIVEE platform; then, after the introduction of BIVEE, in Phase 2 we carried out the Second Monitoring Campaign, appraising the production and innovation performances once the BIVEE solutions had been adopted. Comparing the results collected in the two monitoring campaigns was a very useful activity that yielded important indications for the implementation of the final version of the BIVEE Framework, released at the end of the project.
The mission of BIVEE
The mission of BIVEE is to provide advanced solutions to boost productivity and innovation capabilities of networked SMEs. To this end BIVEE developed an integrated framework, i.e., a coordinated set of business methods, enterprise models and ICT solutions, to support adaptive, distributed, interoperable business ecosystems (in our case also referred to as Virtual Enterprise Environments: VEE) in pursuing continuous, distributed optimization and innovation practices.
Production optimization and innovation need to be addressed in a synergic fashion. Regarding the former, the main focus of BIVEE is on manufacturing that is mainly based on well-defined, deterministic value production processes where tangible and intangible assets are managed in order to achieve some business objectives in a cost-effective way. Here automatic equipment, such as robots, measuring sensors, and various actuators, nowadays plays a very important role. The BIVEE solutions concern the services aimed at monitoring the distributed production activities and intervening to correct deviations from the planned programs and/or to introduce all possible improvements. In these cases the interventions do not structurally modify the enterprise organization or production maps (e.g., according to the Kaizen approach).
Innovation concerns the capacity to conceive and introduce a marked discontinuity, such as the introduction of a new product on the market or a new production process, that will impact various enterprise dimensions, such as human resources, production means and methods, organization and finances, marketing strategies, etc. Innovation is a human-centric activity, often non-deterministic or serendipitous, in most cases starting with imprecise objectives and presenting a number of criticalities in planning, forecasting, and scheduling the activities, and in managing risk. Here BIVEE proposes a specific set of services.
The lifecycle of an innovation project has three basic phases: the inception, the evolution, and the conclusion. The inception typically starts according to five (not orthogonal) basic approaches: (i) push-mode and technology driven, when the innovation is generated on the supply side (e.g., by the appearance of a new technology); (ii) pull-mode and demand driven, when the innovation is motivated by a user need (i.e., by the future adopters); (iii) co-creation, when all the stakeholders cooperate together to generate product or process innovation (i.e., an open, collaborative approach); (iv) endogenous, when ideas come from within the enterprises of the ecosystem; (v) exogenous, when ideas come from the rest of the world. We believe that case (iii) is the most effective inception, although the hardest to achieve. In fact, innovative co-creation requires that different people, with different cultures, skills, and needs (and, sometimes, conflicting objectives) share a cooperation space spanning the whole product lifecycle, from R&D, to design, to production, until after-sales, and across the enterprise boundaries. When stakeholders belong to different realities (different enterprises, but also different roles and different geographical areas, including also external research organizations), often interacting remotely, with different levels of engagement and timing, the added value is even greater. This is a situation typical of Internet-based social networking.
The central phase of an innovation project is where ideas are consolidated and progressively elaborated, pushing them forward until an applicable innovative solution is achieved. This is the phase that will be elaborated in detail in this book, with a characterising point of view concerning innovation projects carried out in a distributed, collaborative, open network of organizations. The book then also addresses the final phase of innovation and its interactions with the production space; in case of successful innovation, the outcome will be transferred to production and, eventually, to the market. But the inverse flow, from production to innovation, also plays a crucial role, carrying over feedback on the adopted innovations and providing stimuli for starting new innovation projects.
Business ecosystems and virtual enterprises
Business ecosystem and virtual enterprise are two central notions in BIVEE. The notion of a virtual enterprise (VE) refers to an organizational and business model where different production organizations (e.g., enterprises, production units) join together with a predefined objective (e.g., achieving a given production or business innovation) and share skills and competencies in order to attain a specific result (e.g., a product, a service, or a target market share) [AAG 99], [Byunghak, 2001]. Once the business objectives and the production plans are defined, a VE creates a Value Production Space (VPS), i.e., a complex networked business environment where the value creation activities take place. A VPS is organised in different layers (e.g., operational, resources, management layer, etc.), where different resources are allocated (e.g., humans, services, machines) and different activities take place respecting different constraints and factors (e.g., production capacity, budget constraints, laws, deadlines, etc.). The most important characteristic involved in the selection of a production unit (PU) to participate in a VE is its capability. This characteristic reflects not only the availability of the required technology and the relevant skills and experience, but also its reliability in terms of its record in respecting commitments, the capacity of producing the required volumes, the flexibility in relation to program changes, and so on. All this information is maintained in the business ecosystem repository (part of the PIKR) and is constantly updated.
The coordination and collaboration of SMEs, or production units, in a single VE may raise specific issues, stemming from the fact that the different SMEs may significantly diverge in terms of organization model, management style, decision-making methods, and information transparency (but also 'hidden agendas'). Furthermore, different PUs and, more specifically, different production phases may be supported by different information systems (process models, data types, etc.). This diversity may mean that the information connected to the outputs (products/services) released in one production phase is not fully (or at least not easily) compatible with the input expected by the PUs operating in the successive phase. To cope with this problem, BIVEE adopted a semantic interoperability solution, based on a central repository (including a set of ontologies) for the semantic annotation of the various resources, using a unified reference vocabulary and knowledge base.
Furthermore, innovation requires continuous decision making, often to be done in a short time. Since there is no unitary center of command and responsibility is shared among the partners of the VE, there is the need to organize the VE information to support distributed decision making according to non-conventional methods. This is achieved primarily by providing a solid, reliable fact and knowledge base: a prerequisite for informed decision making. The idea of measuring, monitoring, and controlling innovation activities by using traditional time-cost-quality performance indicators is in general not suitable and in some cases could even be counterproductive, at the risk of seriously jeopardising an innovation project.
A second issue is the clear separation between local and global decision making. BIVEE provides the required functionality to overcome this dichotomy, allowing a loose integration (i.e., just to the extent needed for effective cooperation), where local modelling methods and working styles remain independent, while at the same time offering a solution to provide a global view of such a fragmented picture. Effective global coordination with marked local autonomy also presents the advantage of lowering the cost for a new enterprise to join a VE.
A Business Ecosystem (BES) [PEL 04] is a 'protected' space populated by enterprises willing to stay in touch with the prospect of joining together to set up a VE. To be accepted in a BES, an enterprise needs to satisfy a number of criteria and, once included in the community, to respect some common rules. Criteria and rules are freely decided in a democratic way by the community itself. Entering the BES can happen by invitation or application. In both cases the enterprise must provide substantial information, including competencies, skills, capabilities, and previous experiences, that composes the enterprise profile. Such profiles are freely accessible by the other members of the BES, and automatically filtered by dedicated BIVEE services that support partner search during the composition of a new VE. The BES rules also define the exit mechanism for enterprises intending to leave, including 'non disclosure' obligations regarding the other partners' information acquired while participating in the BES. Among the various advantages offered by participation in a BES, there is also the collective procurement opportunity.
In a BES, enterprises and stakeholders operate in a participatory area (including both the Production and Innovation Space) organized according to three concentric circles, as depicted in Figure 1.3.1. The inner circle includes the PUs that participate in the VE or VIF (several VE/VIF can be active in a BES at a given time). The middle circle represents the trusted business ecosystem (BES) that gathers all the enterprises that have been recognized and qualified, and are therefore able to guarantee the required levels of quality, performance, trust, and security. The outer circle represents the 'open sea', populated by all possible business players, potentially interesting for (and interested in) the ecosystem, but also players who compete in the same market and are potentially distrustful and hostile. The three circles offer different levels of openness and protection.
The key events of the BES lifecycle are listed below. BIVEE provides services to manage the following four basic events.
1. Evolution of the Business Ecosystem, when a new enterprise joins the BES or an existing one updates its capabilities or competences;
2. Exit of an enterprise from the BES, either by its explicit choice or through the progressive failing of the required membership criteria;
3. Start of a new VE, in response to a new market opportunity;
4. Termination of a VE, having reached its business objectives or having failed to fulfil the conditions required by the BES charter.
As anticipated, the BIVEE philosophy is based on two interleaved spaces where value production and business innovation take place. In the next two sections we will introduce the two mentioned spaces.
Value Production Space
The advent of the Internet has already offered new opportunities for enterprises to improve their business and production models; one of the key impacts is the increasing flexibility and adaptability of production carried out by a networked enterprise. Distributed, networked production models are difficult to design, implement, deploy, and manage in an optimal way. The 'traditional' approaches to enterprise optimisation are not suitable, and new solutions are sought to manage networked production paradigms capable of harnessing the power of the Internet.
In BIVEE we introduced the notion of a value production space (VPS), as a digital virtual reality aimed at modelling and representing the complex, distributed reality in which a virtual enterprise operates. In a value production space we have production units (PUs, sketchily represented by rounded boxes in Figure 1.3.1) that can be independent SMEs or organizational units in a single enterprise (we don't need to distinguish at this level). A PU can be of four different sorts: manufacturing, assembly, service, logistics. A production map is represented by a graph, with nodes that represent production units connected by flow arcs. In a production map, which also includes other complementary infrastructures such as storage warehouses and other necessary services, there are branches with alternative or parallel paths (a path is a linear subgraph). The primary role of a value production map is to represent the flow of goods and services, but also other flows, such as the financial and information flows. A value production process evolves over the production map, traversing a number of (pre-)defined paths, links, and units.
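The production map just described can be sketched as a directed graph; the following is a minimal illustration, where the class, field, and unit names are ours and not part of the BIVEE platform:

```python
from dataclasses import dataclass, field

# The four sorts of production unit mentioned in the text.
PU_SORTS = {"manufacturing", "assembly", "service", "logistics"}

@dataclass
class ProductionMap:
    """Directed graph: nodes are production units, arcs are flows
    (goods/services, but also financial and information flows)."""
    units: dict = field(default_factory=dict)   # name -> sort
    flows: list = field(default_factory=list)   # (src, dst, kind) triples

    def add_unit(self, name: str, sort: str) -> None:
        assert sort in PU_SORTS, f"unknown PU sort: {sort}"
        self.units[name] = sort

    def add_flow(self, src: str, dst: str, kind: str = "goods") -> None:
        self.flows.append((src, dst, kind))

    def successors(self, name: str) -> list:
        """Units directly downstream of `name` along any flow arc."""
        return [dst for (src, dst, _) in self.flows if src == name]

pmap = ProductionMap()
pmap.add_unit("PU1", "manufacturing")
pmap.add_unit("PU2", "assembly")
pmap.add_unit("Warehouse", "logistics")
pmap.add_flow("PU1", "PU2")
pmap.add_flow("PU2", "Warehouse")
print(pmap.successors("PU1"))  # ['PU2']
```

A value production process would then correspond to a walk over this graph along one of the (pre-)defined paths.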
In our view, a value production space (and the VE operating therein) is not a closed territory: it is open for (impromptu) contributions from the rest of the BES and, in different forms, from the external world; at the same time it is sufficiently protected to ensure the required levels of trust and security to the members operating within such a space (as exemplified by the dotted border line of the production space in Fig. 1.3.1 that represents a VE with its PUs). BIVEE provides the services and the methods to model the VPS, the VE, and to monitor the activities that take place along the production paths.
The Mission Control Room implements the services that BIVEE offers for managing the value production processes taking place in the VPS. In particular, it enables the users to:
- create and maintain the Business Ecosystem, that is, the "incubator" of new VEs;
- support the creation of a VE, helping the discovery and selection of partners, and possible evolutions of its composition;
- define and maintain the production map of each VE;
- monitor the distributed production activities, by gathering feedback from each PU in the VE;
- periodically update the production plan, distributing the updated tasks and workload to the interested PUs in the VE.
Fig. 1.3.1. A Value Production Map
As seen, the platform both guarantees that the daily routine is constantly controlled for maximum adherence to programs and pursues the objective of improving current operations. This objective is supported by production monitoring based on a number of ad-hoc Key Performance Indicators, tailored to each specific VE.
In setting up a new VE, BIVEE supports the development of two main documents:
- Business Plan, focused on the creation of the VE, starting from the production objectives, the market to be served, and the production resources involved; it also indicates the expected revenues and costs. One of the main supports provided is the screening of the capabilities of the candidate Production Units, in terms of type of production, skills, acceptable volumes, and time responsiveness, in order to configure the VE.
- Master Production Plan. This document comes afterwards and concerns a given delivery plan; it is compiled after having verified the production objectives, with quantities and times, and then the feasibility, by the configured VE, in terms of available resources, cost, schedule, and so on.
Once the VE has been set up, the platform provides support to the continuous production monitoring and management activities. These have been formalized through a set of process templates (see Chapter 2) focusing on the production process as seen at VE level, where each component PU is seen as a work centre of a traditional ERP (Enterprise Resource Planning system). The support to the local production management and control system of each single PU is not considered part of the BIVEE mission.
The Value Production Space processes are described according to the SCOR Framework 3 that introduces precise definitions of the functionalities mentioned above. Summarising, the primary production processes of the VE are (according to SCOR): Value Chain Plan, Production Plan, Production Source, Production Build, Production Deliver. These processes are applied to the VE in its entirety, but each PU will take care of specific tasks and will organise its own internal processes accordingly. The VPS processes, seen at VE level, are represented by networks of macro activities performed by the involved PUs. For example, Production Assembly, an activity that can be found in the Production Build process, does not detail the specific sequence of operations in the local PU, but sees it as a 'black box', taking as input the consumption of the required components, and producing as output the staging of complete assemblies, according to the Master Production Plan.
Plans that are distributed to the involved PUs can require some later negotiation in case of unpredicted events. For this reason, advancement feedback is harvested from the PUs, in the form of KPI measures, to create a picture of the progress in the production processes. If a PU is not able to send the required information, for instance because its enterprise system is down, the BIVEE platform provides a Web-based user service for manual data collection.
Production knowledge support
Focusing on the value production activities of a VE, a central role is played by the Production and Innovation Knowledge Repository (PIKR). The objective of this tool is to maintain a constantly evolving knowledge base, aligned with the state of play in the production field, extended along various dimensions:
1. Knowledge sources: the PIKR contains crowd-sourcing mechanisms, in order to harvest knowledge from an extended audience of business, market, and technology experts, who may significantly help in production improvement.
2. Organizational memory: the history of all improvement and innovation exercises, whether successful or not, is maintained and can be retrieved in order to evaluate results and lessons learned.
3. Similarity of topics in different production activities: production improvement often stems from addressing the issues of a product or a process with solutions used in different domains. The availability of "knowledge nuggets" coming from more or less contiguous territories can trigger interesting searches for new materials, operations, applications, etc.
Business Ecosystem
To be considered as a candidate for participating in a VE, an enterprise has to be screened on the basis of a number of characteristics (reliability, technology level, production capacity, etc.). Enterprise profiles, maintained in the PIKR and adjusted to the evolution of the state of play, are analysed in the context of the VE interests.
New members of the BES are considered for membership after they explicitly apply (push mode) or by invitation (pull mode). The scouting of possible new entries in the BES is supported by the Observatory function; when a new VE needs to be created and none of the existing BES members is eligible, the VE manager can activate a BIVEE search to screen enterprises existing in the "outer world" that exhibit interesting characteristics. Out of the list provided by the BIVEE semantic crawler, the VE manager can select candidates that can be contacted and invited to join the ecosystem first, and the VE afterwards.
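The capability-based screening of candidate enterprises could look like the following toy filter; the profile fields, names, and thresholds are illustrative assumptions, not the actual PIKR schema:

```python
def screen_candidates(profiles, required_skills, min_capacity):
    """Return profiles that cover all required skills and meet a minimum
    production capacity: a simplified stand-in for the PIKR-backed
    partner search described in the text."""
    required = set(required_skills)
    return [
        p for p in profiles
        if required <= set(p["skills"]) and p["capacity"] >= min_capacity
    ]

# Hypothetical enterprise profiles:
profiles = [
    {"name": "ACME", "skills": ["milling", "welding"], "capacity": 500},
    {"name": "Beta", "skills": ["welding"],            "capacity": 900},
]
hits = screen_candidates(profiles, ["welding"], min_capacity=600)
print([p["name"] for p in hits])  # ['Beta']
```

A real implementation would of course also weigh the softer characteristics mentioned in the text (reliability record, responsiveness, previous experiences).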
Product and/or Process Improvement
VE changes aimed at achieving the planned improvements need to be implemented in a way that minimises the impact on business as usual (BAU). Therefore, it is important to maintain a clear separation between current production programs and executions, on the one hand, and the assessment of the feasibility and effectiveness of changes, on the other.
The triggering of improvement requests, management of the changes, analysis of results, etc. are supported by the functions of the MCR, while the actual test of the implemented improvement, and the relevant feedback, is managed by the VPS monitoring services: the main knowledge objects involved in these services are KPIs, elaborated below.
Ongoing VE production
The SCOR modeling frame forms the basis of the MCR services, which are grouped into the following components: Modeler, aimed at the VE setup, including the modelling of production maps in the VPS; Assistant, which supports the rollout of the Master Production Plan, with its production processes; and Monitoring, which manages the feedback gathering from the operational field, based on the KPIs coming from the PUs. The knowledge objects involved in this case are, on one side, the production schedules for the PUs and, on the other, the production advancement, reflected by the KPI measures harvested from the PUs.
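The KPI feedback harvested from the PUs can be illustrated with a toy roll-up; the specific KPI (schedule adherence) and the aggregation rule are our assumptions, chosen only to show the shape of such a computation:

```python
def ve_schedule_adherence(pu_reports):
    """Roll PU-level measures up to VE level: the fraction of units
    delivered on time over units planned, one report per PU."""
    planned = sum(r["planned"] for r in pu_reports)
    on_time = sum(r["on_time"] for r in pu_reports)
    return on_time / planned if planned else 1.0

# Hypothetical advancement feedback from two PUs:
reports = [
    {"pu": "PU1", "planned": 100, "on_time": 90},
    {"pu": "PU2", "planned": 50,  "on_time": 45},
]
print(ve_schedule_adherence(reports))  # 0.9
```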
A Participatory Space for business innovation
One of the primary goals of BIVEE has been to address innovation with an 'industrial' philosophy. In this perspective, innovation is seen as an intangible 'knowledge artefact' to be built starting from 'raw knowledge', suitably selecting and connecting the required 'knowledge parts', then proceeding to create the 'missing parts' needed to realise the final artefact, i.e., the body of knowledge that represents the sought innovation. The conception of such an engineering approach to innovation has been particularly challenging, since we had a narrow path to follow, trying to push the proposed methodology towards a more rigorous and systematic approach without jeopardizing the creativity required by innovation. Furthermore, the innovation process is inherently less defined than a production process. In this frame we introduced the notion of a Virtual Innovation Factory, based on the following elements:
- Human experts, who are considered, with their intuitions and creativity, the primary engine of innovation.
- A set of guidelines, intended to be used by the innovation teams in a flexible and discretional fashion.
- A knowledge repository, holding all (possible) information that can be useful during the innovation project; it holds pre-existing knowledge, as well as the new knowledge produced (and acquired) during the work.
- A virtual collaboration space, where the distributed innovation teams can exchange ideas, questions, comments, and answers. The collaboration, which can be synchronous or asynchronous, is always supported by a ubiquitous, easy-access interface to the knowledge repository.
-Monitoring and assessment services, supported by a platform for collecting facts and evidences from the various innovation teams, feeding then the KPI Management System that will constantly provide the state of play of the project, based on quantitative elements.
The VIF eventually produces innovation artefacts that are passed to the VE and then applied to the VPS to concretely implement the innovative solutions. In rolling out the innovative solutions, the VE needs to carefully understand the cascading effects of the proposed changes. Such changes generally require the re-alignment of the VE over several dimensions and enterprise areas, beyond the one initially targeted by the sought innovation (here Change Management is centrally involved; it is not directly addressed by BIVEE, since it is a well-elaborated discipline, very rich in proposals and scientific results [PAT 08]).
In summary, Business Innovation is a designed, managed transformation of some aspects of an enterprise aimed at a substantial improvement of:
- the quality of delivered products (goods, services) and customer satisfaction;
- production processes and workers' satisfaction;
- cost reduction and/or revenue increase;
- sustainability of the production (i.e., implementing transformations respectful of working conditions, social context, environment, ...).
An innovation project typically starts with a (more or less) defined objective that is positioned in one focus dimension (e.g., introducing a new product in the catalogue); then it is necessary to identify all the other dimensions that are impacted by the specific innovation. Often such an impact is overlooked, and that implies a high probability of failure. The primary dimension that is always involved concerns enterprise processes, but to a certain extent all the other dimensions are affected.
The business innovation space is organised in a different way with respect to a value production space. The latter typically transforms raw material into finished products (or elementary services into complex services), while in the innovation space we take existing production processes aiming at producing new processes. So, innovation primarily operates on the enterprise itself, which is the object of the transformation. Furthermore, an innovation process is inherently different from a production process, since the former is essentially ill-defined and typically built ad hoc. Even if it is based on existing guidelines and makes use of pre-existing practices (i.e., guidelines, sub-processes), large space is (hopefully) left to creativity, intuition and 'lateral thinking', which can hardly be encapsulated in systematic, repetitive tasks. For this reason, we prefer to avoid the term 'process' (implying a well-defined and organised set of activities repeated in time) when we talk about innovation, 'project' being more suited.
An innovation project sees the initial involvement of creative units, followed by specialised units addressing specific issues, such as design, engineering, financial and market ones. All the units cooperate and interact by means of (generally distributed) communication platforms supporting the continuous exchange of ideas, comments and suggestions, design artefacts, mock-ups, blueprints, but also quantitative data and simulation outputs.
Similarly to what was introduced for a VPS, a business innovation space is characterised by an innovation map: a connected graph with nodes representing activities (and the teams enacting them) and arcs representing the flow of knowledge. As opposed to a production map, an innovation map is not fully defined when the activities begin; it is dynamically specified while it is traversed over time, and new steps are dynamically identified on the basis of the achieved results and the encountered problems. The traversed paths are aimed at progressively increasing the knowledge necessary to achieve the sought innovation. Such a progression typically starts from the existing state of play, a consolidated set of products and production processes, gradually acquiring new knowledge until the innovative solutions are fully specified. In the most radical cases, the innovation project may require the extinction of (part of) what exists, to be replaced by radically new solutions 4 . In parallel, we need to consider the risk that an innovation project may fail; therefore it is necessary to show, with reliable forecasting methods, that the undertaken direction has good chances of terminating by delivering the expected results.
Innovation management
At the core of the Business Innovation Space is the concept of innovation management. Innovation management is the process of managing ideas through the stages of the innovation cycle until they reach industrial maturity and eventually the market [HAM 07]. The innovation cycle describes the activities involved in taking an innovative idea and elaborating it to progressively achieve industrial-strength products or services. In managing innovation, there is the need to find a virtuous balance between the need to disclose ideas and knowledge, to facilitate (even unexpected) external contributions, and the need to protect the key knowledge assets, to prevent even partial results from being acquired and used by competitors.
In BIVEE we proposed the Business Innovation Reference Framework (BIRF) to organise the BIS according to a rigorous method while preserving the necessary flexibility. The BIRF is based on three main pillars: (i) the BIVEE Innovation Waves, (ii) the document-based collaborative knowledge management, and (iii) the monitoring strategy, based on carefully selected KPIs. All three pillars are implemented by the collaboration space of the Virtual Innovation Factory (see below).
BIVEE Innovation Waves
An innovation project, typically triggered by the 5 anticipated cases (see Sect. 1.3.2), starts by carefully considering new technological and market opportunities, having also in mind specific client needs. This is achieved with an exploration and scouting phase, including the access to pertinent knowledge available at well-renowned research and innovation centres (of partners, agencies, universities, etc.). Then, according to BIVEE, an innovation project is organised following the 4 BIRF Innovation Waves: Creativity, Feasibility, Prototyping, and Engineering, as described below.
Creativity Wave: This first wave starts with an innovation idea or a problem to be solved, providing a first description. Then, a team is established that starts to elaborate the initial idea, searching for similar ideas and previous experiences (including past failures) related to it. If the idea is promising, a Virtual Innovation Factory is created, with one or more teams with resources belonging to different real enterprises.
Feasibility Wave: This wave starts when the initial idea is sufficiently specified, with its structure, functions, scope, and intended impact, and approved by the corresponding manager. Then it proceeds by elaborating the business plan, the market analysis (including competitors and customer profiling) and the feasibility documents (e.g., technical, financial, industrial, market feasibility), with risk analysis and patent possibilities. The output of this wave, which includes a preliminary design, is checked by the top management that, in case of positive evaluation, allocates the required budget.
Prototyping Wave: In this wave, prototyping and testing (in the lab, as well as in the field) are carried out. This is the wave in which the innovation is concretely projected into the real world for the first time, allowing the initial idea to be confronted with implementation and practical issues.
Engineering Wave: This is the final wave that concerns the industrialization of the innovative solutions, with the documentation to be transmitted to the VE to start the rollout of the innovative solution in the value production space. Also the prospective budget, break-even point, and business model are defined, together with all the other documents that concur to form the knowledge asset necessary to activate the industrial production. Such knowledge includes documents like the Bill of Materials (in case of a manufacturing product), production plans (indicating what to make or buy), delivery strategy and set-up instructions, testing and maintenance procedures.
We call them waves since they are logically sequenced in time, but they are tightly interconnected, and the start of a new wave does not imply that the previous one has been fully accomplished. Furthermore, there will often be the need to jump back and forth to complete a document or to correct it on the basis of later findings. For instance, during the prototyping wave there can be new findings that jeopardize the results of a previous financial feasibility study, requiring therefore to rethink some parts of the innovation under elaboration. The wave approach is sketchily depicted in Figure 1.3.3. The innovation waves represent a powerful framework to guide innovators in achieving an innovation project. They are sufficiently powerful to be used for different kinds of innovation (e.g., product, process, market, etc.) and in different application domains (from automotive to tourism, from health to government). The guidelines associated with the waves indicate a number of document templates that, to be filled in, require specific knowledge to be gathered. Waves are also associated with a number of carefully conceived Key Performance Indicators and a method to monitor and assess the progress of the work.
The architecture of the VIF platform.
The BIRF framework represents the rationale of the Virtual Innovation Factory (VIF), and in particular of the software platform that supports the innovation projects. The VIF platform has been conceived to support collective knowledge creation and management of innovation. To this end, the primary components are:
Shared Semantic Whiteboard (SSW): This is a knowledge sharing and visualization Web platform, where each member of the innovation team can post ideas, comments, issues, etc. The SSW can be remotely accessed; it displays elements that can be seen, edited and commented on by all the members of the team (both in sync and async mode). Furthermore, the platform proactively searches for and associates to SSW elements relevant knowledge items (e.g., documents) extracted from the Open Innovation Observatory and the PIKR (see below).
Open Innovation Observatory (OIO): This is a knowledge repository where the dedicated search engine of the VIF collects and organises the material extracted from various public sources, both on the web (the open section) and on the partners' local repositories. Among the stored knowledge there is information about worldwide excellence centres, various projects and scientific results relevant for the community. A careful and continuous observation of the evolution of the external world, managed by the OIO, guarantees that BIVEE holds the information necessary to timely trigger suitable optimization and/or innovation actions.
Collaborative Innovation Capability Maturity (CICM): The BIVEE framework has also proposed a set of criteria and guidelines for networked SMEs (and enterprises in general) to (i) verify the innovation readiness of the enterprises and (ii) identify a route that an enterprise can follow to progressively improve its innovation capability. The innovation CMM model is inspired by the well-known capability maturity model originally introduced for software engineering by the SEI (Software Engineering Institute), but in addition to the 5 levels that concern the CMM of a single enterprise (Initial, Repeatable, Defined, Managed, Optimized) it introduces another dimension to cater for the virtual (networked) aspect of the VE. The latter dimension is in turn organised in 4 levels: Single organization, Network Awareness, Network Consent, Network Dedication.
The CICM model has guided the implementation of a web-based tool for the self assessment of innovation maturity of a VE (http://innonetscore.de).
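The two-dimensional maturity grid described above can be sketched as a simple data structure. The level names come from the text; the coordinate-based scoring below is a hypothetical illustration, not part of the CICM specification.

```python
# Illustrative sketch of the two-dimensional CICM maturity grid.
# Level names are from the text; the (capability, network) coordinate
# scheme is a hypothetical way to locate an enterprise on the grid.

CAPABILITY_LEVELS = ["Initial", "Repeatable", "Defined", "Managed", "Optimized"]
NETWORK_LEVELS = ["Single organization", "Network Awareness",
                  "Network Consent", "Network Dedication"]

def maturity_position(capability: str, network: str) -> tuple:
    """Return the (capability, network) coordinates of an enterprise
    on the CICM grid, using 1-based level indices."""
    return (CAPABILITY_LEVELS.index(capability) + 1,
            NETWORK_LEVELS.index(network) + 1)

# Example: an enterprise with defined internal processes that has only
# just become aware of its network sits at position (3, 2) on the grid.
print(maturity_position("Defined", "Network Awareness"))  # (3, 2)
```

A self-assessment tool such as the one mentioned below would, in this reading, map questionnaire answers onto such a pair of coordinates.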
An integrated view of VPS and BIS
Business innovation and production improvement have in common that they both aim at changing some aspects of the enterprise for the better. As anticipated, a crucial problem is the identification of the key enterprise dimensions where such changes will take place. It is obvious that the enterprise dimensions are not independent of one another and, in general, the first changes will trigger a sort of chain of effects, with a more or less wide impact. In a broad view, the enterprise transformations can involve the following dimensions: Product, Service, Process, Technology, Organization, Market, Strategy. In BIVEE we decided to focus on the first four dimensions.
Today, the transformations are increasingly happening at a fast pace, with a trend towards a continuous improvement / innovation paradigm. Figure 1.3.4 reports a sketchy representation of two nested cycles, one concerning the optimization cycle and the other the innovation cycle.
Figure 1.3.4. The Optimization-Innovation Cycles
According to the BIVEE approach, the optimization-innovation cycles are tightly interwoven, supported by the two application platforms that we have seen above: the Virtual Innovation Factory and the Mission Control Room. In fact, the two corresponding spaces, BIS and VPS, are also tightly interwoven, while maintaining respective roles, objectives, and characteristics that are inherently different. As anticipated, the idea is that an innovation map takes as input a 'consolidated' value production map and generates as a result a new, innovative, production map.
Figure 1.3.5 sketchily represents such an integrated view, where the upper part shows the existing production map (PMx), i.e., the AS-IS situation that is going to be changed by the sought innovation; the central part symbolises the innovation space where an innovation solution is progressively elaborated; the lower part illustrates the innovated production map (PMx'), after the innovation has been fully implemented. The BIVEE platform essentially operates on this meta-space, adopting the most effective knowledge representation methods and notations. In the foreseeable future there will be a strong impact on organizational models, where enterprise applications and services will undergo a transformation from the traditional monolithic on-premise systems (such as ERPs) into heterogeneous collections of services, available from different providers on demand and, e.g., on a pay-as-you-go basis. The traditional three-layer architecture data-application-presentation is now implemented by services which allow for a boundless access to heterogeneous data (e.g., with a linked open data approach), for the accomplishment of enterprise operations (planning, forecasting, monitoring, management) and the implementation of new forms of mobile and fixed human-computer interaction (the so-called Service Front Ends). Cloud Computing is also pushing forward a comprehensive service paradigm to implement infrastructure-, platform-, and software-as-a-service (IaaS, PaaS, SaaS) solutions. Finally, some functions previously implemented in the application part of the enterprise system are now moving to the infrastructural part (see for instance the idea of an ISU: Interoperability Service Utility), becoming new kinds of network utilities.
We envisage that Business Intelligence, and more generally Big Data Analytics, services will be among the key enablers for a virtuous growth of EU enterprises and will be among the value-added services that will characterize next generation enterprise applications capable of influencing new styles of fact-driven management.
In this frame, the macro-architecture of the BIVEE Platform is conceived to put the business user in the centre. In particular, the idea is that managers will progressively become 'coaches', active in monitoring and managing both production and innovation spaces with an open, participatory style. To this end, the organization of the knowledge repositories, the offered services and, overall, the user interfaces are conceived to create a comfortable, familiar environment for the (least technical) users. For this reason, particular attention has been paid to achieving advanced graphical user interfaces for the MCR and the VIF, where business people will collaborate in monitoring and managing enterprise entities (much in accordance with the principles indicated in the FInES Research Roadmap 2025 5 ).
The BIVEE software environment, besides the platforms conceived for the end users, MCR and VIF introduced above, includes the following platforms. Please note that all the platforms and services introduced in this chapter will be addressed in the book by the chapters specifically dedicated to each of them.
Production and Innovation Knowledge Repository (PIKR): This repository
contains the data and knowledge concerning production and innovation activities coming from the different partners of the VE and VIF, with the objective to provide a wide but 'controlled' knowledge sharing. It complements the OIO that, conversely, holds public 'external' knowledge.
In particular, the PIKR stores knowledge about all the teams working in the BIS with their competences and experiences and the achievements of past projects, but also information on the evolution of the VE production space. The PIKR is centrally based on various ontologies: DocOnto, for the business document templates, ProcOnto, for business process schemes, and KPIOnto, for the various key performance indicators. The PIKR, which adopts particular security mechanisms to protect its content, is a virtual repository, since the actual resources physically reside locally, at the VE/VIF partners' sites: it hosts and manages ontology-based images of such resources. The overall architecture highlights the web-based user interface on the top and the service-data layer on the bottom; in the middle are the four software platforms that provide the end-user services (the two blocks on the left) and the knowledge-data services (the two blocks on the right).
Trial cases and Impact
Impact creation has been one of the central goals of BIVEE and therefore an integral part of the project. To this end, two pilots have been carefully selected to provide a good coverage of different industrial cases; in fact, they are positioned at the two extremes of the technological scale: one belonging to the low-tech sector of wood and furniture and one to the hi-tech sector of robotics and automated measuring equipment. This strategic choice aimed at demonstrating that the value proposition of BIVEE can really have a wide scope. The second qualifying choice has been to carry out a comparative assessment in two distinct phases, launching two different monitoring campaigns. The First Monitoring Campaign (FMC) aimed at acquiring evidence about the performance of production and innovation activities in the 'as is' situations, before the adoption of BIVEE. The First Monitoring Campaign, besides collecting quantitative and qualitative information from the operational fields, brought to the project the strategic advantage of involving end users as early as possible. Getting in contact with the future adopters of BIVEE gave us the possibility to introduce them to a number of clues, while allowing us to gain a better understanding of a number of issues that were scarcely considered in the initial design of the software environment. Such issues, if underestimated in the early design phase, would have caused a significant delay in the later phases of the project. This has been an important lesson for the engineers and designers: a good expertise and a strong theoretical background always need to be confronted with the real needs of the users and the stakeholders.
After the FMC, we finalised the first BIVEE prototype, starting to progressively deploy it in order to begin the Second Monitoring Campaign (SMC), aimed at gathering the information on the two trial cases observed in the presence of the BIVEE platform. The objective has been that of confronting the evidence collected in the SMC with the corresponding evidence collected in the FMC. As usual, reality fully reveals its complexity only when you start to practice it; any kind of theoretical model will hardly be able to seize it. Therefore, in the SMC we realised that a systematic contrast with the FMC data could hardly be achieved. Reality proved to be both faster and slower than expected. Faster, since in a year many things have changed, even in the same industrial context (due to the international crisis, to splits and mergers of enterprises, etc.), and slower, since we realised that many enterprise phenomena, especially those connected to structural changes, need time to take place. Therefore, a monitoring campaign of six months is too short to cover a significant time span, both in the case of improved production processes and of innovation projects. Nevertheless, we were able to collect several pieces of evidence showing the positive impact induced by the adoption of BIVEE. At a strategic level, the large majority of users and stakeholders agreed that BIVEE is going to occupy an important area where there is very little on offer and, at the same time, where there is a manifest need. Such a need will be steadily growing in the future, since even when the economy (hopefully) restarts, the call for more effective management of production improvement and innovation will increase. And solutions like BIVEE will be needed all the more.
The main lesson learned concerned the limits of technology, especially if introduced in a working context where there is still a digital gap. There is a risk embedded in the idea that good technical solutions will 'automatically' improve the way people operate, make decisions, collaborate and produce value. We already knew it, but we had further evidence that when we deal with socio-technical systems, such as the BIVEE Environment, the 'socio' component plays a central role. Therefore, when designing such a kind of system, in parallel to the development of the technological solutions, it is necessary to consider, with a marked collaborative and
Unconfined Compressive Strength of Black Cotton Soil Mixed with Cement and Polypropylene Fibre
The aim of the study was to determine the value of unconfined compressive strength after stabilizing black cotton soil with cement and with a combination of cement and polypropylene fibre. In this study, laboratory experiments were conducted to evaluate the effect of using polypropylene fibre to enhance the strength of cemented black cotton soil. Three fibre polymer contents were used (0.3, 0.6 and 1% by dry weight of the soil) to examine the unconfined compressive strength of black cotton soil mixed with three different cement contents (5, 10 and 15%). For a broad interpretation of the fibre-reinforced soil-cement behaviour, several factors were considered in this study, such as time period (3, 7 and 28 days), at a water content of 35%. The 35% water content was chosen because the water content measured at the time of soil excavation was close to 35%, so the soil was improved at this natural water content rather than at the OMC. This investigation revealed that polypropylene fibres can improve the cemented black cotton soil strength, as the random distribution of fibres forms a three-dimensional network, which can link the soil particles together to build a coherent unit and restrict the particles' movement. Increasing the cement content increases the unconfined compressive strength of the black cotton soil at time periods of 3, 7 and 28 days. Polypropylene fibres can be used to raise the strength of disturbed cemented black cotton soil, and the UCS value increases 1.5-2.5 times in comparison to cement alone at the different time periods. The maximum UCS value is obtained at 15% cement content mixed with 1% fibre content in black cotton soil. From the test results it is observed that soil mixed with cement at a high dosage can be replaced by a mix with a low cement content and a high fibre content.
I. INTRODUCTION
There exists a variety of soil types in different parts of the country. Some of the soils are problematic from a construction point of view, i.e., marine clay, laterite soil of the Southern region and the Black Cotton (BC) soil. Black cotton soils are expansive in nature due to the presence of montmorillonite and illite clay minerals. Because of the swelling and shrinkage characteristics of the soil, special treatment of the soil or a special design needs to be adopted. Soil stabilization is the action of increasing the strength and durability of soil. The simplest stabilization processes are compaction and drainage (if water drains out of wet soil it becomes stronger). Another process is improving the gradation of particle size, and further improvement can be achieved by adding binders to the weak soils [8] (Rogers et al., 1996). Soil stabilization depends mainly on chemical reactions between the stabilizer (cementitious material) and soil minerals (pozzolanic materials) to achieve the desired effect. The chief properties of soil which are of interest to engineers are volume stability, strength, compressibility, permeability and durability [7] (Ingles and Metcalf, 1972). For a successful stabilization, laboratory tests followed by field tests may be required in order to determine the engineering and environmental properties. Results from the laboratory tests will enhance the knowledge on the choice of binders and amounts. For stabilization purposes there are various materials available, such as chemical stabilizers, pozzolanic stabilizers and geosynthetics. The chemical stabilizers include cement, fly ash, lime, bitumen, etc. But these stabilizers are used in very large amounts to stabilize the soil. Therefore, there is a need for the development of other kinds of soil additives, like fibres, which stabilize the soil in very small amounts. The process of soil stabilisation with cement involves the hydration of the cement powder with soil pore water during the in situ mixing process.
The primary reaction involves the formation of the two silicate compounds (C3S and C2S), and hydrated lime is deposited as a separate crystalline solid phase. The cementitious particles thus formed bond together and surround the soil particles, forming a solid hardened skeleton. Despite the advantages of ordinary Portland cement in soil improvement, several drawbacks can be considered, particularly from an environmental point of view. CO2, NOx (nitrogen oxides) and particulate air suspensions are the most significant problems arising from using cement. Cement is considered one of the major causes of the emission of CO2. Almost one ton of CO2 is released with every ton of cement production. For that reason, cement alone is responsible for the production of 5% of CO2 annually across the world [6] (Huntzinger et al., 2009), [10] (Worrell et al., 2001). NOx is another byproduct of cement production along with CO2, which is produced in the cement kiln (2.3 kg per ton of clinker produced) [2] (Bremner, 2001). Therefore, finding new sustainable materials which can totally or partially replace cement is an important challenge nowadays. The most commonly used synthetic material, polypropylene fibre, is used in this study. This material has been chosen due to its low cost and its hydrophobic and chemically inert nature, which does not absorb or react with soil moisture. Polymer is one of the promising materials recently applied for soil stabilization; it comprises long chains of monomers which are associated with one another by sufficiently strong and flexible van der Waals forces. Polypropylene fibre is the most widely used inclusion in the laboratory testing of soil stabilization. Currently, polypropylene fibres are used to improve the soil strength properties, to reduce the shrinkage properties and to overcome chemical and biological degradation. Polypropylene fibre also enhances the UCS of the soil and reduces the volumetric shrinkage strain and swelling pressures of expansive clays.
The effect of polypropylene fibre inclusions on the soil behaviour could be visibly observed during the UCS test: the failure of the unreinforced specimens resulted in the development of a failure plane, while the polypropylene fibre reinforced specimens tended to bulge, indicating an increase in the ductility of the fibre-soil mixture. Previous investigations considered reinforcement with natural fibres (like prosopis fibre, coir fibres, etc.) and artificial fibres (like nylon fibre, waste polythene, PVA fibres, polyester and polypropylene fibre); they also explored various parameters like binder content (from 5 to 20%), fibre content (0.2 to 1%) and time period (1 to 28 days), based on various tests. All the above-mentioned investigations were done at the optimum moisture content of the soil. As the natural water content of the soil to be improved in the field is different from the OMC, in this study investigations are done by mixing polypropylene fibre with cemented soil to improve the strength behaviour at the natural water content of the existing black cotton soil.
III. MATERIAL USED

A. Soil
The black cotton soil used for the study was collected from Bhopal region of Madhya Pradesh. The soil sample was collected in polythene bags and then air dried.
B. Cement Material
The cement which is used here is ordinary Portland cement of Grade 43.
C. Polypropylene Fibre
Properties of polypropylene fibre, in accordance with the product datasheet supplied by the manufacturer, are listed in Table 3.

IV. METHODOLOGY

In the present investigation the soft soil, i.e., black cotton soil, was sieved and mixed with tap water for 5 min in a mechanical mixer at an initial moisture content of 35%, as at this moisture content the soil has very low shear strength and is barely able to take loads from the superstructure. The cement was then added to the slurry at cement contents varying from 5 to 15%, and both the clay slurry and cement were mixed for 5 min. The mixture was then mixed immediately with polymer fibre (i.e., 0.3, 0.6 and 1%), with a 5 min mixing time for each case. This mixture was placed in a mould of size 1000 cc (used in the standard light compaction test) in 3 layers of 25 blows each, using a hammer of weight 2.6 kg. From the prepared mould the UCS samples were extracted using a hydraulic jack. The cylindrical Unconfined Compression Test specimens were obtained by driving a sample extruder; the specimens were ejected and trimmed so that the finished dimensions of the specimen were 38 mm in diameter and 76 mm in length. The conventional Unconfined Compression Test was performed on the samples at a strain rate of 0.5 mm per minute, at time periods of 3, 7 and 28 days.
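As a rough aid to reproducing the procedure above, the batch quantities implied by the stated proportions (all percentages by dry weight of soil) can be computed as follows. The helper function and the 1 kg batch size are illustrative, not taken from the paper.

```python
def mix_quantities(dry_soil_g, cement_pct, fibre_pct, water_pct=35.0):
    """Return cement, fibre and water masses (grams) for one batch,
    with all percentages expressed by dry weight of soil."""
    return {
        "soil_g": dry_soil_g,
        "cement_g": dry_soil_g * cement_pct / 100.0,
        "fibre_g": dry_soil_g * fibre_pct / 100.0,
        "water_g": dry_soil_g * water_pct / 100.0,
    }

# Example batch: 1 kg of dry black cotton soil, 10% cement, 0.6% fibre,
# at the 35% natural water content used throughout the study.
print(mix_quantities(1000.0, 10.0, 0.6))
# {'soil_g': 1000.0, 'cement_g': 100.0, 'fibre_g': 6.0, 'water_g': 350.0}
```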
V. RESULT AND DISCUSSION
The axial stress-strain curve is plotted and the axial stress corresponding to an axial strain of 2% was taken for comparison of the test results, because at 10% and 15% cement content the samples fail at a strain of 2-2.5%.
VI. CONCLUSIONS
This study has proved the ability of polypropylene fibres to be used in improving the behaviour of cemented black cotton soil. Polypropylene fibres have been used in three contents (0.3, 0.6 and 1%) and mixed with cemented black cotton soil in three cement ratios (5, 10 and 15%) at the natural water content, i.e., 35%, observed experimentally from the soil sample collected from Bhopal. From the experimental work, the following conclusions can be drawn: A. The UCS value of the compacted black cotton soil increases with time. The UCS value at 3 days is 18.67 kPa, at 7 days is 22.59 kPa and at 28 days is 28.80 kPa. Thus the 7-day value is 1.21 times the 3-day value and the 28-day value is 1.54 times the 3-day value. B. The addition of polypropylene fibre to the cement-mixed black cotton soil is studied and it is found that mixing of fibre increases the UCS value of the soil. When fibre at 0.3% by weight of the soil is added to 5% cement content BC soil, the strength increases 1.62 times in 3 days, 2.05 times in 7 days and 1.65 times in 28 days. Nearly the same trend is seen at 0.6% and 1% fibre content. C. When fibre at 0.3% by weight of the soil is added to 10% cement content BC soil, the strength increases 2.07 times in 3 days, 2.01 times in 7 days and 2.06 times in 28 days. Nearly the same trend is seen at 0.6% and 1% fibre content. D. When fibre at 0.3% by weight of the soil is added to 15% cement content BC soil, the strength increases 1.87 times in 3 days, 2.05 times in 7 days and 1.72 times in 28 days. Nearly the same trend is seen at 0.6% and 1% fibre content. E.
Test results show that the UCS value corresponding to 15% cement content in cement-mixed soil is 968 kPa, and that with 5% cement content and 1% fibre content it is 802 kPa. These values are nearly the same; thus it may be concluded that the effect of 1% fibre content is equivalent to about 10% of cement. From this it is concluded that mixing 1% fibre content into the soil reduces the required cement content by 10%, i.e., a more economical and also environmentally better solution to stabilize the BC soil with cement.
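The strength ratios quoted in conclusions A and E can be checked directly from the reported UCS values; the snippet below is only a worked verification of that arithmetic.

```python
# Reported UCS values (kPa) for the compacted, untreated black cotton soil
# at 3, 7 and 28 days of curing (conclusion A).
ucs_untreated = {3: 18.67, 7: 22.59, 28: 28.80}

# Ratios relative to the 3-day value.
ratio_7 = ucs_untreated[7] / ucs_untreated[3]
ratio_28 = ucs_untreated[28] / ucs_untreated[3]
print(round(ratio_7, 2), round(ratio_28, 2))  # 1.21 1.54

# Conclusion E: 15% cement alone (968 kPa) vs 5% cement + 1% fibre (802 kPa).
# The two mixes reach comparable strength, supporting the substitution claim.
print(round(802 / 968, 2))  # 0.83
```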
256565354 | pes2o/s2orc | v3-fos-license | Efficacy of Soft-Rot Disease Biocontrol Agents in the Inhibition of Production Field Pathogen Isolates
The Dickeya and Pectobacterium bacterial species cause blackleg and soft-rot diseases on potato plants and tubers. Prophylactic actions are essential to conserve a high quality of seed potato tubers. Biocontrol approaches are emerging, but we need to know how efficient biocontrol agents are when facing the natural diversity of pathogens. In this work, we sampled 16 production fields, which were excluded from the seed tuber certification scheme, as well as seven experimental parcels, which were planted with seed tubers from those production fields. We collected 669 Dickeya and Pectobacterium isolates, all characterized using the nucleotide sequence of the gapA gene. This deep sampling effort highlighted eleven Dickeya and Pectobacterium species, including four dominant species, namely D. solani, D. dianthicola, P. atrosepticum and P. parmentieri. Variations in the relative abundance of pathogens revealed different diversity patterns at a field or parcel level. The Dickeya-enriched patterns were maintained in parcels planted with rejected seed tubers, suggesting a vertical transmission of the pathogen consortium. Then, we retained 41 isolates representing the observed species diversity of pathogens and tested each of them against six biocontrol agents. From this work, we confirmed the importance of prophylactic actions to discard contaminated seed tubers. We also identified a couple of biocontrol agents of the Pseudomonas genus that were efficient against a wide range of pathogen species.
Introduction
Several species of the genera Dickeya and Pectobacterium are causative agents of the blackleg and soft-rot diseases in Solanum tuberosum stems and tubers, respectively [1]. In Europe, the species currently isolated from blackleg and soft-rot lesions on potato plants encompass Pectobacterium atrosepticum, Pectobacterium parmentieri, Pectobacterium brasiliense, Pectobacterium polaris, Dickeya dianthicola and Dickeya solani [2]. D. solani emerged twenty years ago [3]. Some other species such as P. punjabense, P. versatile and P. parvum were less frequently detected in lesions of potato plants [4][5][6]. Over the past years, important efforts in genome sequencing of bacterial isolates deposited in collections and those collected by new samplings contributed to a more accurate delineation of Dickeya and Pectobacterium species (examples in [7][8][9][10]).
From this knowledge in taxonomy and genomics, different molecular tools have been developed for pathogen diagnosis, hence for identifying and discarding the contaminated plant materials from the production process of certified seed potato tubers [11]. For instance, in France, a quality scheme was set up along production of tuber seeds from the in vitro propagation of potato varieties until tuber harvest in the field. Before and after tuber harvest, intensive inspections are carried out in the frame of the official control and certification scheme implemented by SOC (French control and certification body). Before harvest, 1% of symptomatic plants is the highest threshold value of blackleg disease incidence for maintaining a given field in the production scheme. In post-harvest controls, the threshold value is set at 0.2% of dry and wet rots, expressed as a weight percentage of tubers (for details see http://frenchseedpotato.com, accessed on 20 November 2022). In France, each year between 2013 and 2017, which is the sampling period in this study, around 9000 lots were harvested from over 20,000 ha. Over this period, the mean rate of rejected plots was 0.85% per year. A maximum rate reached 2.1% in 2016 with particularly rainy conditions, which are more favorable to the development of the disease. The lowest rate was 0.3% in 2017, most likely because of drier environmental conditions (data from FN3PT http://frenchseedpotato.com, accessed on 20 November 2022).
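The pre- and post-harvest rejection rules above reduce to two threshold tests. As a minimal sketch (the function name and the example counts are hypothetical; only the 1% blackleg and 0.2% rot thresholds come from the text):

```python
def field_rejected(symptomatic_plants: int, total_plants: int,
                   rot_weight_kg: float, total_weight_kg: float) -> bool:
    """Reject a field if more than 1% of plants show blackleg before
    harvest, or if dry/wet rots exceed 0.2% of tuber weight after harvest."""
    blackleg_rate = symptomatic_plants / total_plants
    rot_rate = rot_weight_kg / total_weight_kg
    return blackleg_rate > 0.01 or rot_rate > 0.002

# 15 symptomatic plants out of 1000 (1.5%) exceeds the 1% threshold
print(field_rejected(15, 1000, 0.0, 100.0))   # True
# 0.5% symptomatic plants and 0.1% rot weight pass both checks
print(field_rejected(5, 1000, 0.1, 100.0))    # False
```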
In parallel to the pathogen diagnosis and prophylaxis approach, research has been carried out to identify potato varieties and genetic determinants involved in response to Dickeya and Pectobacterium pathogens [12,13]. Some others allowed the discovery of a wide variety of biological agents including phages and bacteria [14][15][16][17]. Because Dickeya and Pectobacterium pathogen populations are diverse and dynamic over time and space, we need to understand their structure in potato fields and evaluate the efficiency of the potential biocontrol agents using a refreshed collection of pathogen isolates.
In this work, we sampled 16 production fields that were rejected for producing seed potato tubers. We isolated 463 pathogens and compared the structure of the pathogen populations of these fields. In addition, we harvested asymptomatic tubers from thirteen of these rejected fields and we planted them the next year in experimental parcels. We isolated 206 additional pathogens and then we compared the structure of the pathogen populations in production fields (year N) and experimental parcels (year N + 1). Finally, 41 isolates that were representative of the collected species in the rejected potato fields were tested for their sensitivity against six biocontrol agents Bacillus simplex BA2H3, Pseudomonas brassicacearum PA1G7 and PP1-210F, Pseudomonas fluorescens PA3G8, Pseudomonas lactis PA4C2 and Pseudomonas sp. PA14H7 [14]. These biocontrol agents were previously identified after a screening of 10,000 bacterial isolates for their capacity to inhibit the growth of P. atrosepticum CFBP6276 and D. dianthicola RNS04.9 [14]. Hence, we could evaluate the biocontrol efficiency of these biocontrol agents against a wider range of pathogens that are representative of the species sampled in potato fields.
Isolation and Characterization of the Dickeya and Pectobacterium Pathogens
Between 2013 and 2017, 16 potato production fields (France) were sampled. These production fields were excluded from certified lots of seed tubers because they exhibited more than 1% of symptomatic plants (blackleg disease). In order to study Pectobacterium and Dickeya populations at the field scale, around 30 plants with blackleg symptoms were collected in each field, and a single bacterial isolate was retained from each of the 463 plants, resulting in 463 pathogen isolates.
Out of these 16 production fields, thirteen (P1 to P13) were used for plantation assays. From 800 to 4000 asymptomatic tubers were harvested (year N) in each field and then planted the next year (year N + 1) at the experimental station of Comité Nord (Achicourt, France). Out of the 13 experimental parcels, seven exhibited enough plants with blackleg symptoms for sampling: 206 diseased plants were collected (around 30 per parcel) and then 206 pathogens were isolated.
Our sampling approach revealed pathogen diversity at the field or parcel level but not at the individual-plant level. For both production fields and experimental parcels, pectinolytic bacteria were isolated from lesions on crystal violet pectate medium [18]. Bacterial isolates were purified on agar plates and characterized at the genus (Dickeya and Pectobacterium) and species levels using the PCR primers listed in Table S1 [19][20][21][22][23][24]. Characterization of the D. solani and D. dianthicola isolates from the production fields P1 to P17 was already reported in a previous paper [25].
Growth Inhibition Assays with Biocontrol Agents
Among the pathogen isolates sampled in potato fields between 2013 and 2017, 41 isolates representative of the species diversity were used in growth inhibition assays in triplicate with each of the six biocontrol agents: Bacillus simplex BA2H3, Pseudomonas brassicacearum PA1G7 and PP1-210F, Pseudomonas fluorescens PA3G8, Pseudomonas lactis PA4C2 and Pseudomonas sp. PA14H7 [14]. One hundred µL of each pathogen culture at 10^7 CFU/mL were spread on a Petri dish in triplicate. After drying, 10 µL of each biocontrol agent culture at 10^9 CFU/mL was spotted on the center of each Petri dish. After drying, the bacteria were incubated at 25 °C for 24 h. Then, the inhibition halo was measured for each pathogen strain-biocontrol strain combination.
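The two working densities above (pathogen lawn at 10^7 CFU/mL, biocontrol spot at 10^9 CFU/mL) imply simple fold-dilutions from a stock culture. A sketch of that arithmetic, where the 1e9 CFU/mL stock density is a hypothetical example and not a value from the protocol:

```python
def dilution_factor(stock_cfu_per_ml: float, target_cfu_per_ml: float) -> float:
    """Fold-dilution needed to bring a stock culture down to a target density."""
    if target_cfu_per_ml > stock_cfu_per_ml:
        raise ValueError("cannot reach a higher density by dilution")
    return stock_cfu_per_ml / target_cfu_per_ml

# A hypothetical 1e9 CFU/mL overnight culture must be diluted 100-fold
# to obtain the 1e7 CFU/mL pathogen inoculum used in the assay.
print(dilution_factor(1e9, 1e7))  # 100.0
```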
Statistical Analyses
In the in vitro antibiosis tests, statistical analyses were performed using Rstudio (RStudio team, 2021.09.1 version, Boston, MA, USA, https://www.rstudio.com/, accessed on 20 November 2022) software for Windows using the packages FSA and agricolae. A Kruskal-Wallis test (alpha 0.05) followed by a Dunn test with Benjamini-Hochberg correction (alpha 0.05) was performed to compare each pathogen.
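The authors ran this pipeline in R (packages FSA and agricolae). Purely as an illustration of the multiple-testing step, the Benjamini-Hochberg adjustment applied to the Dunn-test p-values can be sketched in a few lines of Python (the input p-values below are made up):

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg step-down adjustment of a list of raw p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m - 1, -1, -1):  # walk from the largest p downward
        i = order[rank]
        running_min = min(running_min, pvals[i] * m / (rank + 1))
        adjusted[i] = running_min
    return adjusted

print([round(p, 4) for p in bh_adjust([0.01, 0.04, 0.03, 0.005])])
# [0.02, 0.04, 0.04, 0.02]
```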
Figure 1. Species assignation. The gapA sequence was used for species assignation: (a) 463 isolates collected from 16 potato fields, which were rejected for seed tuber production because of blackleg symptoms, and (b) 206 isolates collected from seven parcels exhibiting blackleg symptoms after plantation of tuber seeds collected from symptomatic fields.
Our sampling approach (one isolate from one symptomatic plant, repeated around 30 times in each experimental field) allowed us to access patterns of pathogen population in each field. Two to six species were identified in each field. Clearly, two patterns emerged among the analyzed fields, those enriched in Pectobacterium (P1, P3, P4 and P14) and the others in Dickeya species, either D. solani or D. dianthicola. Parcels P1, P3 and P4 contain only Pectobacterium isolates, while P14 contains also Dickeya isolates. In these four parcels enriched in Pectobacterium species, the relative abundance of the dominant species, namely P. atrosepticum in P1, P3 and P4 and P. parmentieri in P14, varies from 40% to 63%. In Dickeya-enriched patterns, D. dianthicola was dominant in P2, P7, P9, P12, P15 and P17 and D. solani in P5, P6, P8, P10, P11 and P13. Among the Dickeya-enriched patterns, the dominant species may represent up to 96% of the isolated strains.
In Pectobacterium species, the gapA nucleotide sequence revealed the presence of different alleles that give an insight into the infra-specific diversity of these pathogens. In the collected isolates, we identified two or three alleles in P. atrosepticum, P. carotovorum, P. odoriferum and P. parmentieri, and six in P. brasiliense, eight in P. versatile and 10 in P. polaris. Different alleles of the same species could coexist in the same production field (Table S2): for instance, three gapA alleles were identified in P. parmentieri isolates recovered from field P3, four gapA alleles in P. brasiliense from field P15 and five gapA alleles in P. polaris from field P1, suggesting the coexistence of different strains of the same species in the same field.
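The field patterns discussed above are relative abundances of isolate counts per species. A minimal sketch with a hypothetical field of 30 isolates (only the ~63% dominance figure echoes the text; the species mix is invented):

```python
from collections import Counter

def relative_abundance(species_labels):
    """Per-species fraction of the isolates sampled in one field."""
    counts = Counter(species_labels)
    total = sum(counts.values())
    return {species: n / total for species, n in counts.items()}

# Hypothetical field dominated by P. atrosepticum (19/30 ~ 63%),
# the upper end of the dominance range reported above.
field = ["P. atrosepticum"] * 19 + ["P. polaris"] * 6 + ["P. versatile"] * 5
abundances = relative_abundance(field)
print(round(abundances["P. atrosepticum"], 2))  # 0.63
```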
Species Patterns in Experimental Parcels
Out of the 16 production fields, thirteen (P1 to P13) were used for plantation assays using the harvested asymptomatic tubers. These production fields exhibited either a Dickeya or a Pectobacterium pattern. All seven sampled experimental parcels showed a Dickeya pattern (Figure 2). Seed tubers used for planting these experimental parcels were collected from production fields that also exhibited a Dickeya pattern, suggesting a vertical transmission of pathogens via tuber seeds. By comparing the species pattern (the relative abundance of species) of each pair of production field and planted experimental parcel (P2 and P2rep, P6 and P6rep,
Efficiency of Biocontrol Agents Facing Natural Diversity of Pathogens
We evaluated whether the biocontrol agents B. simplex BA2H3, P. brassicacearum PA1G7 and PP1-210F, P. fluorescens PA3G8, P. lactis PA4C2 and Pseudomonas sp. PA14H7 were active against the pathogens isolated from the sampled potato fields. We collected 738 measures of growth inhibition assays by testing in triplicate the six biocontrol agents against 41 pathogen isolates, all collected in production fields: five isolates of D. dianthicola, five of D. solani, seven of P. atrosepticum, five of P. brasiliense, five of P. parmentieri, four of P. versatile, three of P. carotovorum, three of P. polaris, two of P. odoriferum, one of P. punjabense and one of P. parvum (a list in Table S2).
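The reported totals are internally consistent; a quick arithmetic check of the figures quoted above:

```python
# Full factorial of the assay described above:
biocontrol_agents = 6
pathogen_isolates = 41
replicates = 3
measurements = biocontrol_agents * pathogen_isolates * replicates
print(measurements)  # 738, matching the number of halo measures reported

# Species breakdown of the 41 isolates listed above:
per_species = [5, 5, 7, 5, 5, 4, 3, 3, 2, 1, 1]
print(sum(per_species))  # 41
```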
Considering all pathogen isolates, Pseudomonas sp. PA14H7 and P. brassicacearum PA1G7 and PP1-210F exhibited the highest growth inhibition capacity, while P. fluorescens PA3G8, P. lactis PA4C2 and Bacillus simplex BA2H3 were less efficient (Figure 3).
In the next step, we refined our analysis of the collected 738 halo measurements by considering each pathogen species for each biocontrol agent (Figures 4 and 5). Pseudomonas sp. PA14H7 and P. brassicacearum PA1G7 and PP1-210F were able to antagonize the growth of a broad spectrum of species (Figure 4). In the presence of Pseudomonas sp. PA14H7, the more sensitive pathogen species (classes a or b in statistical analysis) were D. dianthicola, D. solani, P. atrosepticum and P. parvum, while the other Pectobacterium species appeared as less sensitive (classes c, d or e in statistical analysis). In the presence of P. brassicacearum strains PA1G7 and PP1-210F, the more sensitive species were D. dianthicola, D. solani, P. atrosepticum, P. parvum and P. punjabense, whereas P. versatile also appeared as sensitive in the presence of P. brassicacearum strain PA1G7. Other results obtained with biocontrol strains B. simplex BA2H3, P. lactis PA4C2 and P. fluorescens PA3G8 are presented in Figure 5. These strains are less efficient at inhibiting the growth of the 41 tested pathogens.
Data from Figure 3 were analyzed in a different way: for each biocontrol agent, we calculated the mean value and variation for all halos collected with pathogen isolates belonging to the same species. Statistical differences (Kruskal-Wallis test with multiple comparison (Dunn), p-value adjusted with the Benjamini-Hochberg method, alpha 0.05) are indicated by different letters (a to e); the horizontal bar indicates the mean value.
Discussion
Several species-specific tools were developed for identifying and quantifying Dickeya and Pectobacterium pathogen species in plant samples, as well as for characterizing the collected bacterial isolates [11]. In this work, we mainly used a single marker, gapA, for characterizing all the isolates collected from blackleg lesions. These symptomatic tissues were sampled in potato tuber production fields, which were excluded from tuber-seed production because of a high prevalence of blackleg symptoms. A strength of this PCR-sequencing tool is the absence of a priori assumptions about the taxon to be identified [24]. Another interest is the generation of nucleotide sequences that may be stored and used in alignments and relationship trees with reference sequences according to an up-to-date taxonomy. For instance, this approach allowed us to identify a rare taxon, P. punjabense, in our collection of field isolates before the development of a dedicated species-specific tool [5]. The gapA marker is appropriate to uncover emerging or uncharacterized taxa.
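In practice, species assignment relies on sequence alignment and relationship trees against curated reference alleles. Purely as a toy illustration of the nearest-reference idea, a sketch over made-up 10-bp fragments (the sequences and reference set below are not real gapA alleles):

```python
def hamming(a: str, b: str) -> int:
    """Number of mismatched positions between two equal-length fragments."""
    if len(a) != len(b):
        raise ValueError("fragments must be aligned to equal length")
    return sum(x != y for x, y in zip(a, b))

def assign_species(query: str, references: dict) -> str:
    """Assign the query to the species whose reference fragment is closest."""
    return min(references, key=lambda species: hamming(query, references[species]))

# Made-up fragments standing in for reference gapA alleles.
refs = {"D. solani": "ATGGCTAAGG", "P. atrosepticum": "ATGGTTACGA"}
print(assign_species("ATGGCTAAGA", refs))  # D. solani (1 mismatch vs 2)
```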
In this work, we took advantage of the gapA marker for comparing Dickeya and Pectobacterium species patterns of potato tuber production fields. We focused our study on fields that were excluded from tuber-seed production because of a high prevalence of symptomatic plants (>1%). This work provided an understanding of what diversity of pathogens is discarded by the prophylactic strategy that is set up along the tuber-seed certification process. According to the relative abundance of the taxons in the 16 fields we sampled, we identified two major patterns: the Pectobacterium and Dickeya patterns. These patterns are characterized by a predominance of one species that represents at least 40% of the isolated pathogens. In this study, the predominant species were D. solani, D. dianthicola, P. atrosepticum and P. parmentieri. Together with predominant species, companion species were identified in each sampled field. Their numbers varied from one to five, among the following species: D. solani, D. dianthicola, P. atrosepticum and P. parmentieri, and among some less frequently isolated species such as P. brasiliense, P. carotovorum, P. odoriferum, P. parvum, P. polaris, P. punjabense and P. versatile. The predominance of some species could mirror a greater capability to exploit the potato host (hence a greater aggressiveness), and/or a greater competitiveness against other Dickeya and Pectobacterium pathogens and microbiota. Because only 16 fields were sampled, we could not exploit our data for testing statistical hypotheses of association or exclusion between species. We are currently expanding our samplings to investigate species relationships.
Several cases of predominant species and synergistic or antagonistic relationships between Dickeya and Pectobacterium pathogens were reported. These relationships were considered at a species level, when a behavior is shared by all (or almost all) strains of the same species, or at an infra-species level, when only one strain (or a few) exhibits a particular trait. In Norway, a study on 34 seed tuber lots revealed P. atrosepticum as a predominant species [26]. In Finland, a long-term survey (over 14 years) of seed tubers showed a predominance of P. carotovorum (52% of all samples) and a more frequent co-occurrence of Pectobacterium-Pectobacterium species than of Pectobacterium-Dickeya species [27]. In France, a long-term field survey (over 10 years) revealed Pectobacterium species as predominant over Dickeya species in collected lesions and a non-random distribution of the predominant Dickeya and Pectobacterium species across the surveyed fields [25]. In the USA, a field survey revealed D. dianthicola and P. parmentieri as predominant species; virulence assays in parcels showed D. dianthicola as more virulent than P. parmentieri [28]. A synergistic effect of the co-inoculation of the two species resulted in increased disease severity compared to single-species inoculation [28]. In Morocco, sampling revealed P. brasiliense and D. dianthicola as predominant species in distinct production fields [29].
The predominant species reflects local contingencies that could be influenced by the impact of climatic factors: the optimal growth temperature is higher for Dickeya species as compared to Pectobacterium species, as discussed by Degefu et al. (2021) [27]. Relationships between D. solani and D. dianthicola appeared complex: competitive and synergistic behaviors were reported as depending on the colonized plant tissues. In greenhouse assays, D. dianthicola outcompeted D. solani in aerial parts of potato plants, while the two species co-existed in tubers [25]. In the case of D. solani, different properties were highlighted as potentially contributing to settlement in potato agrosystems: a capacity to initiate symptoms with a low inoculum in relation to a particular regulation of the pelED promoter [30]; a capacity to exploit a variety of nutrient resources [31], a capacity to produce a wide spectrum of anti-microbial compounds, improving its competitive fitness against other Pectobacterium and Dickeya species and microbiota [32,33]. Some other reports suggest that the production of antibiotics and their cognate resistance genes could also influence the dynamics of Pectobacterium and Dickeya pathogens by modifying the relative abundance of taxons and clones [34,35].
Facing the diversity of Pectobacterium and Dickeya pathogens, the main strategy deployed in all countries is a prophylactic approach consisting of discarding symptomatic parcels and lots of seed tubers. The identification and introgression of plant alleles for decreasing the sensitivity of S. tuberosum is a promising strategy, still in its infancy [12,13]. Over the past years, intensive efforts allowed the identification of biocontrol agents: phages and bacteria [14][15][16][17]. A challenging issue is identifying biocontrol agents controlling a wide diversity of Pectobacterium and Dickeya species. Cocktails of phages or bacteria have been proposed to solve this issue [14][15][16], but an increased cost of production and patent issues could be expected. Another, non-exclusive approach is to search for a biocontrol agent targeting a wide range of Dickeya and Pectobacterium species. In this work, we challenged six bacterial biocontrol agents against a wide diversity of pathogen isolates that we collected from potato fields. We showed that three of them exhibited a wide spectrum of growth inhibition of pathogens. The biocontrol strain Pseudomonas sp. PA14H7 was very active, including against the predominant species that we identified in our sampling. The mechanism of action of its biocontrol activity is under investigation.
Conclusions
This work highlighted distinctive patterns of Dickeya and Pectobacterium species in the sampled production fields, which were excluded from the certification process of potato tuber seeds. Moreover, the gapA analysis revealed infraspecific diversity at the plot scale and underlined the complex relationships existing between the strains involved in symptom expression. These patterns were maintained in parcels planted with rejected seed tubers, highlighting the importance of a certification scheme for rejecting contaminated tubers. Representatives of the Dickeya and Pectobacterium diversity in the potato fields were challenged by several biocontrol agents, allowing the identification of a couple of agents that were efficient against a wide range of pathogen species. | 2023-02-04T16:14:44.256Z | 2023-02-01T00:00:00.000 | {
"year": 2023,
"sha1": "5a87206ae6df38f611bafe9c3354f64e3bdf13ae",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2607/11/2/372/pdf?version=1675255761",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b72824f747d1a1ec6811bd120c6c29b766ec3dec",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
250964370 | pes2o/s2orc | v3-fos-license | Treatment for complete bilateral duplex kidneys with severe hydronephrosis and ureterectasis of the upper moiety in a child: A case report and literature review
Aim To explore the treatment experience of the duplex kidney. Method A case of the complete bilateral duplex kidney with severe hydronephrosis and ureterectasis in the upper moiety of the kidney diagnosed in the Department of Urology of Kunming Children's Hospital from 2021 to 2022 was retrospectively analyzed and relevant literature was reviewed. Results A 2-month-old baby girl was admitted to the hospital because of hydronephrosis of bilateral kidneys found by prenatal ultrasound for 3 months and fever for 3 days. After being given the relevant examinations, the girl was diagnosed with complete bilateral duplex kidneys with severe hydronephrosis and ureterectasis in the upper moiety, and urinary tract infection. The patient's urinary tract infection was poorly controlled after positive anti-infective therapy, so a bilateral ureterostomy was performed. After the surgery, urinary tract infection was soon cured. A bilateral ureteroureterostomy was performed 13 months later, and the patient recovered after 7 days. Conclusion Cutaneous ureterostomy combined with late ureteroureterostomy for children with complete bilateral duplex kidneys with severe hydronephrosis in the upper moiety and ureter are not only beneficial to caregivers’ nursing after the operation, but also have significance for salvaging renal function.
KEYWORDS: child, duplex kidney, renal malformation, ureterostomy, ureteroureterostomy

Background
Duplex kidney is a common disease in pediatric urology. Because the malformations in affected children and the available treatment methods are diverse and complicated, its management remains controversial (1). We report a case of severe hydronephrosis and ureterectasis in the upper moiety of bilateral duplex kidneys, with an ectopic ureter on the left side and a ureterocele on the right side, and summarize the characteristics of the disease to improve pediatricians' understanding of it.
Clinical material Patient information
The patient's parents were informed about and consented to this case report. A 2-month-old baby girl was admitted to the hospital because bilateral hydronephrosis had been found by prenatal ultrasound 3 months earlier and she had had a fever for 3 days. The child was found to have bilateral hydronephrosis in her mother's late pregnancy, and ultrasound was not regularly performed after birth. The child had a fever for 3 days without an obvious cause, and the highest temperature was 39°C. After treatment in a local hospital, the child still had a recurrent high fever. After being referred to Kunming Children's Hospital, the child was given the relevant examinations.
Physical examination
No positive signs were found.
Imaging examination
Ultrasound of the urinary system showed bilateral duplex kidneys (Figure 1). Severe hydronephrosis and ureterectasis were found in the bilateral upper moiety. Intravenous pyelography (IVP) showed a ureterocele in the bladder (Figure 2). Computerized tomography (CT) revealed severe hydronephrosis and ureterectasis of the bilateral upper moiety, an ectopic left ureteral opening, and a right ureterocele (Figure 3).
After discussion, cystoscopy and bilateral cutaneous ureterostomy were performed (Figure 4). On cystoscopy, an ectopic insertion of the left upper-moiety ureter and a ureterocele of the right upper-moiety ureter were found. The patient's urinary tract infection (UTI) was cured soon after bilateral cutaneous ureterostomy, and she was discharged 5 days later (Figure 5).
No UTI or fever occurred after discharge. The results of routine urine examination were normal, and ultrasound indicated that the bilateral hydronephrosis was obviously relieved. Thirteen months after the ureterostomy, the child returned to the hospital for further surgery. IVP showed no hydronephrosis in the bilateral duplex kidneys (Figure 6). Voiding cystourethrography (VCU) showed mild vesicoureteral reflux (VUR) of the right lower-moiety ureter (Figure 7). The patient recovered and was discharged on the 7th day after bilateral ureteroureterostomy. The D-J tubes were removed 1 month after the operation (Figure 8).
[Figure legend: Preoperative ultrasound showed severe hydronephrosis and ureterectasis of both upper moieties.]
Discussion
Prenatal ultrasound is necessary for both the pregnant woman and the fetus. With the continuous development and maturation of ultrasound technology, it is helpful not only for diagnosing urinary tract obstruction during pregnancy but also for the prenatal anatomical assessment of most congenital anomalies of the kidney and urinary tract during the second or third trimester (2,3). The incidence of duplex kidney malformation is about 0.8%, and it is more common in women (4,5). Among children diagnosed with hydronephrosis before birth, 5%-7% of cases are due to duplex kidney (6). The duplex kidney can be divided into an incomplete type (Y type) and a complete type. The incidence of complete ureteral duplication is about 0.2% (7). According to the Weigert-Meyer rule, the upper pole is typically ectopic and therefore dysplastic due to obstruction, whereas the lower pole is associated with vesicoureteral reflux (8-10). Fetal interventions include ultrasound-guided bladder puncture and drainage and transurethral incision of the ureterocele under fetal cystoscopy. Although the decompression effect is good, there are risks of premature rupture of membranes, premature delivery, infection, bleeding, and fetal death, which need to be carefully evaluated and considered (11,12). Nearly 60% of patients with duplex kidney are asymptomatic and need no treatment after birth. Patients with an ectopic ureteral orifice, a ureterocele, hydronephrosis, calculi, urinary tract infection, or a non-functional moiety need surgical treatment (13). Treatment of the duplex kidney needs to be individualized; heminephroureterectomy is the earliest technique used in duplex kidney therapy. If the duplex kidney function is good and there is no dysplasia, the upper moiety can be preserved.
[Figure legends: Preoperative IVP showed a ureterocele in the bladder (↑). Preoperative CT showed severe hydronephrosis and ureterectasis of both upper moieties, an ectopic left ureteral orifice, and a right ureterocele.]
Common surgical methods include ureteral reimplantation, ureteroureterostomy, pyeloureteroplasty, etc. (14). The objectives of treatment are to prevent urinary tract infection and renal damage and to achieve urinary continence (15).
The surgical indication for heminephroureterectomy is upper-moiety renal function <10% due to recurrent urinary tract infection with vesicoureteral reflux or severe obstruction (16,17), and most scholars agree that upper-moiety resection should be performed in patients with recurrent urinary tract infection and ipsilateral abdominal pain (18). However, it is controversial whether the operation should be performed in patients with renal function <10% and no relevant clinical symptoms. Proponents believe that surgical removal of non-functional or dysplastic renal tissue and ureters can prevent the long-term occurrence of hypertension or pyelonephritis (1,19). Opponents counter that the likelihood of long-term hypertension and pyelonephritis arising from a non-functional moiety is low, and that the operation can easily compromise the blood supply and function of the lower moiety, leading to loss of the kidney (15,20,21).
Ureteroureterostomy is usually performed at the level of the iliac vessels to avoid dissection of the colon, interference with the nerves and vessels of the bladder, and extensive mobilization of the ureter (22). The procedure is not only easier than common sheath ureteral reimplantation but also protects bladder function from damage. Anastomotic fistula, anastomotic stenosis, and ureteral stump complications are the main postoperative complications of this procedure, and their overall incidence is similar to that of common sheath ureteral reimplantation (23).
In duplicated ureters with VUR, without obstruction, and with preserved function of both renal moieties, the gold-standard surgical intervention is ureteral reimplantation. However, the incidence of surgical complications is as high as 10%-12.5%, and bladder function will inevitably be disturbed. Studies showed that about 10% of patients require a second surgery (21,24).
[Figure legends: The patient's situation after bilateral cutaneous ureterostomy. IVP after ureterostomy showed no hydronephrosis in the bilateral duplex kidneys.]
For patients with severe hydronephrosis combined with severe urinary tract infection and sepsis, timely removal of the obstruction and urine drainage are beneficial to recovery.
[Figure legends: VCU before ureteroureterostomy showed mild vesicoureteral reflux (VUR) of the right lower-moiety ureter. Postoperative situation of the D-J tubes.]
[Wu et al., Frontiers in Surgery, doi: 10.3389/fsurg.2022.1019161]
… transurethral incision of the ureterocele (TUI): TUI is minimally invasive, cosmetic, and requires no external drainage. However, this method carries a risk of causing or aggravating VUR on the affected side, and in patients with large cysts the cyst wall may prolapse into the urethra postoperatively. ③ Cutaneous ureterostomy: this operation has a definite curative effect, requires no external drainage tube after the operation, and makes postoperative nursing of infants relatively convenient. In this case, considering that external renal drainage tubes would have had to be placed bilaterally after pyelostomy, and that unilateral TUI would have no definite value for relieving the obstruction on the other side, bilateral cutaneous ureterostomy was performed. The patient's hydronephrosis was significantly relieved after cutaneous ureterostomy, and IVP showed that the development of both kidneys was normal after 13 months, indicating that timely relief of the obstruction was significant for salvaging renal function.
At present, it is still controversial whether ureteral reimplantation should be performed for a duplex kidney with lower-moiety VUR. Because the reflux was low grade, ureteral reimplantation was not performed in our case, for the following reasons. On the one hand, mild reflux may resolve spontaneously. On the other hand, if the patient's VUR worsens or recurrent urinary tract infection occurs, only a single ureteral reimplantation will be required in the future. Compared with common sheath ureteral reimplantation, this requires a smaller bladder capacity, ureter diameter, and bladder mucosal tunnel length, which not only reduces surgical trauma and bladder disturbance but also yields a higher surgical success rate (24).
When indicated, the type of surgery for children with a complicated duplex renal anomaly is based on renal moiety function and lower-tract anatomy, and sequential treatment is meaningful for reducing bladder disturbance, reducing surgical trauma, and improving the success rate of surgery.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Author contributions
CW and FJ performed the surgery and drafted the original manuscript. HZ and ZY collected data and participated in amending the manuscript. BY and LL designed the operation scheme and amended the manuscript. All authors contributed to the article and approved the submitted version.
Funding
This study received support from the Yunnan Province Clinical Research Center for Children's Health and Disease. | 2022-07-23T15:06:13.133Z | 2022-11-02T00:00:00.000 | {
"year": 2022,
"sha1": "b22da5d95c51d0d0954d501a39bbafd78ef481aa",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "36f5c35a54ceaf821b165579736c67ee2e1be01b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
144891285 | pes2o/s2orc | v3-fos-license | Study of latent inhibition at high-level creative personality: the link between creativity and psychopathology
This paper tries to find a proper answer by approaching the link between creativity and psychopathology in terms of cognitive connections and personality traits common to creative and mentally disturbed individuals. To verify our hypothesis, we conducted a latent inhibition task and applied several questionnaires. The results indicated a significant relationship between a variety of creativity indicators and the low latent inhibition scores that have previously been related to the presence of mental illness. We used IQ as a mediating variable between creativity and latent inhibition. It seems that creativity can also be associated with high scores on the clinical scales. © 2011 Published by Elsevier Ltd. Selection and peer-review under responsibility of PSIWORLD 2011
Introduction
The stereotype of artists suffering from psychiatric disorders suggests that mental disorders and creative imagination have common roots. The specialty literature abounds both in theories linking creativity with mental disorders and in detractors of these theories (Chan, 2001). The general theoretical framework in which our research is grounded posits that individuals with a high degree of creativity have thought and personality structures similar to those of accentuated personalities (Andreasen, 1987).
Purpose of study
After studying the available data, we observed that creative individuals possess particular ways of processing information (hyperactive imagination, latent inhibition) leading to such a strong
Objectives
This paper tries to review all the findings for a better understanding of the social context that sustains this controversy.
By transferring these conclusions to the context of an organization active in advertising, we expect individuals with a high creativity level to have a thinking style (information-processing strategies) similar to that of individuals who obtain high scores on clinical scales, implying a higher predisposition to mental disorders compared to less creative individuals. We also assume IQ to be a mediating variable between creativity and latent inhibition (insofar as low/high levels of latent inhibition are compared to moderate/high IQ levels and creative outputs). At the same time, we believe there is a positive correlation between creativity and high scores on clinical scales.
Participants
To verify these hypotheses, we conducted our investigation on a sample of 43 subjects aged 20 to 35, belonging to middle- and upper-class social environments with average and high incomes, and homogeneous with respect to gender. Of the 43 subjects, 22 work in the creative departments of advertising agencies and the others in departments that do not strictly require creative abilities. The creative sample consisted of music composers, actors, photographers, visual artists, copywriters, interpreters, art directors, and audio-video producers.
Procedure
For the latent inhibition task, the subjects participated in a two-phase experiment. In the first part, the subjects in the pre-exposure condition were shown an audio-video version of the latent inhibition task. They listened to a series of 30 meaningless syllables (the masking material), presented 5 times with no break marking the end or beginning of each repetition cycle. A white noise (the target stimulus) was randomly superimposed 25 times throughout the recording. The subjects received the masking task of counting how many times the syllable "bim" was repeated.
In phase 2 (the actual test phase), the recording was replayed while a series of 25 yellow discs appeared on the screen. The appearance of the yellow discs coincided with the occurrences of the target stimulus (white noise) heard in the recording of meaningless syllables. The subjects were asked to determine which auditory stimulus signaled the appearance of the yellow discs. When a subject correctly identified the appearance of the yellow discs on 3 consecutive attempts, the recording was stopped. In the case of a wrong answer, the presentation continued until the subject discovered the rule. The subject's score for this task (attempts needed to identify the rule) is given by the number of yellow discs that had appeared on the screen by the time of the correct answer.
The subjects in the non-pre-exposure condition viewed the same recording, except that the target stimulus (white noise) was absent from the pre-exposure phase of the task.
Instruments
In addition to the tests described above, several other variables were assessed: creativity, intelligence level, and the presence of accentuated traits in the creative subjects. Applying the Torrance Tests of Creative Thinking (figural and verbal) enabled us to separate creative from non-creative individuals. An intelligence test and a test for accentuated personalities were then applied to all participants. To assess the intelligence level we used Raven's Progressive Matrices, while for investigating pathological tendencies, covering both neurotic and somatic psychopathological disorders, we used the DA 307 Questionnaire. The factors captured by this assessment are: demonstrativity, hyper-exactness, hyper-perseverance, lack of control, hyperthymia, dysthymia, lability, exaltation, emotivity, anxiety, neuroticism, dependence, and desirability.
Data analysis
To test the first hypothesis, the Pearson linear correlation coefficient was used to show how the values of the two variables vary in relation to one another (latent inhibition and creativity, and latent inhibition and scores on the clinical scales, respectively). Subsequently, low/high inhibition levels were compared to moderate/high IQ levels and to creative outputs in order to test the second hypothesis. Multiple linear regression was used to show how much of the variation in creativity is explained by the simultaneous combination of the two associated variables, latent inhibition and the intelligence coefficient.
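For illustration, the two analyses above (a Pearson correlation for hypothesis 1 and a multiple regression of creativity on latent inhibition and IQ for hypothesis 2) can be sketched in Python. The scores below are synthetic stand-ins, not the study's data, and the variable names are ours:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 43  # sample size used in the study

# Synthetic, illustrative scores (not the study's measurements)
iq = rng.normal(110, 10, n)
latent_inhibition = rng.normal(12, 3, n)
# Creativity depends negatively on latent inhibition and positively on IQ
creativity = 50 - 1.5 * latent_inhibition + 0.4 * iq + rng.normal(0, 2, n)

# Hypothesis 1: Pearson linear correlation (latent inhibition vs. creativity)
r, p = stats.pearsonr(latent_inhibition, creativity)

# Hypothesis 2: multiple linear regression -- variance in creativity explained
# jointly by latent inhibition and IQ (the paper reports this as R^2)
X = np.column_stack([np.ones(n), latent_inhibition, iq])
coef, *_ = np.linalg.lstsq(X, creativity, rcond=None)
residuals = creativity - X @ coef
r2 = 1 - residuals.var() / creativity.var()
```

Here `r` plays the role of the reported correlation coefficients and `r2` of the reported 72% variance figure; with real data, the DA 307 clinical-scale scores would enter the correlation step the same way.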
Results
The Pearson correlation coefficient reveals a significant negative correlation between latent inhibition and the variable associated with high scores on the accentuated personalities questionnaire (r = -0.76, p < 0.01), which confirms the first hypothesis and justifies the supposition that subjects with low latent inhibition scores rank high on the scales assessing pathological tendencies.
The correlation matrix likewise shows significant negative correlations between latent inhibition and both figural and verbal (word-based) creativity. Thus, we may conclude that the first general hypothesis is confirmed, which implies the existence of a way of processing information common to creative individuals and to individuals obtaining high scores on clinical scales. This finding could be explained by the fact that individuals with a low latent inhibition level are more predisposed to mental disorders, being unable to filter out irrelevant stimuli that frequently intrude on focused thinking processes. At the same time, a low inhibition level is associated with exceptional cognitive flexibility, which fosters creative behaviour.
As the regression tables show, this hypothesis too was verified for each form of creativity. The R² value shows that 72% of the variation in figural creativity is determined by the latent inhibition and intelligence coefficient variables and by the interaction between them. The individual regression coefficients are significant, which underlines a significant relationship between the predictor and criterion variables. Confirmation of this hypothesis shows that the ability to exploit unusual ideas (creativity) is also sustained by general intelligence; as Simonton said, a minimal level of intelligence is necessary for exceptional creativity (Simonton, 2000). Over time, psychology and related fields have witnessed the shaping of a certain personality profile typical of creative geniuses. Eysenck (1995) states that independence, neuroticism, and nonconformism are associated with psychoticism, while at the same time representing traits that sustain innovative activity. With regard to latent inhibition, Peterson and Carson (2000) say that this less restrictive way of processing information is associated with openness to experience and creativity. Other authors believe that a high level of self-sufficiency is a characteristic of the most creative individuals (Barron, 1963). To clarify some aspects of the personality of highly creative individuals, and especially the prevalence of personality disorders within this sample, we elaborated the third general hypothesis, according to which individuals with high scores on creativity scales also rank high on clinical scales. To verify whether there is any consistency in the reciprocal variation of the two variables' values, we applied Pearson's linear correlation coefficient. A significant positive correlation was observed between figural creativity and the variable we defined as accentuated personality (r = 0.76, p < 0.001).
Significant results were also observed for the second form of creativity, the verbal type, and the presence of accentuated personality traits. After applying Pearson's correlation coefficient to each scale of the DA 307 Questionnaire and the two forms of creativity, the results confirmed that individuals with verbal creativity abilities are emotive, sensitive, impressionable, and easily affected, especially by unhappy events, yet not particularly anxious and not necessarily demonstrative, as we might have expected. The subjects scoring high on figural creativity are characterized by uncensored behaviours, yet without ranking high on the neuroticism dimension.
Discussion
Such data add to the conclusions of empirical research conducted so far on the personality profile of creative individuals, weakening a whole range of speculations made over time about the link between creativity and mental health. The results and analyses captured in this study indicate a substantial relationship between a variety of creativity indicators and low latent inhibition scores, previously associated with underlying psychotic states. There is also substantial research evidence to suggest that a high intelligence coefficient has a moderate positive effect on the expression of latent inhibition, functioning either as a disadvantage, a deficit hindering selective attention, or as a facilitating factor for creativity. This study would be incomplete, however, if we failed to acknowledge the issue of the genius within the context of the postmodern condition. As postmodernism is characterized by pluralism, identity and self fragmentation, decentralized control, and the dissemination of information (Derrida, 1976), by skepticism and double encoding (Jencks, 1989), by globalization (Jameson, 1992), and by the emergence of simulacra and the role of the media in transforming and recreating the real (Baudrillard, 1988), nonconformism, hybridity, and identity conflicts enjoy an increasingly wide acceptance, which leads to a deeper understanding of geniuses and their particularities. For future research, it would be desirable to develop a procedure employing multiple methods to assess latent inhibition, with a larger sample size and different creative groups. Future projects would also benefit from introducing additional potentially moderating factors, such as memorizing ability, hyperactive imagination, and personality dimensions that may affect the relationship between latent inhibition and creativity. Furthermore, it would be interesting to look at the way such | 2019-05-05T13:07:46.526Z | 2012-01-01T00:00:00.000 | {
"year": 2012,
"sha1": "43502c6d2c35d1ffe90bb14f1b69a53fcad34e42",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.sbspro.2012.01.142",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "00b99bb63c40d52777492810dd9087a3e7589e19",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
235801911 | pes2o/s2orc | v3-fos-license | Inhibition of mTORC1 through ATF4-induced REDD1 and Sestrin2 expression by Metformin
Background Although the major anticancer effect of metformin involves AMPK-dependent or AMPK-independent mTORC1 inhibition, the mechanisms of action are still not fully understood. Methods To investigate the molecular mechanisms underlying the effect of metformin on the mTORC1 inhibition, MTT assay, RT-PCR, and western blot analysis were performed. Results Metformin induced the expression of ATF4, REDD1, and Sestrin2 concomitant with its inhibition of mTORC1 activity. Treatment with REDD1 or Sestrin2 siRNA reversed the mTORC1 inhibition induced by metformin, indicating that REDD1 and Sestrin2 are important for the inhibition of mTORC1 triggered by metformin treatment. Moreover, REDD1- and Sestrin2-mediated mTORC1 inhibition in response to metformin was independent of AMPK activation. Additionally, lapatinib enhances cell sensitivity to metformin, and knockdown of REDD1 and Sestrin2 decreased cell sensitivity to metformin and lapatinib. Conclusions ATF4-induced REDD1 and Sestrin2 expression in response to metformin plays an important role in mTORC1 inhibition independent of AMPK activation, and this signalling pathway could have therapeutic value.
Background
Metformin (1,1-dimethylbiguanide hydrochloride) belongs to the biguanide class of drugs and is a widely used drug administered orally to treat type 2 diabetes mellitus [1]. Epidemiological studies have demonstrated that metformin use is associated with decreased cancer incidence and mortality in patients with diabetes [2,3]. Furthermore, accumulating evidence suggests that metformin exerts antitumour effects in many cancers [4][5][6]. However, the underlying molecular mechanism by which metformin reduces tumour incidence and inhibits cancer cell growth in vitro and in vivo has not been clearly elucidated. The well-accepted mechanism of metformin action is inhibition of mitochondrial respiratory complex I and activation of AMP-activated protein kinase (AMPK) in response to energy depletion [7]. AMPK is a heterodimeric protein complex that plays an essential role in sensing energy and suppressing cell growth under lowenergy conditions [8]. Activated AMPK phosphorylates multiple downstream targets to maintain cellular energy homeostasis [8,9].
One of the central downstream targets inhibited by AMPK is mechanistic target of rapamycin complex 1 (mTORC1), a serine/threonine kinase that has a critical role in controlling cell growth and cellular metabolism by integrating various environmental signals, such as growth factors, amino acids, and glucose [10,11]. mTORC1 directly phosphorylates downstream substrates, including ribosomal S6 kinase 1 (S6K1) and eukaryotic initiation factor 4E (eIF4E)-binding protein 1 (4E-BP1), to regulate protein synthesis and promote cell proliferation [12]. mTORC1 is tightly regulated by multiple upstream pathways. The response of mTORC1 signalling to growth factors is mediated by the small GTPase Ras homolog enriched in brain (Rheb), which is negatively regulated by the tuberous sclerosis complex (TSC1/2) proteins [13-15]. When the PI3K/Akt pathway is activated by growth factors, Akt phosphorylates TSC2 and disrupts the TSC1/2 complex [16,17]. Energy levels signal to mTORC1 through AMPK by two mechanisms [18]. First, AMPK directly phosphorylates TSC2 on S1387 to activate TSC2 and promote inhibition of mTORC1 through the Rheb axis [19,20]. Second, AMPK phosphorylates raptor on serines 722 and 792 to directly inhibit mTORC1 activity [21]. Some studies have reported that metformin inhibits the mTORC1 signalling pathway independent of AMPK activation [22-24]. However, the molecular mechanisms involved in AMPK-independent mTORC1 inhibition by metformin have not been fully elucidated.
In the present study, we investigated the molecular mechanism(s) by which metformin induces mTORC1 inhibition in non-small cell lung cancer (NSCLC) cells. We found that inhibition of mTORC1 in response to metformin requires ATF4 and that ATF4-induced upregulation of REDD1 and Sestrin2 is implicated in this effect. REDD1 and Sestrin2 are necessary for mTORC1 inhibition by metformin treatment, and the response occurs through an ATF4-dependent mechanism in NSCLC cells. In conclusion, ATF4-induced REDD1 and Sestrin2 expression triggered by metformin plays an important role in mTORC1 inhibition independent of AMPK activation.
Cell viability assay
Cell viability was assessed by measuring the mitochondrial conversion of MTT. The proportion of converted MTT was calculated by measuring the absorbance at 570 nm. The results are expressed as a percentage of MTT reduction, with the absorbance of the control cells defined as 100%. The MTT experiments were repeated three times.
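The normalization described here is simple arithmetic; a minimal sketch with illustrative absorbance readings (placeholder values, not measurements from this study):

```python
# MTT viability calculation: treated wells normalized to untreated
# control wells, which are defined as 100% viability.
control_a570 = [0.82, 0.79, 0.85]  # absorbance at 570 nm, control wells
treated_a570 = [0.61, 0.58, 0.63]  # absorbance at 570 nm, treated wells

mean_control = sum(control_a570) / len(control_a570)
mean_treated = sum(treated_a570) / len(treated_a570)

viability_pct = mean_treated / mean_control * 100  # control = 100%
```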
RNA extraction and reverse transcription polymerase chain reaction (RT-PCR)

RNA was isolated from H1299 cells using TRIzol Reagent according to the manufacturer's instructions (Invitrogen; Thermo Fisher Scientific). cDNA primed with oligo dT was prepared from 2 μg total RNA using M-MLV Reverse Transcriptase (Invitrogen; Thermo Fisher Scientific).
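The figure legends state that the real-time PCR results were analysed by the 2^(−ΔΔCt) method with β-actin as the internal control. That relative-quantification step can be sketched as follows, with placeholder Ct values (not the study's data):

```python
# 2^(-ddCt) relative quantification: target gene Ct values are first
# normalized to the internal control (beta-actin), then the treated
# sample is compared to the untreated control sample.
ct_target_treated, ct_actin_treated = 22.1, 17.0   # illustrative Ct values
ct_target_control, ct_actin_control = 24.6, 17.1

dct_treated = ct_target_treated - ct_actin_treated  # dCt, treated sample
dct_control = ct_target_control - ct_actin_control  # dCt, control sample
ddct = dct_treated - dct_control                    # ddCt
fold_change = 2 ** (-ddct)                          # fold change vs. control
```

A fold change above 1 indicates upregulation relative to the control, as reported for REDD1 and Sestrin2 after metformin treatment.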
Statistical analysis
Data are expressed as the mean ± standard deviation (SD) of three independent experiments. Statistical analysis was performed using one-way analysis of variance followed by Tukey's post hoc test with the GraphPad Prism software (Version 5.0; GraphPad Software Inc., San Diego, CA, USA). P < 0.05, P < 0.01 and P < 0.001 were considered to indicate statistically significant results.
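The paper runs this pipeline in GraphPad Prism; an equivalent sketch in Python (one-way ANOVA followed by Tukey's post hoc test), with made-up triplicate values rather than the study's measurements, is:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Illustrative viability values (% of control) for three independent
# experiments per condition -- placeholders, not the study's data
control = rng.normal(100, 3, 3)
metformin = rng.normal(75, 3, 3)
combo = rng.normal(50, 3, 3)  # e.g. metformin + lapatinib

# One-way analysis of variance across the three groups
f_stat, p_anova = stats.f_oneway(control, metformin, combo)

# Tukey's HSD post hoc test for pairwise comparisons
# (scipy.stats.tukey_hsd, available in recent SciPy releases)
res = stats.tukey_hsd(control, metformin, combo)
p_control_vs_combo = res.pvalue[0, 2]
```

The ANOVA p-value tests whether any group differs; the Tukey matrix then gives the pairwise p-values that the paper's significance thresholds (P < 0.05, 0.01, 0.001) are applied to.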
Metformin induces mTORC1 inhibition through AMPK activation
We first investigated the impact of metformin on mTORC1 activity in NSCLC cells. H1299 cells were treated with metformin at the indicated concentrations for 24 h. As shown in Fig. 1A, metformin inhibited mTORC1 activity, as shown by the decrease in S6K phosphorylation. Phosphorylation of 4E-BP1 was decreased by metformin, as evidenced by a shift to faster-migrating species [25]. Phenformin, a metformin analogue, also inhibited mTORC1 activity, as assessed by reduced phosphorylation of S6K1 and 4E-BP1. It has been reported that metformin requires AMPK to inhibit mTORC1 [26]. As expected, metformin and phenformin both induced AMPK activation, as evaluated by the activating phosphorylation of Thr172 in AMPKα and Ser79 in the AMPK substrate acetyl-CoA carboxylase (ACC) (Fig. 1A). Next, we explored the effect of the absence of AMPK on metformin-induced mTORC1 inhibition. AMPKα siRNA abrogated AMPKα expression and prevented the ACC phosphorylation induced by metformin treatment (Fig. 1B). The metformin-induced decrease in phosphorylated S6K was restored by knockdown of AMPKα (Fig. 1B). These data suggest that AMPK activation contributes to mTORC1 inhibition in response to metformin.
Metformin induces mTORC1 inhibition through ATF4
It has been reported that metformin can also inhibit mTORC1 through an AMPK-independent pathway [22-24]. Since activation of the PERK-eIF2α-ATF4 axis was triggered by metformin [27], we investigated whether ATF4 is involved in metformin-induced mTORC1 inhibition. Metformin induced ATF4 protein expression but did not affect the induction of ATF4 mRNA (Fig. 2A-D). The metformin-induced decrease in phosphorylated S6K was restored by knockdown of ATF4 (Fig. 2E and F), suggesting that ATF4 is necessary for metformin-mediated inhibition of mTORC1.
REDD1 and Sestrin2 expression in the presence of metformin is regulated by ATF4
We previously reported that ATF4 facilitates the transcription of the REDD1 gene [28]. Thus, we investigated whether REDD1 expression is upregulated by metformin-induced ATF4 activation. Metformin and phenformin both induced REDD1 protein and mRNA expression in a dose-dependent manner (Fig. 3A and B). ATF4 siRNA almost completely blocked the upregulation of REDD1 in the presence of metformin (Fig. 3C and D). Sestrins are stress-inducible proteins that regulate metabolic homeostasis [29]. We investigated whether Sestrins are upregulated by metformin or phenformin. As shown in Fig. 3E and F, Sestrin2 protein and mRNA levels were upregulated under metformin or phenformin treatment. However, metformin and phenformin had no impact on the gene or protein expression of Sestrin1 or Sestrin3. We further investigated whether ATF4 is responsible for the upregulation of Sestrin2 expression in response to metformin. H1299 cells were transfected with ATF4 siRNA and treated with metformin. ATF4 siRNA blocked the upregulation of Sestrin2 in response to metformin (Fig. 3C and D). These data suggest that ATF4 activation is important for the induction of REDD1 and Sestrin2 expression by metformin treatment.
AMPK and ATF4 do not affect each other's expression in the presence of metformin
We investigated whether AMPK and ATF4 affect each other's expression in the presence of metformin. We first investigated the protein expression of ATF4 and its downstream targets REDD1 and Sestrin2 after treatment with metformin in AMPK knockdown cells. The downregulation of AMPKα did not alter the induction of ATF4, REDD1, or Sestrin2 expression by metformin (Fig. 4A). Next, we investigated AMPK activation by metformin in cells with knockdown of ATF4 and its downstream targets REDD1 and Sestrin2. H1299 cells were transfected with ATF4, REDD1, and Sestrin2 siRNAs and then treated with metformin. Metformin-induced AMPKα and ACC phosphorylation was not changed by siRNAs against ATF4, REDD1, or Sestrin2 (Fig. 4B-D). These data suggest that AMPK and ATF4 do not affect each other's expression in response to metformin.
REDD1 and Sestrin2 expression induced by metformin is involved in mTORC1 inhibition
We investigated whether REDD1 and/or Sestrin2 induction by metformin suppresses mTORC1 activity. As shown in Fig. 5A-C, the decreased phosphorylation of S6K by metformin was recovered in cells treated with REDD1 siRNA or Sestrin2 siRNA. Interestingly, compared with the control siRNA-treated cells, the metformin-induced Sestrin2 expression was higher in the REDD1 siRNA-treated cells (Fig. 5A and D), and the metformin-induced REDD1 expression was higher in the Sestrin2 siRNA-treated cells (Fig. 5B and E). These data suggest that REDD1 and Sestrin2 are important for the inhibition of mTORC1 triggered by metformin treatment.
[Figure 5 legend (partial): (a-c) p-S6K protein expression was quantified using ImageJ, and the fold change relative to the control after normalization to the respective S6K bands was plotted as a histogram (n = 3; ***P < 0.001; ns, not significant). (d, e) The indicated mRNA levels were estimated by real-time PCR and analysed by the 2^(-ΔΔCt) method using β-actin as the internal control; gene transcription is presented as fold change relative to the control sample (n = 3; *p < 0.05; **p < 0.01; ***p < 0.001; ns, not significant). CTL: control; SESN2: Sestrin2.]

Lapatinib enhances cell sensitivity to metformin, and knockdown of REDD1 and Sestrin2 decreases cell sensitivity to metformin and lapatinib
Next, we investigated the effect of metformin on H1299 cell viability. A less than 25% decrease in cell viability was observed in H1299 cells treated with 10 mM metformin for 24 h (Fig. 6A). The combination of a kinase inhibitor and a biguanide has been reported to have increased antitumour efficacy [30], and we investigated whether lapatinib, a dual EGFR and HER2 kinase inhibitor, enhances cell sensitivity to metformin. Interestingly, lapatinib potentiated metformin's effect on ATF4, REDD1, and Sestrin2 expression and on AMPK phosphorylation (Fig. 6B). Lapatinib enhanced the metformin-induced inhibition of S6K phosphorylation and the inhibitory effect of metformin on cell viability (Fig. 6B and C). To investigate whether REDD1 and Sestrin2 are involved in cell sensitivity to lapatinib and metformin, we knocked down REDD1 and Sestrin2 in H1299 cells, followed by lapatinib and metformin treatment. siRNA silencing of both REDD1 and Sestrin2 abrogated REDD1 and Sestrin2 expression but did not affect AMPKα phosphorylation induced by metformin treatment (Fig. 6D). Treatment with REDD1 and Sestrin2 siRNA significantly increased viability in cells treated with metformin and lapatinib, suggesting that expression of both REDD1 and Sestrin2 is involved in cell sensitivity to metformin and lapatinib (Fig. 6E). Next, we knocked down ATF4 in H1299 cells, followed by lapatinib and metformin treatment, to investigate whether ATF4 is involved in cell sensitivity to lapatinib and metformin. siRNA-mediated knockdown of ATF4 abrogated the expression of ATF4 and its downstream targets REDD1 and Sestrin2 induced by metformin (Fig. 6F). However, ATF4 siRNA did not affect metformin-induced AMPKα phosphorylation (Fig. 6F). ATF4 siRNA significantly increased viability in cells treated with metformin and lapatinib (Fig. 6G). These results suggest that ATF4-mediated REDD1 and Sestrin2 expression is involved in cell sensitivity to metformin and lapatinib.
Discussion
In the present study, we provide evidence that metformin inhibited mTORC1 signalling independently of AMPK. We found that metformin inhibited mTORC1 signalling via ATF4-induced REDD1 and Sestrin2 expression. Furthermore, we demonstrated that treatment with a combination of metformin and lapatinib significantly reduced the viability of NSCLC cells. siRNAs targeting REDD1 and Sestrin2 significantly increased viability in cells treated with metformin and lapatinib.
Our study advances the current understanding of the molecular mechanism used by metformin to regulate mTORC1 pathways as a cancer therapy.
Metformin treatment induced AMPK activation and mTORC1 inhibition (Fig. 1A). It has been reported that metformin requires AMPK to inhibit mTORC1 [26]. In this study, we found that metformin-induced AMPK activation contributed to mTORC1 inhibition. However, downregulation of AMPK did not fully recover metformin-induced mTORC1 inhibition (Fig. 1B). This result suggests that there are additional mechanisms involved in the metformin-mediated inhibition of mTORC1.
REDD1 is one of the best characterized suppressors of mTORC1. REDD1 promotes the association of PP2A with PKB/Akt, ultimately leading to TSC2 activation and mTORC1 inhibition [31]. Sestrin2 has also been reported to inhibit mTORC1 signalling via activation of AMPK [32,33]. More recently, Sestrin2 was proposed to inhibit mTORC1 through modulation of GATOR complexes [34,35]. In this study, we found that mTORC1 was partially suppressed by metformin-induced expression of REDD1 or Sestrin2 (Fig. 5A and B). These data suggest that induction of either REDD1 or Sestrin2 alone by metformin cannot completely inhibit mTORC1, and that REDD1 and Sestrin2 act together to inhibit mTORC1 following metformin treatment. Interestingly, when compared with control siRNA-treated cells, metformin-induced Sestrin2 expression was more elevated in REDD1 siRNA-treated cells, and metformin-induced REDD1 expression was more elevated in Sestrin2 siRNA-treated cells. Further research is needed to confirm these findings.
It has been reported that metformin increases REDD1 expression in a p53-dependent manner [22]. Because the H1299 cell line used in this study lacks p53, metformin-induced REDD1 expression may be p53-independent. It has been reported that activation of the PERK-eIF2α-ATF4 axis is triggered by metformin [27], and that upregulation of REDD1 and Sestrin2 by leucine deprivation is mediated by ATF4 [36]. We found that REDD1 and Sestrin2 induced by metformin are mediated by ATF4. Furthermore, we showed that siRNA targeted against ATF4, REDD1, and Sestrin2 did not change the AMPK activation induced by metformin. Additionally, AMPKα siRNA did not change ATF4-induced REDD1 and Sestrin2 expression. These data suggest that ATF4-induced mTORC1 inhibition by metformin occurs independently of AMPK activation.
The combination of a kinase inhibitor and a biguanide has been reported to have increased antitumour efficacy [30]. Lapatinib, a dual EGFR and HER2 kinase inhibitor, enhanced metformin's effect on ATF4, REDD1, and Sestrin2 expression and the inhibitory effects of metformin on the viability of H1299 cells. In cells with reduced viability due to combined metformin/lapatinib treatment, treatment with ATF4 siRNA or REDD1/Sestrin2 siRNA significantly increased viability, indicating that ATF4-mediated REDD1 and Sestrin2 expression is involved in cell sensitivity to metformin and lapatinib.
In conclusion, ATF4-mediated REDD1 and Sestrin2 expression triggered by metformin plays an important role in mTORC1 inhibition independent of AMPK activation, and this signalling pathway could have therapeutic value. | 2021-07-13T13:52:03.082Z | 2021-07-12T00:00:00.000 | {
"year": 2021,
"sha1": "1a1c32b6fdca5bc7ba870f1057e16d53f3905be5",
"oa_license": "CCBY",
"oa_url": "https://bmccancer.biomedcentral.com/track/pdf/10.1186/s12885-021-08346-x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a89b3e484f4ca22095e92b7d468d7fa4d1bfe65c",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
56288675 | pes2o/s2orc | v3-fos-license | Twist-Routing Algorithm for Faulty Network-on-Chips
This paper introduces Twist-routing, a new routing algorithm for faulty on-chip networks, which improves on Maze-routing, a face-routing based algorithm that uses deflections in routing, and achieves full fault coverage and fast packet delivery. To build the Twist-routing algorithm, we use bounding circles, borrowing the idea from the GOAFR+ routing algorithm for ad-hoc wireless networks. Unlike Maze-routing, whose path length is unbounded even when the optimal path length is fixed, in Twist-routing the path length is bounded by the cube of the optimal path length. Our evaluations show that the Twist-routing algorithm delivers packets up to 35% faster than Maze-routing with uniform traffic and an Erdős–Rényi failure model, as the failure rate and the injection rate vary.
Introduction
Transistor technology continues to scale in microprocessors, and more and more power-efficient cores are integrated on a single chip. The communication between these on-chip cores should be efficient. Therefore, networks-on-chip (NoCs), instead of simple buses, are becoming a promising choice for on-chip interconnects for their better scalability [1]-[6]. Unfortunately, the reliability of the on-chip components is reduced as critical dimensions shrink, and a NoC might be a single point of failure [7]. As the silicon ages, the error rates become quite high [8], because of oxide breakdown, electromigration, and thermal cycling [7]. Hence, it is critical that some failures in the network do not cause an entire chip to fail.
There are some NoC reliability solutions based on architectural protection against faults in the router logic [9] [10] [11]. But not all faults can be tolerated this way [12].
In recent works, faults are modeled by disabling links, and a complete router loss is modeled by marking all the links connected to the affected router as faulty. The goal is to route packets around faults so that they finally reach their destinations. Recent route-reconfiguration solutions to bypass faulty links or routers can be broadly divided into two kinds: buffered solutions and deflection solutions. Buffered solutions include Ariadne [13], uDirec [12], and Hermes [14], which all utilize traditional wormhole routing [15] and routing tables. Those algorithms typically take some time to update routing tables when a new fault is detected, and incur reconfiguration overhead. Deflection solutions for non-faulty chips were introduced by the BLESS algorithm [16] to overcome the significant energy consumption and design complexity caused by buffer usage. Then, CHIPPER [17] and minBD [18] developed the idea of deflection routing. For faulty chips, the Maze-routing algorithm provides a deflection routing algorithm, which is the first routing algorithm to provide guaranteed delivery in a fully-distributed manner at low cost and low reconfiguration overhead [19].
Maze-routing is the state-of-the-art deflection routing solution for faulty chips. However, the path length found by Maze-routing is unbounded even when the optimal path length is fixed. We propose an improved algorithm named Twist-routing, taking inspiration from the GOAFR+ routing algorithm, which was originally proposed for ad-hoc wireless networks [20] [21] [22]. Using our algorithm, the path length is bounded by the cube of the optimal path length. Our algorithm inherits the properties of Maze-routing, and provides guaranteed delivery at low cost and the same low reconfiguration overhead. The experiments show that our algorithm is 35% faster than Maze-routing when the failure rate equals 0.3 and the injection rate is 0.003, and it remains fast as the injection rate increases.
Twist-Routing Algorithm
The Twist-routing algorithm is a practical routing algorithm for faulty NoCs, based on Maze-routing for faulty NoCs and the GOAFR+ routing algorithm for ad-hoc wireless networks. The fault model is described in Section 0. We briefly review the Maze-routing algorithm in Section 2.1. In Maze-routing, a packet alternates between greedy and face-routing [23] modes. In Twist-routing, these two modes remain, but we use bounding circles to limit the search range in a face-routing step, as proposed in Section 2.3. This enables us to prove a theoretical bound for Twist-routing in Section 2.4. The interactions of Twist-routing and deflection are described in Section 2.5.
The Model
The model of faulty on-chip routing is a mesh, where routers are placed at each grid point and links are available between adjacent routers. Each router can be good or bad, and each link can be healthy or faulty bidirectionally. A bad router is modeled by disabling all four of its links. In modern chips, packets are split into flits and routed from the source node to the destination. In the routing algorithm, each router accepts input flits from all nearby healthy links, permutes them according to some rules, and sends them back onto all nearby healthy links. Because links are bidirectional, there are as many output links as input links, so all flits can go somewhere after the routing.
The Maze-Routing Algorithm
Maze-routing adds a header to each flit, containing some metadata of this flit: src, the source; dst, the destination; best md, the closest Manhattan distance to dst that the packet has reached so far, assuming a fault-free mesh; mode, being one of greedy, clockwise face-routing, or counter-clockwise face-routing; and trav n and trav dir, the node and direction which indicate that the destination is unreachable if it is visited again.
In Maze-routing, each flit is routed to a productive and healthy output if possible.
This is called the greedy mode. If there is no such output, the flit switches itself into face-routing mode (choosing clockwise or counter-clockwise at random). In clockwise face-routing mode, the flit takes the first healthy output on the left of the ray from cur to dst, and then proceeds clockwise. In counter-clockwise face-routing mode, the flit takes the first healthy output on the right of the ray, and then proceeds counter-clockwise. Effectively, the flit traverses the face underlying the ray from cur to dst. The flit changes back to greedy mode when it reaches a router that can forward it closer to its destination than the node where it entered face-routing mode, i.e., where the best md in the header can be reduced by a neighbor link. If the best md cannot decrease until the flit has traversed the whole face, which is detected by revisiting trav n in the direction of trav dir, then there is no path between src and dst. We can drop this flit and report the failure to src using the same algorithm, as needed.
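The greedy-mode rule above can be sketched in a few lines of Python. The header fields follow the text; the mesh representation, function names, and the deterministic tie-breaks are assumptions of this sketch, not the paper's implementation:

```python
from dataclasses import dataclass

# Per-flit header carried by Maze-routing (field names follow the text).
@dataclass
class Header:
    src: tuple
    dst: tuple
    best_md: int          # closest Manhattan distance to dst reached so far
    mode: str = "greedy"  # "greedy", "cw" (clockwise), or "ccw"

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def greedy_step(cur, hdr, healthy):
    """Pick a productive, healthy neighbour; otherwise enter face routing.

    `healthy` is the set of usable neighbours of `cur` (the paper models
    faults by disabling links, so faulty neighbours are simply absent).
    """
    productive = [n for n in healthy
                  if manhattan(n, hdr.dst) < manhattan(cur, hdr.dst)]
    if productive:
        nxt = productive[0]
        hdr.best_md = min(hdr.best_md, manhattan(nxt, hdr.dst))
        return nxt
    # No productive healthy output: switch to face-routing mode.
    # (Maze-routing picks cw/ccw at random; fixed here for determinism.)
    hdr.mode = "cw"
    return None
```

A flit at (0, 0) heading to (2, 0) with healthy neighbours {(1, 0), (0, 1)} takes the productive hop (1, 0) and lowers best md to 1; with only (0, 1) available it enters face-routing mode instead.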
The Use of Bounding Circles
Twist-routing is based on Maze-routing, with the extra usage of bounding circles. The bounding circle is always centered at the destination of the flit, and its radius is recorded in the header, namely c. Notice that in Maze-routing, once face-routing mode is chosen, the direction is fixed until the flit changes back into greedy mode. In Twist-routing, we draw a bounding circle with an initial radius c0 and enlarge the radius by a factor α whenever the circle must grow. We use these values in our experiments.
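A minimal sketch of the bounding-circle test and its exponential enlargement. The growth factor α = 2 and the Euclidean metric are assumptions of this sketch; the paper only fixes the circle's center at the destination and enlarges the circle when the flit meets its boundary:

```python
def within_circle(node, dst, radius):
    """True if `node` lies inside the bounding circle centred at `dst`."""
    return (node[0] - dst[0]) ** 2 + (node[1] - dst[1]) ** 2 <= radius ** 2

def on_boundary_hit(radius, alpha=2.0):
    """Enlarge the circle by factor alpha when the flit reaches the boundary.

    alpha = 2 is an assumed growth factor; the text only requires
    exponential enlargement.
    """
    return alpha * radius
```

In face-routing mode the flit first reverses direction at the boundary; only after both directions have hit the boundary is the radius enlarged, which is what bounds the search range of a face-routing step.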
Proofs of Being Faster
Maze-routing can be very bad in some cases (see Figure 2 for one example of such cases).
Assume the big tree contains n edges. Maze-routing randomly chooses between two directions when entering face-routing mode. If Maze-routing chooses the good direction, the flit reaches the destination in 4 hops. If Maze-routing chooses the bad direction, the flit has to go into the big tree and come all the way back, taking 2n + 10 hops to reach the destination in total. On average, Maze-routing takes n + 7 hops, which is O(n). In this example, Twist-routing also chooses between two directions. One direction leads to 4 hops. If the flit takes the other direction, it turns back without entering the tree because of the bounding circle, and takes 8 hops to reach its destination. On average, it takes only 6 hops.
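The averages in this example can be checked directly; a trivial sketch of the arithmetic (not simulator code):

```python
def maze_expected_hops(n):
    # Good direction: 4 hops. Bad direction: traverse the n-edge tree and
    # come back, 2n + 10 hops. Directions are chosen uniformly at random.
    return (4 + (2 * n + 10)) / 2   # = n + 7, i.e. O(n)

def twist_expected_hops():
    # Good direction: 4 hops. Bad direction: the bounding circle turns the
    # flit back after 8 hops, so it never enters the tree.
    return (4 + 8) / 2              # = 6, independent of n
```

For a tree of n = 100 edges this gives 107 expected hops for Maze-routing versus 6 for Twist-routing.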
In the previous example, the length of the optimal path m is a constant, but Maze-routing needs an expected n + 7 hops. So Maze-routing cannot be bounded by any expression of m. However, Twist-routing runs in O(m³) hops, which is asymptotically better than Maze-routing. Now we prove this bound by two theorems.
Theorem 1. If the destination of a flit is reachable from the source, and m is the length of the optimal path of this flit, the radius of the largest bounding circle used by Twist-routing without deflection is no more than max(c0, α²m), where c0 is the initial radius of the bounding circle.
Proof. There is a case where we never enlarge the bounding circle, so the largest circle is the initial one, with radius c0. Otherwise, we only enlarge the bounding circle to C1 with radius αk if we meet the boundary of the current bounding circle C with radius k. Only if we later meet the other boundary of C may we meet the boundary of C1 and enlarge the bounding circle again. So if we find an edge that leads closer to the destination within the bounding circle C with radius k, we will not meet the other boundary of C, and the radius of the bounding circle never exceeds αk. Assume that we use a bounding circle with radius c such that c ≤ m < αc. We want to prove that the radius of the largest bounding circle never exceeds α²m, and it is enough to show that it never exceeds α²c. By the argument above, it is then enough to show that within the bounding circle with radius αc, face routing can always find an edge that goes closer to the destination. Suppose not; then assume that in the face-routing step we go through a path p. The path p splits the bounding circle with radius αc into two parts, and exactly one of them is reachable from the source within the bounding circle of radius αc. In other words, the destination is unreachable from the source within the bounding circle with radius αc. But since the length of the optimal path from the source to the destination is m, the optimal path lies completely inside the bounding circle of radius αc, i.e., the destination is reachable from the source within the bounding circle. That is a contradiction. □

Since the radii grow exponentially and the largest is O(m), and face routing within a circle of radius r costs O(r²) hops, the total hops of one face-routing step are O(m²). Now consider that 0 ≤ best md ≤ m, and each reduction of best md takes at most O(m²) hops, so we need O(m³) hops in total to transport this flit using Twist-routing. □
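The O(m²) face-routing bound rests on a geometric sum: radii grow by a factor α up to O(m), and the cost inside a circle of radius r is taken as O(r²), so the last circle dominates. A numeric check under these assumptions (c0, α, and the unit cost constant are illustrative choices of this sketch):

```python
def face_routing_cost(m, c0=1.0, alpha=2.0, const=1.0):
    """Sum const*r^2 over radii c0, alpha*c0, ... up to the first radius
    reaching alpha**2 * m (the Theorem 1 cap)."""
    total, r = 0.0, c0
    while r < alpha ** 2 * m:
        total += const * r * r
        r *= alpha
    total += const * r * r  # cost of the last (largest) circle
    return total

# With alpha = 2, the ratio between consecutive terms is 4, so the sum is
# at most 4/3 times its last term: the whole phase stays within a constant
# factor of m**2.
```

For m = 10, 100, 1000 the total stays between (α²m)² and (4/3)·(2α²m)², confirming the Θ(m²) behaviour of one face-routing phase under these assumptions.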
Deflection Implications
At one router, there are at most 4 input flits. Some flits have to be buffered or deflected; for a deflected flit, next is the next router of this flit, and its mode is reset to greedy. This makes the header and the state of this flit consistent.
To avoid deadlocks and livelocks, our algorithm needs to work with some deflection-based mechanism proposed in the literature. We mostly use minBD due to its high performance. The original method to avoid livelocks in minBD is to periodically make one flit golden for a long time L. However, in faulty chips, L needs to be at least as large as the longest path in the graph, which can be as large as O(n²), where the chip is n by n. This renders the golden method inefficient for avoiding locks. Instead of making one flit golden, we globally prioritize old flits over new flits to avoid livelocks. And we disable the buffer redirection in minBD because it is not compatible with our oldest-flit-based livelock-avoiding method².
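The oldest-flit prioritization can be sketched as a simple age-ordered port arbitration. The data layout and the deterministic deflection choice are assumptions of this sketch, not the paper's router pipeline:

```python
def arbitrate(flits, outputs):
    """Assign output ports to flits, oldest first; losers are deflected.

    flits: list of (age, desired_port); outputs: set of healthy ports.
    Because the globally oldest flit always wins its desired port, it
    always makes progress, which avoids livelock without minBD's
    golden-flit period L (assumes len(flits) <= len(outputs), which holds
    in the mesh model since links are bidirectional).
    """
    routed, free = {}, set(outputs)
    for age, want in sorted(flits, key=lambda f: -f[0]):  # oldest first
        if want in free:
            port = want
        else:
            port = min(free)  # deflected; deterministic pick for this sketch
        free.discard(port)
        routed[(age, want)] = port
    return routed
```

With flits aged 9, 5, 2 contending, the age-9 flit keeps its desired port and the younger ones absorb the deflections.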
Simulations
We compared the Twist-routing algorithm with the original Maze-routing using an ad-hoc simulator³. Note that in Maze-routing, flits are independent of each other, and multiple flits are assembled into the original packet when received. For simplicity, we assume there is only one flit per packet in our simulator. We implement Maze-routing and the minBD deflection method with buffer size equal to 4 in our simulator. Meanwhile, we implemented Twist-routing with minBD, too. In both algorithms, we use the oldest-flit-based livelock-avoiding method and no buffer redirection.
In order to compare the performance of the two algorithms, we computed the average flit latency in the network under different injection rates using uniform traffic⁴. We use 32 × 32 networks for evaluation. We use the Erdős–Rényi model to generate faulty links, where the failure rate of any edge is 0.1 or 0.3. We generate 5 faulty chips for each case and compute the average result across them. For each case, we run the simulations for 1000 cycles.
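The Erdős–Rényi failure model used in this evaluation can be sketched as follows; the link representation and function names are mine:

```python
import random

def mesh_links(n):
    """All bidirectional links of an n-by-n mesh (2*n*(n-1) of them)."""
    links = []
    for x in range(n):
        for y in range(n):
            if x + 1 < n:
                links.append(((x, y), (x + 1, y)))
            if y + 1 < n:
                links.append(((x, y), (x, y + 1)))
    return links

def faulty_links(links, p, seed=None):
    """Erdos-Renyi failure model: each link fails independently with
    probability p."""
    rng = random.Random(seed)
    return {l for l in links if rng.random() < p}
```

For the 32 × 32 networks above, `mesh_links(32)` yields 1984 links, and `faulty_links(links, 0.3, seed)` produces one sampled faulty chip; the evaluation averages over 5 such samples per setting.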
In a typical setting, the distances to deflect clockwise or counter-clockwise can be very different. By backtracking and trying the other direction when running out of the bounding circle, our algorithm should provide better performance than the original Maze-routing algorithm. The simulation results confirm this conclusion. After careful measurement, in the case where the failure rate equals 0.3 and the injection rate is 0.003, Twist-routing is 35% faster than Maze-routing. When the injection rate increases, Twist-routing remains fast (see Figure 3 for details of all results). ² When buffer redirection is enabled, we cannot avoid redirecting the oldest flit into the buffer, because the local information is not enough to determine whether a flit is globally the oldest or not. If the oldest flit enters the buffer, the delivery guarantee will be broken.
Figure 2. A setting where Twist-routing performs much better than Maze-routing.

Theorem 2. If the destination of a flit is reachable from the source, and m is the length of the optimal path of this flit, Twist-routing can find a path with length O(m³) for this flit without deflection.

Proof. Twist-routing consists of face-routing steps and greedy-routing steps. A greedy step reduces best md by one, and each face-routing step also reduces best md by one¹.

When deflection is enabled, the productive outputs of a flit may be taken by other flits. If such a case happens, the flit may take a non-productive output, exit the face it is traversing, or be buffered and reappear on other input ports later. These behaviors result in inconsistency between the header and the state of the flit.

¹ Actually, best md decreases in the next greedy step instead of the face-routing step, but since each face-routing step is always followed by a greedy step, we may regard the next greedy step as part of the face-routing step, and say that a face-routing step reduces best md by one.
"year": 2016,
"sha1": "9c3f22ddabf61afbd60b81d1a0aa2204e62cb950",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=71928",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "9c3f22ddabf61afbd60b81d1a0aa2204e62cb950",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
250498569 | pes2o/s2orc | v3-fos-license | Evaluation of the effect of different sedative doses of dexmedetomidine on the intestinal motility in clinically healthy donkeys (Equus asinus)
Aim Gastrointestinal effects of different doses of dexmedetomidine in donkeys are still unidentified. The current study aimed to evaluate the impact of different doses of dexmedetomidine on the motility of selected parts of the gastrointestinal tract in donkeys using transabdominal ultrasonography. Materials and methods An experimental crossover study was conducted on 30 healthy donkeys of both sexes (15 males and 15 females; 160 ± 60 kg). With a two-week washout period, each donkey received an injection of either a normal saline solution or one of three different doses of dexmedetomidine (3, 5, and 7 μg/kg, respectively). All medications were administered intravenously in equal volumes. The contractility of selected intestinal segments (duodenum, jejunum, left colon, right colon, and cecum) was measured 3 min before administration (zero time) and at 15, 30, 45, 60, 90, and 120 minutes after administration. Results Small and large intestinal motility was within the normal ranges before IV injection of normal isotonic saline or dexmedetomidine at doses of 3, 5, and 7 μg/kg. Two-way repeated measures ANOVA of the data displayed a statistically significant interaction between time and treatment for the contractility of each of the duodenum (P = 0.0029), jejunum (P = 0.0033), left colon (P = 0.0073), right colon (P = 0.0035), and cecum (P = 0.0026), implying that the impact of treatment on gastric motility varied among different time points. The simple main effect analysis revealed that IV dexmedetomidine at the 3, 5, and 7 μg/kg doses significantly inhibited (P ≤ 0.01) bowel contractility compared to the administration of isotonic saline. Conclusion A dose-dependent inhibitory effect of dexmedetomidine on intestinal motility was observed in donkeys following intravenous administration. This inhibitory effect on intestinal motility should be considered in clinical practice.
Dexmedetomidine, the active enantiomer of medetomidine, is the most potent alpha-2 adrenoceptor agonist, with calming, analgesic, and muscle-relaxing properties [5,6]. Dexmedetomidine has beneficial pharmacological properties, including rapid distribution and a short distribution half-life, which encourages its use in equids. It allows rapid changes in the depth of sedation and rapid recovery after stopping its infusion [7].
No previous studies have investigated the effect of dexmedetomidine on the gastrointestinal tract of donkeys, although previous research in rats demonstrated the effect of dexmedetomidine on gastric emptying and gastrointestinal transit [16]. Another study assessed the effect of a low dose of dexmedetomidine on the gastrointestinal tract of humans and revealed a decrease in the gastric emptying rate [16,17]. There is a paucity of data describing the gastrointestinal effects of different analgesic and sedative doses of dexmedetomidine in equines. The current study hypothesized that injection of dexmedetomidine at different doses would have an inhibitory effect on gastrointestinal function in donkeys. Therefore, this research was designed to assess the impact of intravenous injection of dexmedetomidine at doses of 3, 5, and 7 μg/kg on intestinal peristaltic motility in healthy donkeys using transabdominal ultrasonography.
Results
Clinical examination revealed that all the selected donkeys were clinically healthy throughout the experiment. There were no signs of infection at the needle puncture site, regional IV infusion site reactions, sudden-onset hypersensitivity, or nervous system disorders throughout the observation period following IV isotonic saline solution or dexmedetomidine at the different doses used. Each IV injection of isotonic saline solution had an analgesia score of 0 (0-0), manifested by a strong reaction to painful stimuli; a sedation score of 0 (0-0), characterized by the donkeys being conscious and sensitive to noise and environmental stimuli; and an ataxia score of 0 (0-0), characterized by the donkeys being able to walk quickly without stumbling.
Five minutes after the IV injection of dexmedetomidine at dose rates of 3, 5, and 7 μg/kg, the selected donkeys showed complete perineal and tail analgesia, with a score of 3 (3-3), lasting until 30 minutes in all treatment groups. The level of analgesia was moderate in both the 3 and 5 μg/kg groups, with a score of 2 (1-2), but the scores were higher in the 7 μg/kg group, with a score of 3 (3-3), at 45 and 90 minutes post dexmedetomidine injection (Table 1).
Mild sedation, manifested by an intermittent response to external stimuli, lethargy, and a minor drop of the head, eyelids, and lips, was recorded in donkeys after injection of 3 μg/kg dexmedetomidine, with a sedation score of 1 (1-1). However, deep sedation, manifested by reduced awareness, drooping of the head, lips, and eyelids, and a decreased response to external stimuli, was noted at 5 and 7 μg/kg of dexmedetomidine, with a sedation score of 3 (2-3). Sedation started at 5 minutes and lasted until 45 minutes at 3 μg/kg, or 90 minutes at 5 and 7 μg/kg, post dexmedetomidine administration (Table 2).
Moderate ataxia and stumbling walking began at 5 minutes in all dexmedetomidine groups, with an ataxia score of 2 (2-2). Ataxia lasted up to 15 minutes after administration for 3 μg/kg, up to 30 minutes for 5 μg/kg, and up to 45 minutes for 7 μg/kg (Table 3).

Table 1. Analgesia score, median (range), post-intravenous injection of isotonic saline or dexmedetomidine (3, 5, and 7 μg/kg) in donkeys. a,b,c,d: Variables with different superscript letters in the same column are significantly different at P < 0.05.
The results of the two-way repeated measures ANOVA demonstrated a statistically significant (P < 0.01) interaction between time and treatment for the contractility of each of the duodenum, jejunum, left colon, right colon, and cecum, implying that the impact of treatment on gastric motility varied among different time points.
The simple main effect analysis revealed that IV dexmedetomidine at the 3, 5, and 7 μg/kg doses significantly altered bowel contractility compared with the administration of isotonic saline (P ≤ 0.01). After IV injection of normal saline in the donkeys under experiment, the contractility of each of the examined portions of the small and large intestine did not significantly fluctuate during the 2 h monitoring period and stayed within typical levels until 120 minutes post-administration, whereas the contractility of each of the examined portions of the small and large intestine changed after IV injection of dexmedetomidine in the chosen donkeys (Figs. 1, 2, 3, 4 and 5). At 15, 30, 45, and 60 minutes after injection, intravenous dexmedetomidine at 3 μg/kg caused a significant decrease in both duodenal (P ≤ 0.003) and jejunal (P ≤ 0.005) motility compared to placebo. At 30 minutes after administration, the minimum contractions (contractions / 3 minutes) of the duodenum and jejunum were 3.5 ± 1.2 and 3.5 ± 1.3, respectively (Figs. 1 and 2). Nevertheless, dexmedetomidine at the 5 and 7 μg/kg doses caused a significant reduction in both duodenal (P ≤ 0.003) and jejunal (P ≤ 0.005) motility frequencies compared to placebo at 15, 30, 45, 60, and 90 minutes post-injection. The minimum contractions (contractions / 3 minutes) of the duodenum and jejunum after IV dexmedetomidine at 5 μg/kg were 2.7 ± 1.0 and 2.5 ± 1.0, respectively, noted at 45 minutes post-administration (Figs. 1 and 2). The minimum contractions (contractions / 3 minutes) of the duodenum and jejunum after IV dexmedetomidine at 7 μg/kg were 1.5 ± 1.1 and 1.5 ± 1.1, respectively, noted at 60 minutes post-administration (Figs. 1 and 2).
The left colon showed significantly decreased motility at 15, 30, and 45 minutes post IV injection of 3 μg/kg dexmedetomidine.

Table 2. Sedation score, median (range), post-intravenous injection of isotonic saline or dexmedetomidine (3, 5, and 7 μg/kg) in donkeys. a,b,c,d: Variables with different superscript letters in the same column are significantly different at P < 0.05. Columns: Group; Time zero; 5 minutes; 15 minutes; 30 minutes; 45 minutes; 60 minutes; 90 minutes; 120 minutes.

Table 3. Ataxia score, median (range), post-intravenous injection of isotonic saline or dexmedetomidine (3, 5, and 7 μg/kg) in donkeys. a,b,c: Variables with different superscript letters in the same column are significantly different at P < 0.05.
Discussion
The results of the current study showed that intravenous injection of dexmedetomidine at 3, 5, and 7 μg/kg significantly inhibited the peristaltic movement of different intestinal segments. Dexmedetomidine is an α2-adrenoreceptor agonist that is gaining interest as part of balanced anesthetic protocols in equine anesthesia. It provides deep sedation and has a minimum alveolar concentration sparing effect [18]. It may result in a higher quality of recovery than the other balanced protocols used in horses [19,20]. Dexmedetomidine has drawn researchers' attention to its adverse effects on various parts of the body, including the gastrointestinal tract. Therefore, this study is the first to investigate the effects of dexmedetomidine on the motility of both the small intestine (duodenum and jejunum) and the large intestine (left colon, right colon, and body of cecum) in donkeys (Equus asinus) using transabdominal ultrasonography.
The frequency of duodenal, jejunal, left colon, right colon, and cecal contractions in donkeys closely resembles that reported in horses [21,22] and in the previous study [23].
In the current study, the analgesic effect of dexmedetomidine was observed at 5 minutes after IV administration and lasted up to 30 minutes post-administration for the 3 μg/kg dose, 45 minutes for 5 μg/kg, and 60 minutes for the 7 μg/kg dose, consistent with the findings of previous studies [8,9]. The doses of dexmedetomidine used in this investigation were determined based on prior equine studies [8,24]. The sedative effect of dexmedetomidine was observed 5 minutes after its IV administration and lasted 60 minutes post-administration for the 3 μg/kg dose and 90 minutes for both 5 and 7 μg/kg. These findings are comparable to those previously reported in donkeys, where increasing dexmedetomidine dosages from 4 to 5 μg/kg increased the sedation time from 30 to 60 minutes [8]. Dexmedetomidine also has a dose-dependent sedative effect that does not exceed a certain level [25]. Therefore, dexmedetomidine has a beneficial pharmacological profile, including rapid redistribution and a short half-life [18,26]. There were significant differences between treatments for the analgesia and sedation scores. For the ataxia scores, there were no significant differences between treatments. This finding was confirmed by [27], which demonstrated that a higher dose of epidural xylazine in equines has not been proven to induce ataxia. More research is needed to determine whether increasing dexmedetomidine doses causes substantial changes in ataxia scores.
The anatomical locations and the ultrasonographic appearance of the visualized sections of both the small and large intestine on abdominal ultrasonography agreed with those previously described [28,29]. Before IV injection of normal isotonic saline or the different selected doses of dexmedetomidine in the donkeys under study, the regularity of contractility of both the small (duodenum and jejunum) and large (left colon, right colon, and cecum) intestines was within normal ranges, directly comparable to those reported by [23,28,30]. The IV injection of isotonic saline solution in the donkeys did not influence the contractility of the visualized sections of the small and large intestine during the 120-minute monitoring period, which stayed within normal ranges until 120 minutes post-injection, as formerly described in humans [31]. The effect of the different doses of dexmedetomidine on gastrointestinal motility was consistent across all donkeys at 90 minutes. Clonidine and dexmedetomidine are alpha-2 adrenoceptor agonists that induce sedation, reduce anesthetic and analgesic doses, and improve peri-operative hemodynamic balance [32]. In animal and human studies, dexmedetomidine similarly inhibits gastric, small bowel, and colonic motility [32,33].
In previous studies conducted on humans and animals, dexmedetomidine inhibited motor function in all segments of the gastrointestinal tract. Its antiperistaltic effects are due to the inhibition of excitatory cholinergic pathways in the enteric nervous system via α2-adrenoceptors or to the activation of inhibitory neural pathways [21,[34][35][36]. Dexmedetomidine is a promising agent for palliative sedation due to its unique mechanism of action, which causes dose-dependent sedation without a significant risk of respiratory depression [4,35]. In horses, decreased gastrointestinal motility was an anticipated finding following administration of dexmedetomidine; it is one of the negative effects of α2-adrenoceptor agonists on equine gastrointestinal motility that has been extensively described in the literature [37,38]. Furthermore, detomidine and medetomidine decreased gastrointestinal motility in horses for 120 and 90 minutes [38], respectively, whereas in the current study, the inhibitory effect of dexmedetomidine on the donkeys' gastrointestinal motility lasted only 60 minutes. [39] demonstrated that donkeys appear to metabolize many anesthetic and sedative drugs differently than horses.
Based on these findings, intravenous injection of dexmedetomidine in the studied donkeys resulted in a significant decline in the motility of the duodenum, jejunum, left colon, right colon, and cecum when compared to placebo. The greatest inhibitory effect occurred at the 7 μg/kg dose and lasted longer than with the other treatments. Notably, cecal motility was the most affected of all the intestinal segments in the healthy donkeys. In donkeys, the effect of IV dexmedetomidine began at 5 minutes and lasted up to 30-90 minutes depending on the dose given. Pharmacokinetic studies revealed that dexmedetomidine concentrations decreased rapidly, with an elimination half-life ranging between 7.19 and 8.87 minutes, and the last detection time varied between 30 and 60 minutes. The plasma concentrations of dexmedetomidine peaked 1-4 minutes post-administration [35].
The current study's limitations necessitate further research to investigate the pharmacokinetics of dexmedetomidine in donkeys. In addition, since our study's grading system is subjective, it may not accurately assess sedative, analgesic, and ataxic effects. The same person who measured analgesia, sedation, and ataxia was blinded to the medication administered to overcome this limitation. As the current study was performed on healthy donkeys depending on the investigational design, the results obtained may not reflect the actual characteristics of diseased donkeys with disturbed gastrointestinal tract motor function. Consequently, more research is needed to determine the influence of this drug in donkeys with impaired gastrointestinal tract motor function.
Conclusion
The current study revealed that IV administration of dexmedetomidine at different recommended sedative doses caused a potent inhibitory effect on small and large intestinal peristaltic movement in healthy donkeys (Equus asinus). Consequently, it may be beneficial to raise awareness of this potential effect, particularly when the drug is used in equines with disturbed gastrointestinal tract motility.
Study sample
This experimental study included 30 healthy donkeys (Equus asinus) (15 males and 15 females) aged between 5 and 9 years and weighing between 100 and 220 kg. The inclusion criteria for the selected donkeys were (1) clinically healthy, (2) free from any gastrointestinal disorders, (3) free from any evidence of other systemic diseases, and (4) easily manageable without any sedation. The donkeys were purchased from Dakahlia province (Egypt). They were housed in stalls inside the animal barn for 2 weeks prior to the study. On arrival, the donkeys were immunized and dewormed with ivermectin paste (Bimectin®, Bimeda Animal Health Ltd., Ireland) at a dosage rate of 0.2 mg/kg. The feeding regimen for the selected donkeys was a uniform balanced ration comprising sliced wheat straw ad libitum, grain (1.5 kg), and crushed corn (1.5 kg), supplemented with all the necessary trace elements and minerals. The diet was offered twice daily at fixed times, 7.00 am and 7.00 pm, to reduce the effect of the type of diet on the contractility of the gastrointestinal tract.
Furthermore, the animals had free access to tap water. The Animal Welfare and Ethics Committee of the Faculty of Veterinary Medicine (Code No. R/63) approved all animal care and testing procedures, following the Guidelines for Animal Use and Care published by the Faculty of Veterinary Medicine, Mansoura University, Egypt.
Study design
Each donkey was randomly assigned to one of four trials, with a two-week washout period, each of which began 1 h after feeding. The first group (placebo) received an IV injection of 20 mL of normal isotonic saline. The second, third, and fourth groups (treatment groups) were treated with dexmedetomidine hydrochloride (Precedex®, Lakeforest, USA) at dosages of 3, 5, and 7 μg/kg IV, respectively. For the sedation doses, the required dose of dexmedetomidine for each donkey was prepared and then diluted with sodium chloride to a total volume of 20 mL. One-third of the dose was administered as an IV bolus, with the remaining two-thirds injected slowly over 2 minutes.
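As a numeric illustration of the dilution protocol above, the sketch below computes the total dexmedetomidine dose and the bolus/slow-injection volume split for a given body weight. The function name and the example body weight are ours, not from the study; the 20 mL total volume and the one-third/two-thirds split follow the protocol described.

```python
def dose_plan(weight_kg, dose_ug_per_kg, total_volume_ml=20.0):
    """Return (total dose in ug, IV bolus volume in mL, slow-injection volume in mL).

    Illustrative helper only: the study dilutes the weight-based dose to a fixed
    20 mL, gives one-third as an IV bolus, and the rest slowly over 2 minutes.
    """
    total_dose_ug = weight_kg * dose_ug_per_kg
    bolus_ml = total_volume_ml / 3.0        # one-third given as an IV bolus
    slow_ml = total_volume_ml - bolus_ml    # remaining two-thirds over 2 minutes
    return total_dose_ug, bolus_ml, slow_ml

# Example: a hypothetical 150 kg donkey in the 5 ug/kg treatment group
total_ug, bolus_ml, slow_ml = dose_plan(150, 5)
```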
In the experimental donkeys, analgesia, sedation, and ataxia were measured using a 0 to 3 scoring system, as previously stated [40]. Analgesia was assessed by deep muscle pinpricking with a 2.5-cm-long hypodermic needle. The needle was repeatedly inserted into the underlying tissues via the skin of the neck, shoulder region, coronary band, paralumbar fossa, and hip area. Repetitive head, neck, trunk, limb, and tail movements to avoid the needle, as well as attempts to kick and rotate the head toward the painful site, were observed as progressive pain signals. The needle was placed in slightly different bilateral positions for each test, ranging from caudal to cranial. The period from drug administration to sensation impairment was defined as the time of effect onset. The time between the disappearance and recurrence of the response to pinprick stimuli was defined as the antinociceptive duration. The degree of analgesia was graded from 0 to 3: 0 = no analgesia (strong reaction to harmful stimuli, like kicking); 1 = mild analgesia (mild reaction, such as shifting the head towards the stimulus spot); 2 = moderate analgesia (minimal and recurring reaction); and 3 = complete analgesia (no response to noxious stimulation). The degree of sedation was rated on a scale of 0 to 3: 0 = no sedation (donkeys maintained their original attitude and were sensitive to noise and stimuli); 1 = mild sedation (reduced attention with slight responses to external stimulation, irregular stumbling, and the ability to resume walking); 2 = moderate sedation (somnolence, dullness, and occasional response to external stimuli; slight drooping of the head, lips, and upper eyelids; and marked stumbling while walking); and 3 = deep sedation (recumbency or collapsing while walking; obvious lethargy, head droop, and failure to respond to environmental cues).
The degree of ataxia was graded from 0 to 3 as follows: 0 = normal; 1 = mild (slight stumbling but quickly able to walk afterward); 2 = moderate (observable stumbling and an apparently ataxic walk); 3 = extreme (recumbency or falling while walking). The same person who measured analgesia, sedation, and ataxia was blinded to the medication administered. The degree of analgesia, sedation, and ataxia was measured before injection (time zero) and at 5, 15, 30, 45, 60, 90, and 120 minutes after injection. Since the solid phase of gastric emptying begins within 30 minutes of eating, each trial in this study began 1 hour after the donkeys had finished eating.
The motility of each of the duodenum, jejunum, left colon, right colon, and cecum was measured over 3 minutes via trans-abdominal ultrasonography before administration (time zero) and at 15, 30, 45, 60, 90, and 120 minutes after injection of the drug. Motility was expressed as contractions per 3 minutes. The donkeys were not given food or water during the ultrasound scanning.
Transabdominal ultrasonography
The abdominal region extending from the seventh intercostal space backward to the lumbar fossa was bilaterally clipped and prepared for the ultrasonographic examination. Coupling gel was applied to those areas, and a linear transducer (2.5-5 MHz) (iVis 60 Expert Vet®, Chison Medical Imaging Co. Ltd., China) was selected. The scan depth was initially set to maximum penetration and then adjusted to different depths based on the individual structure being scanned to obtain the best definition of structures and maximize image quality. The left colon and jejunum were scanned from the left abdominal wall, and the duodenum, right colon, and cecum were examined from the right abdomen. As previously stated [28,29], the physiological position and the structure of the ultrasound image were used to identify the specific parts of the intestine in each donkey. All ultrasound procedures to quantitatively assess the motility of the selected parts of the intestine were initiated 1 h after the donkeys finished eating (the first meal), were performed by the same person to prevent any variation, and were reviewed by two experts.
Data analysis
Data were analyzed using SPSS software for Windows (version 21.0; IBM Inc., Chicago, IL). Normality was assessed using the Kolmogorov-Smirnov test. The non-parametric Kruskal-Wallis test with post hoc Dunn's multiple comparison test was used at the various time points to evaluate statistical differences between treatments for the evaluated parameters (analgesia, sedation, and ataxia). For the parametric data of the intestinal contractility frequencies, two-way repeated measures ANOVA was used to evaluate the impact of time, treatment, and the interaction between time and treatment. Wilks' lambda was used to evaluate within-group effects and the time × treatment interaction. Where Wilks' lambda revealed a statistically significant difference between groups, one-way ANOVA was used to determine which group differed statistically at each time point. The data were presented as run charts of intestinal contractility during the observation period in both experiments. The level of statistical significance was set at P < 0.05 in all statistical analyses.
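For readers who want to reproduce the non-parametric comparison outside SPSS, a minimal pure-Python computation of the Kruskal-Wallis H statistic is sketched below (tie-averaged ranks, no tie correction). The post hoc Dunn's comparisons and the significance lookup are not included, and the example scores are invented for illustration.

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction) for k independent groups."""
    values = [v for g in groups for v in g]
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    ranks = [0.0] * n
    i = 0
    while i < n:                       # assign average ranks to tied values
        j = i
        while j + 1 < n and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j + 2) / 2.0   # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    h_sum, pos = 0.0, 0
    for g in groups:                   # sum of (rank sum)^2 / group size
        r = sum(ranks[pos:pos + len(g)])
        h_sum += r * r / len(g)
        pos += len(g)
    return 12.0 / (n * (n + 1)) * h_sum - 3.0 * (n + 1)

# Hypothetical sedation scores for three treatment groups at one time point
h_stat = kruskal_wallis_h([0, 1, 1], [1, 2, 2], [2, 3, 3])
```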
Isotherm Study, Adsorption Kinetics and Thermodynamics of Lead Using Combination Adsorbent of Chitosan and Coffee Ground Activated Carbon
The presence of lead in water, owing to its natural mobility, can render the water toxic and endanger the environmental ecosystem by causing bioaccumulation within the food chain. The purpose of this study was to determine the maximum adsorption capacity through an isotherm model, ascertain the rate of adsorption kinetics when utilizing chitosan and coffee grounds as adsorbents to reduce lead concentrations in industrial wastewater, and analyze its thermodynamic properties. The research was carried out using experiments in the laboratory followed by quantitative data analysis to determine the isotherm model and adsorption kinetics. The results showed that the adsorption isotherm conforms to the Langmuir isotherm model with a correlation coefficient of 0.9970 and a maximum adsorption capacity of 1.0511 mg.g-1, which indicates that chemical adsorption occurs in a monolayer with a homogeneous distribution of adsorption sites, constant adsorption energy, and negligible interactions between lead metal molecules (adsorbate). The kinetics of lead adsorption using chitosan-coffee grounds activated carbon follows the Weber-Morris/intra-particle diffusion model with a correlation coefficient of 0.9920 and a diffusion rate of 76.512 g.mg-1.hour-1, indicating that intra-particle diffusion is the rate-limiting step in the overall biosorption process. Negative ΔGo values indicate that the adsorption reaction takes place spontaneously, a ΔHo of 0.8130 indicates an endothermic reaction, and a ΔSo of 4.1888 indicates an increase in the randomness of the adsorption process at the adsorbent-lead interface during adsorption.
INTRODUCTION
Bekasi Regency is Southeast Asia's largest industrial zone. A preliminary study conducted on a wastewater sample from one of the textile industries in the Bekasi district showed a lead metal concentration of 1.02 mg/L. Notably, this concentration of lead exceeds the water quality standard limit of 0.1 mg/L as outlined in the 2014 Waste Water Quality Standard, Environment Ministry Ordinance Number 5 (Ministry of Environment, 2014). Lead (Pb) is a heavy metal commonly encountered in effluents originating from the electronics and silicon semiconductor industries, primarily due to its prevalence as a constituent in these materials. The high concentration of lead in these industries can result in its release into the environment. Lead exhibits natural mobility and can be distributed through various means, including chemical reactions, biological processes, geochemical interactions, volcanic activities, and human activities. Because lead is naturally present and mobile in water, its toxicity has made contaminated water a worldwide problem from the 20th century into the 21st century. With a density of 11,400 kg/m3, lead is a heavy metal found in nature and typically occurs in the form of bluish minerals alongside elements like oxygen and sulfur. Due to its toxic characteristics, lead is recognized as a hazardous metal that can lead to neurocognitive impairments. This is attributed to factors such as its lethal dosage, assimilation rate, and half-life within the human body (M. S. Kim et al., 2014).
Various methods, including chemical precipitation, adsorption, membrane filtration, ion exchange, and coagulation-flocculation, are employed to mitigate the presence of heavy metals in wastewater. Recent research efforts have primarily focused on exploring alternative adsorbents that not only offer cost-effectiveness but also exhibit environmentally friendly attributes, characterized by their ease of operation and high efficiency (H. Kim, Hwang, & Sharma, 2014). Chitosan (β-1,4,2-amino-2-deoxy D-glucose) is an organic material derived from chitin obtained through a deacetylation process at high temperatures using a strong base (Nuryono et al., 2020). Chitosan has been used as an adsorbent to reduce heavy metals, but it has the disadvantage of increasing water turbidity, requiring further treatment. Combining chitosan and coffee grounds increases the recyclability of the sorbent, improves the chemical stability and adsorption capacity of the sorbent, and improves the reduction efficiency (Das, Chakraborty, Chatterjee, & Kumar, 2018). The utilization of chitosan and activated carbon derived from coffee grounds as adsorbents has demonstrated effective reduction capabilities for various heavy metals. For instance, cadmium levels were reduced by 74.54%, and nickel levels by 73.43% (Purnama, 2019). Additionally, these adsorbents have been found to efficiently reduce lead, achieving an adsorption efficiency of 92.26% and resulting in a final concentration of 0.774 mg/L within a contact time of 120 minutes (Said, 2018). Furthermore, these materials have also been successful in reducing drug contaminants present in wastewater, including metamizole, acetylsalicylic acid, acetaminophen, and caffeine (Lessa, Nunes, & Fajardo, 2018).
A previous study demonstrated that the reduction of lead concentration in industrial wastewater using natural chitosan and activated carbon from coffee grounds as sorbents resulted in a reduction of 90.86%, yielding a final concentration of 0.09 mg/L (Nurhidayanti, Ilyas, & Suwazan, 2021). Previous research has explored the isothermal model and reaction kinetics of metallic arsenic reduction (Nurhidayanti & Nugraha, 2022), as well as the reaction kinetic analysis and adsorption isotherms of chicken egg shells, membranes, and synthetic dyes (Hevira & Gampito, 2022). However, the proper lead adsorption isotherm model has not yet been studied to determine the adsorption capacity of chitosan and coffee grounds adsorbents in reducing lead concentrations in industrial wastewater (Nurhidayanti et al., 2021; Suwazan & Nurhidayanti, 2022; Suwazan, Nurhidayanti, Fahmi, & Riyadi, 2022). The purpose of this study is to explore the maximum adsorption capacity through an isotherm model, to determine the rate of adsorption kinetics in the use of chitosan and coffee grounds adsorbents in reducing lead concentrations in industrial wastewater, and to investigate its thermodynamic aspects.
METHODS
This study was conducted at PT. Tuv Nord Indonesia and Pelita Bangsa University from June to December 2022. The research employed laboratory experiments followed by quantitative data analysis to determine isothermal models, adsorption kinetics, and thermodynamics. The materials utilized in this study included chitosan, ZnCl2 p.a solution 0.1 N (Merck), HCl p.a solution 0.1 N (Merck), NaOH p.a solution 0.1 N (Merck), lead stock solution 1000 mg/L, and coffee grounds obtained from coffee shops as waste. The tools employed for this study comprised beakers, an analytical balance, filter paper, a volume pipette, funnel, porcelain cup, universal indicator, oven, spatula, acrylic plate, hot plate, sieve, furnace, desiccator, rubber suction/bulb, aluminum foil, ball mill, magnetic stirrer, vacuum, Fourier Transform-Infrared (FT-IR) Perkin-Elmer UATR Spectrum Two, and Scanning Electron Microscopy-Energy Dispersive X-Ray (SEM-EDX) JEOL JSM-6510LA. The research procedure in this study follows the flowchart presented in Figure 1. The research process from stages 1 to 3 was carried out in 2021 (Nurhidayanti et al., 2021). The operational conditions employed included pH control, mass variation ranging from 0.6 to 1.4 grams, activated carbon particle size of 160 mesh, initial lead concentration of 1.02 mg/L, stirring speed of 100 rpm, contact time spanning from 5 to 25 minutes, and a temperature range of 25 to 55°C. The scope of this research is point 4 of the framework in the figure above. The isotherm models used in this study are the Langmuir, Freundlich, Dubinin-Raduskevich (D-R), and Temkin isotherms. The adsorption capacity (q) is determined using Equation 1 (Sunsandee, Ramakul, Phatanasri, & Pancharoen, 2020).
q = (Ci − Ct) × V / m (1)

where q is the biosorption capacity (mg/g), Ci is the initial concentration of lead (mg/L), Ct is the concentration of lead at time t (mg/L), V is the volume of the lead solution (L), and m is the mass of adsorbent used in the reaction mixture (g).
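A direct implementation of Equation 1 might look as follows; the solution volume in the example is an assumed value for illustration, while the concentrations correspond to the initial (1.02 mg/L) and lowest final (0.09 mg/L) lead levels reported in this work.

```python
def biosorption_capacity(c_initial, c_t, volume_l, mass_g):
    """Equation 1: q = (Ci - Ct) * V / m, in mg of lead per g of adsorbent."""
    return (c_initial - c_t) * volume_l / mass_g

# Assumed 1 L of wastewater treated with 1.4 g of adsorbent;
# concentrations taken from the study (1.02 mg/L initial, 0.09 mg/L final)
q = biosorption_capacity(1.02, 0.09, 1.0, 1.4)
```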
Data analysis was performed using the final lead concentration data obtained from the adsorption process with the chitosan-coffee grounds activated carbon adsorbent at varying mass (from 0.6 grams to 1.4 grams) to obtain the maximum adsorption capacity. The results of this analysis are reflected in the isotherm equations.
Biosorption equilibrium data were fitted to the linear Langmuir, Freundlich, Temkin, and Dubinin-Raduskevich (D-R) isotherms. The Langmuir isotherm equation has the non-linear form (Wang & Guo, 2020a):

qe = (qm KL Ce) / (1 + KL Ce) (2)

where qe is the equilibrium biosorption capacity (mg/g), Ce is the concentration at equilibrium (mg/L), qm is the maximum biosorption capacity (mg/g), and KL is the Langmuir constant (L/mg), which can be determined from the linear plot of Ce/qe versus Ce. The Freundlich isotherm equation has the following non-linear form (Wang & Guo, 2020a):

qe = KF Ce^(1/n) (3)

where KF is the Freundlich constant and 1/n is the biosorption intensity. A value of 1/n < 0 indicates that the reaction takes place irreversibly; if 0 < 1/n < 1, the biosorption reaction is favorable, while if 1/n > 1, the biosorption reaction is unfavorable. Plotting ln qe versus ln Ce solves the Freundlich model in Equation (3); KF and n are obtained from the intercept and slope of the resulting regression line. The D-R isotherm model is expressed by the following equations (Wang & Guo, 2020a):

ln qe = ln qm − β ε^2 (4)

ε = RT ln(1 + 1/Ce) (5)

where qm (mg/g) is the maximum biosorption capacity, β is the activity coefficient (mol^2/J^2), and ε (kJ/mol) is the biosorption potential based on Polanyi potential theory. The Temkin isotherm model is expressed by the following equation (Wang & Guo, 2020a):

qe = (RT/b) ln(A Ce) (6)

where R is the universal gas constant, T is the temperature, A (L/g) is the equilibrium binding constant, and b (J/mol) is the Temkin constant related to the heat of biosorption.
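The linear Langmuir fit (Ce/qe versus Ce) reduces to an ordinary least-squares line, as sketched below. The data points are synthetic, generated from assumed parameters (qm = 1.05 mg/g, KL = 40 L/mg, close to the values reported later in this work), purely to show how the slope and intercept of the plot recover qm and KL.

```python
def least_squares(xs, ys):
    """Slope and intercept of an ordinary least-squares line y = a*x + b."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def langmuir_fit(ce, qe):
    """Fit the linear Langmuir form Ce/qe = Ce/qm + 1/(KL*qm); return (qm, KL)."""
    slope, intercept = least_squares(ce, [c / q for c, q in zip(ce, qe)])
    qm = 1.0 / slope
    kl = slope / intercept
    return qm, kl

# Synthetic equilibrium data from assumed qm = 1.05 mg/g and KL = 40 L/mg
ce = [0.05, 0.1, 0.2, 0.4, 0.8]
qe = [1.05 * 40 * c / (1 + 40 * c) for c in ce]
qm_fit, kl_fit = langmuir_fit(ce, qe)
```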
To investigate the mechanism of the adsorption process, the pseudo-first-order (PFO) and pseudo-second-order (PSO) adsorption models, the Elovich model, and the Webber-Morris model were used to test the adsorption data. The pseudo-first-order model (Wang & Guo, 2020b) is expressed by Equation (7):

ln(qe − qt) = ln qe − k1 t (7)

where qe is the equilibrium biosorption capacity (mg/g), qt is the amount of lead adsorbed on the adsorbent at time t (mg/g), t is time, and k1 is the pseudo-first-order rate constant (min-1). The pseudo-second-order model (Wang & Guo, 2020b) is given in Equation (8):

t/qt = 1/(k2 qe^2) + t/qe (8)

where k2 is the pseudo-second-order rate constant (g/mg/min), which is obtained by plotting t/qt versus t.
The Elovich model is expressed in Equation (9):

qt = (1/b) ln(ab) + (1/b) ln t (9)

A graph of qt versus ln t is then made, which yields a slope equal to 1/b and an intercept equal to (1/b) ln(ab). The Webber-Morris model is expressed in Equation (10):

qt = ki t^(1/2) + C (10)

where ki is the intra-particle diffusion constant and C is the intercept related to the boundary layer. A graph of qt versus t^(1/2) is then made, which yields the slope as the value of ki. The appropriate isotherm model and adsorption kinetics were determined based on the correlation coefficient, taking the largest R^2 value closest to 1.0, using Microsoft Excel software.
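The Webber-Morris fit of Equation (10) is likewise a least-squares line in qt versus t^(1/2); the sketch below uses synthetic uptake data from assumed parameters to show how ki and the boundary-layer intercept C are read off the line.

```python
import math

def least_squares(xs, ys):
    """Slope and intercept of an ordinary least-squares line y = a*x + b."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def weber_morris_fit(t, qt):
    """Fit qt = ki * sqrt(t) + C; return (ki, C)."""
    return least_squares([math.sqrt(x) for x in t], qt)

# Synthetic uptake data from an assumed ki = 2.0 and intercept C = 0.5
t = [5, 10, 15, 20, 25]
qt = [2.0 * math.sqrt(x) + 0.5 for x in t]
ki, c = weber_morris_fit(t, qt)
```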
The thermodynamic behavior of the biosorption of lead on the adsorbent can be described by the thermodynamic parameters, including the change in free energy (ΔG°), enthalpy (ΔH°), and entropy (ΔS°), which were calculated based on the following equation (Sunsandee et al., 2020).
ΔG° = −RT ln Keq (11)

where R is the universal gas constant (8.314 J/mol K), T is the temperature (K), and Keq is the equilibrium constant. ΔH° and ΔS° were obtained from the van't Hoff plot of ln Keq versus 1/T.
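Equation (11), together with the van't Hoff relationship commonly used to extract ΔH° and ΔS° from ln Keq versus 1/T, can be sketched as follows; the equilibrium-constant values in the example are invented for illustration.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def delta_g(k_eq, temp_k):
    """Equation 11: standard free-energy change, in J/mol (negative => spontaneous)."""
    return -R * temp_k * math.log(k_eq)

def van_t_hoff(temps_k, k_eqs):
    """Fit ln K = -dH/(R*T) + dS/R by least squares; return (dH in J/mol, dS in J/(mol*K))."""
    xs = [1.0 / t for t in temps_k]
    ys = [math.log(k) for k in k_eqs]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return -slope * R, intercept * R

dg = delta_g(2.0, 298.0)  # illustrative Keq = 2 at 298 K

# Synthetic equilibrium constants obeying ln K = -120/T + 0.6 (assumed)
temps = [298.0, 308.0, 318.0]
dh, ds = van_t_hoff(temps, [math.exp(-120.0 / t + 0.6) for t in temps])
```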
RESULTS AND DISCUSSION
The results of the FT-IR and SEM-EDX analyses, with operating conditions of controlled pH, mass variation of 0.6-1.4 gram, activated carbon particle size of 160 mesh, a chitosan to coffee grounds activated carbon ratio of 50:50, initial lead concentration of 1.02 mg/L, stirring speed of 100 rpm, contact time of 5-25 minutes, and a temperature of 25-55°C, are presented in Figures 2a and 2b. The FT-IR analysis showed the presence of various functional groups in the biosorbent, including CH (as an alkane), NH (possibly as a secondary/primary amine and amide), N=O (nitro), CO (possibly as an alcohol/ether/ester/carboxylic acid/anhydride), CN (amine), and C-Cl (chloride). This indicates that the interaction between activated carbon from coffee and chitosan involves both physical interaction and a chemical reaction that results in the formation of a nitro group (NO2) in the coffee-grounds activated carbon part of the chitosan biosorbent. The introduction of the nitro functional group enhances adsorption capacity due to electrostatic interaction with lead metal cations, thereby increasing the adsorption ability of the chitosan-activated carbon composite (Nurhidayanti, Ilyas, Suwazan, & Fajar, 2022). The SEM-EDX results are consistent with previous research showing that adding activated carbon from coffee grounds to chitosan can improve the biosorbent's active sites and open up more surface pores, increasing the absorption of cadmium and lead metals in PXI industrial effluent (Sahu, Singh, & Koduru, 2021). In comparison to using chitosan adsorbent or coffee grounds activated carbon individually, the combination of the two is more efficient as an adsorbent due to the increase in pore size and quality of the adsorbent. The adsorption capacity of an adsorbent is positively correlated with its surface area, signifying greater efficiency in adsorbing target contaminants (Joshi, Kataria, Garg, & Kadirvelu, 2020).
The use of chitosan coffee grounds sorbent to reduce lead concentrations in industrial wastewater is shown in Figure 3.
The figure above shows that the greatest reduction in lead concentration, to 0.09 mg/L, was obtained using the chitosan adsorbent with a coffee grounds activated carbon mass of 1.4 grams. This implies that as the mass of coffee grounds activated carbon, in conjunction with chitosan, increases during the adsorption process, there is a corresponding enhancement in the reduction of lead concentration. This correlation can be attributed to the amplified adsorption capacity, which is directly proportional to the augmented active absorption sites on the biosorbents. The increased mass of activated carbon consequently leads to a higher potential for the removal of lead from wastewater due to the heightened availability of active sites for adsorption (Naga Babu, Reddy, Kumar, Ravindhranath, & Krishna Mohan, 2018). The results of the data analysis performed using Microsoft Excel on several isotherm equations are presented in Figures 4 to 7.
The figures above show that the appropriate isotherm model for the adsorption of lead using chitosan-coffee grounds activated carbon is the Langmuir model, because it has the highest correlation coefficient, 0.9970. This is followed by the Temkin, Freundlich, and Dubinin-Raduskevich models. The adsorption isotherm parameters are presented in Table 1.
The table above shows the calculated values of several adsorption isotherm parameters, namely the Langmuir constant of 40.204, with a maximum adsorption capacity of 1.0511 mg/g and a separation factor (RL) value of 2.3805, which means that adsorption is favorable (RL > 1) (Khalil et al., 2020). The Freundlich equation shows a Freundlich constant (KF) of 0.1869 and a biosorption intensity (1/n) of -3.8425 (< 0), which means that the adsorption reaction takes place irreversibly (Pagalan et al., 2020). Consequently, it is feasible to conclude that the significant lead adsorption on the biosorbent that has been chemically activated by phosphoric acid verifies the presence of enhanced porosity and a high specific surface. The laboratory-produced activated carbon has a strong affinity for this heavy metal (Benyekkou, Ghezzar, Abdelmalek, & Addou, 2020).
The Dubinin-Raduskevich isotherm equation shows a maximum adsorption capacity (qmD-R) of 3.7123 mg.g-1 and a biosorption potential based on Polanyi potential theory (ε) of 8x10-9 kJ/mol. The Temkin isotherm equation shows that the Temkin constant associated with the heat of biosorption is 0.0199 J/mol. In this equation, bT is referred to as the Temkin harmony constant, which is linked to the highest binding energy, while B characterizes the heat of adsorption. The Temkin constant, denoted as b, is associated with the heat of adsorption measured in kJ/mol. As per the Temkin adsorption isotherm, direct fittings were achieved by plotting qe against ln Ce at the experimental temperature (T = 298 K), as depicted in Figure 6. These linear relationships facilitate the determination of the Temkin adsorption isotherm parameters bT and B. The overall heat of adsorption diminishes with an increase in adsorption due to the interaction between lead and the adsorbent surface.
The values of the Temkin adsorption constants bT, B, and R2 are presented in Table 1. According to the data extracted from the fittings and included in Table 1, it is evident that the Temkin adsorption isotherm model aligns well in comparison to the Freundlich and Dubinin-Raduskevich isotherm models (Sultana et al., 2022). Based on the data analysis carried out, the correlation coefficients rank as Langmuir > Temkin > Freundlich > Dubinin-Raduskevich. This shows that the adsorption isotherm follows the Langmuir isotherm model with a correlation coefficient of 0.9970 and a maximum adsorption capacity of 1.0511 mg.g-1, which indicates that chemical adsorption occurs in a monolayer with a homogeneous distribution of adsorption sites, constant adsorption energy, and negligible interactions between lead metal molecules (adsorbate).
The results of the data analysis performed using Microsoft Excel on several kinetics equations are presented in Figures 8 to 11. The adsorption kinetics parameters are presented in Table 2.
The table above shows the calculated values of several adsorption kinetic parameters, namely the PFO constant of 82.217 mg.g-1 with an adsorption capacity based on weight at equilibrium of 16.816 mg.g-1. The PSO equation shows a PSO constant of 9.6813 g.mg-1.h-1 with an adsorption capacity based on weight at equilibrium of 0.0022 mg.g-1. The Elovich equation shows an Elovich constant of 76.512 mg.g-1. The intra-particle diffusion equation shows a Weber-Morris constant of 133.31 mg.g-1. By plotting Qt against t^1/2, a straight line was found, as displayed in Fig. 11, and the magnitudes of k and qe were estimated from the intercept and slope of the straight line, respectively. The calculated values of qe, C, and R2 are displayed in Table 2. The constant k was found to be 133.31 mg.g-1, which indicates that the boundary layer thickness is inversely proportional to the internal mass transfer possibility (Sultana et al., 2022). However, the probability of internal mass transfer increases with the increase of the boundary layer. The correlation coefficient (R2) is 0.9920, which reveals that the adsorption rate kinetics follow an intra-particle diffusion process. The rate constant (ki) is 76.512 g.mg-1.h-1, and the linear form of the plot indicates that the as-developed chitosan and coffee grounds adsorbent is suitable for the uptake of lead from aqueous solution. Based on the data analysis conducted, the correlation coefficient for the intra-particle diffusion kinetic model is greater than those of the PFO, Elovich, and PSO models. This suggests that the adsorption kinetics follow the intra-particle diffusion kinetics model, with a correlation coefficient of 0.9920 and a diffusion rate of 76.512 g.mg-1.h-1. This indicates that intra-particle diffusion is the rate-limiting step in the overall biosorption process and is influenced by the biosorption half-life obtained under these conditions (Park et al., 2019).
Negative ΔG° values indicate that the adsorption reaction takes place spontaneously, a ΔH° of 0.8130 indicates an endothermic reaction, and a ΔS° of 4.1888 indicates an increase in the randomness of the adsorption process at the adsorbent-lead interface during adsorption. The study of lead adsorption using the combination adsorbent of chitosan and coffee ground activated carbon showed that the adsorption isotherm follows the Langmuir isotherm model with a correlation coefficient of 0.9970 and a maximum adsorption capacity of 1.0511 mg.g-1, which indicates that chemical adsorption occurs in a monolayer with a homogeneous distribution of adsorption sites, constant adsorption energy, and negligible interactions between lead metal molecules (adsorbate). The study of lead adsorption kinetics using chitosan-coffee grounds activated carbon follows the Weber-Morris/intra-particle diffusion model with a correlation coefficient of 0.9920 and a diffusion rate of 76.512 g.mg-1.h-1, indicating that intra-particle diffusion is the rate-limiting step in the overall biosorption process. Negative ΔG° values indicate that the adsorption reaction takes place spontaneously, a ΔH° of 0.8130 indicates an endothermic reaction, and a ΔS° of 4.1888 indicates an increase in the randomness of the adsorption process at the adsorbent-lead interface during adsorption.
ACKNOWLEDGMENT
Our thanks go to DIPA DIRJENDIKTI KEMENDIKBUDRISTEK for the Fiscal Year 2021 research funding provided through research contract number 036/KP/7.NA/UPB/VII/2021, which allowed this research to be completed properly.
a) Results of FT-IR spectrum and b) SEM-EDX from biosorbent
Figure 8. Plot of ln(Qe-Qt) vs t on the PFO kinetics model
Table 1. Parameters of lead adsorption isotherm using chitosan-activated carbon of coffee grounds
Table 2. The results of rate constant investigated
Stress-Reactive Rumination, Negative Cognitive Style, and Stressors in Relationship to Depressive Symptoms in Non-Clinical Youth
The role of cognitive vulnerability in the development of depressive symptoms in youth might depend on age and gender. The current study examined cognitive vulnerability models in relationship to depressive symptoms from a developmental perspective. For that purpose, 805 youth (aged 10–18, 59.9% female) completed self-report measures. Stress-reactive rumination was strongly related to depressive symptoms. Negative cognitive style (i.e., tendency to make negative inferences) in the domains of achievement and appearance was more strongly and consistently related to depressive symptoms in girls compared to boys. Negative cognitive style in the interpersonal domain was positively related to depressive symptoms in both girls and boys, except in early adolescent girls reporting few stressors. To conclude, the cognitive vulnerability-stress interaction may be moderated by the combination of age and gender in youth, which may explain inconsistent findings so far. Current findings highlight the importance of taking into account domain specificity when examining models of depression in youth.
Introduction
Developmental models of depression in adolescence have conceptualized cognitive vulnerability within the context of a diathesis-stress account (see Hyde et al. 2008), in which cognitive vulnerability represents the diathesis. The cognitive vulnerability-stress model proposes that cognitive vulnerability factors are more likely to lead to depression in the presence of stressors. A cognitive vulnerability factor that has been hypothesised to interact with stressors in the prediction of depression is negative cognitive style (see Abramson et al. 1989). Negative cognitive style can be defined as the general tendency to make negative attributions and inferences about the causes, consequences, and implications of stressful events. More specifically, these attributions and inferences include the tendencies to view (1) the causes of negative events as global and stable, (2) negative events as having many disastrous consequences, and (3) the self as flawed and deficient after the occurrence of negative events. Stressors in youth have been defined as ''environmental events or chronic conditions that objectively threaten the physical and/or psychological health or wellbeing of individuals […]'' (Grant et al. 2003, p. 450). Stressful negative life events and daily hassles (generally taken together) have represented the stress-component in cognitive vulnerability-stress models in youth (e.g., Abela 2001; Abela and Payne 2003;. Major life events are related especially to the onset of depression (Brown and Harris 1978;Kendler et al. 2001;Kessler 1997;Monroe and Harkness 2005) whereas daily hassles predict increases in psychological symptoms (Kanner et al. 1981) and may be related to the recurrence of depression (see Monroe and Harkness 2005).
A variable closely related to negative cognitive style is stress-reactive rumination, which is defined as ''the tendency to ruminate on the negative inferences following stressful events'' (Robinson and Alloy 2003, p. 276). Alloy and colleagues (Alloy et al. 2000;Robinson and Alloy 2003) introduced the concept of stress-reactive rumination to explain the onset and duration of depression, hypothesising that the effect of negative inferences (i.e., a negative cognitive style) on depression is more detrimental when these inferences are actively rehearsed (i.e., ruminated upon). Indeed, Alloy and colleagues found that individuals who have a negative cognitive style, combined with a tendency to ruminate on negative inferences, were particularly vulnerable to develop depressive episodes (Alloy et al. 2000;Robinson and Alloy 2003). Whether stress-reactive rumination moderates the relationship between negative cognitive style and depressive symptoms has not been examined in youth to the authors' best knowledge. The examination of the potential interplay between two cognitive vulnerability factors, one reflecting negative thought content, and the other the repetition of the negative content, may contribute to knowledge on the pathogenesis of depression and on how to target cognitive vulnerability to depression in youth. Finally, stress-reactive rumination may worsen the effects of stressors on depressive symptoms in the context of a cognitive vulnerability-stress model. This hypothesis also has not yet been tested.
When testing cognitive models of depression in youth, developmental factors should be taken into account. Cognitive diatheses have been thought to become stable predictors of depressive symptoms during adolescence, when cognitive capacities are further developing and maturing (see Cole et al. 2008;Turner and Cole 1994). Recent longitudinal studies involving youth samples indicate that age might moderate the relationship between the cognitive variables and depressive symptoms (Cole et al. 2008;Turner and Cole 1994). Empirical support for cognitive vulnerability-stress models is stronger in adolescent samples compared to child samples (Abela and Hankin 2008;Joiner and Wagner 1995;Lakdawalla et al. 2007). The interaction between cognitive vulnerability and stressors may occur somewhere between the ages of 11 and 15 (see Cole et al. 2008;Hyde et al. 2008). Furthermore, Abela and Hankin (2008) have suggested that cognitive factors may be relatively independent factors in childhood, but may become more interrelated in adolescence, during which a solid combination of these factors may make an individual vulnerable to develop depressive symptoms. This may imply that stress-reactive rumination, negative cognitive style, and age interact in adolescence.
Studies so far have examined the moderating effect of age on cognitive variables, with age being indicative of the level of development or maturation. However, it may be interesting to examine another variable that may reflect the level of maturity more closely, i.e., puberty. Studies have shown that the gender difference in depression rates emerges in puberty, with girls reporting more depressive symptoms than boys (see Hankin et al. 2008). Pubertal status has been linked to the increase in depressive symptoms in girls (Angold and Costello 2006). Angold et al. (1998) found that after mid-puberty, girls had higher rates of clinical depression compared to boys. Age did not significantly moderate this relationship, which could suggest that the emergence of the gender difference in depression rates is caused by puberty-related, rather than age-related changes. The moderating role of pubertal status instead of age in the testing of cognitive vulnerability-stress models has not yet been examined to our knowledge.
Further, models explaining gender differences in depression have proposed that cognitive vulnerability factors combined with high levels of stressors may be related more strongly to depressive symptoms in girls compared to boys (see Nolen-Hoeksema and Girgus 1994). Empirical support for the moderating role of gender is mixed. Prospective studies in child and early adolescent samples (range of mean ages of the samples: 8.9-12.9) have shown that cognitive vulnerability moderates the effects of stressors on depressive symptoms only in girls (Abela and McGirr 2007;partial support in Abela 2001), whereas other studies involving adolescents (range of mean ages: 11.9-18.1) have found support for a cognitive vulnerability-stress model only in boys Morris et al. 2008;Stone et al. 2010). In sum, findings indicate that the moderating roles of both age and gender, as well as their potential interplay, should be included in the examination of cognitive models of depression in youth. Finally, researchers (Hyde et al. 2008;Mezulis et al. 2002;Mezulis and Funasaki 2009) have argued that domain specificity of vulnerability factors should be taken into account when examining models of depression. Findings show that women have a stronger tendency to ruminate on stressors related to physical appearance and interpersonal problems than men (Mezulis et al. 2002), and may be more likely to develop negative cognitive styles in the domains of interpersonal relationships and physical appearance. How domain specificity of cognitive vulnerability factors is related to depressive symptoms in adolescence has not been examined yet from a developmental viewpoint.
The Current Study
This study aimed to examine three cognitive vulnerability models for depressive symptoms in non-clinical youth from a developmental viewpoint. First, it was hypothesized that stress-reactive rumination would moderate (i.e., exacerbate) the relationship between negative cognitive style and depressive symptoms (Model 1). Second, stressreactive rumination was hypothesized to moderate (i.e., exacerbate) the relationship between stressors and depressive symptoms (Model 2). Third, it was hypothesized that negative cognitive style would moderate (i.e., exacerbate) the relationship between stressors and depressive symptoms (Model 3). Regarding domain specificity, it was explored whether different results would be obtained when examining specific domains of negative cognitive style instead of the aggregate score for negative cognitive style.
Age and gender were taken into account as potential moderators in the examination of these three cognitive models. More specifically, it was expected that cognitive vulnerability factors (negative cognitive style/stress-reactive rumination) and stressors would worsen each other's relationship with depressive symptoms more strongly as age increases. Furthermore, as evidence regarding the moderating role of gender is mixed (i.e., some studies show a significant interaction between cognitive vulnerability and stressors only in girls and other studies only in boys) the moderating role of gender was explored in combination with the moderating role of age. Further, it was examined whether pubertal status would be a more sensitive moderator in these models compared to age.
Participants and Procedure
Participants were recruited at 35 primary and 6 secondary schools in the southern regions of The Netherlands. Principals of schools were approached and informed about the purpose of the study. When given permission to recruit at their school, the researchers came into the classrooms during regular class and held a 10-min talk in front of all pupils. In this talk, the purpose of this study was explained and informed consent forms were handed out and returned 2 weeks later. On average, 25% of the children who were approached agreed to participate. We obtained written informed consent from all parents and from all children aged 12 and above, in accordance with formal regulations. A total number of 805 participants completed the questionnaires. Some had more than 10% missing values on one of the measures and were therefore excluded from that measure. As a consequence, sample size ranged between 751 and 805 across the various analyses.
The mean age of the sample was 12.4 years (SD = 1.9; age range 10-18); 59.9% was female. Age at baseline was skewed towards the younger ages (%boys/%girls): 15/17% was age 10, 25/22% was age 11, 23/17% was age 12, 15/15% was age 13, 12/13% was age 14, 7/9% was age 15, and 4/6% was age 16-18. About half of the participants (47.8%, N = 385) received secondary education, of which 38.2% (N = 147) were in pre-university education, 37.1% (N = 143) in higher general secondary education, and 24.7% (N = 95) in lower professional secondary education. Ethnicity was not reported, but considering the ethnic constellation of the southern regions of the Netherlands, it is acceptable to assume that about 95% of the sample were Caucasian. Participants completed a battery of questionnaires at home. They did not receive compensation for their participation. Little information is available on how the study's participants differed from those who did not participate. The proportion of the sample that exhibited clinically significant levels of depressive symptoms was 12.9% (CDI cut-off score ≥ 16; see Timbremont et al. 2004). The research protocol was approved by a local Institutional Review Board.
Depressive Symptoms
The Children's Depression Inventory (CDI; Kovacs 1981; Dutch/Flemish version: Braet 2001, 2002) is based on the Beck Depression Inventory for adults. The CDI is a widely used self-report questionnaire which aims to measure the level of depressive symptoms in children. For each of the 27 items three statements are given, of which the subject has to choose one (e.g., ''I am sad sometimes/I am often sad/I am always sad'') that represents best how he or she has been feeling the last 2 weeks. Reliability in terms of internal consistency is good and the convergent validity of the CDI is supported (Timbremont and Braet 2001).
Stress-Reactive Rumination (from Here Referred to as ''SR-Rumination'')
The Dutch version of the Stress-Reactive Rumination Scale for Children (SRRS-C) is a downward extension of the SRRS developed for adults (Robinson 1997;Robinson and Alloy 2003). The SRRS-C was translated into Dutch, subsequently back-translated by a native English speaker, and then approved by the original authors. The SRRS-C aims to measure the frequency of negative thoughts about negative inferences following stressful events (e.g., ''I think about how the stressful event was totally my fault''). The SRRS-C consists of nine items which are scored on a four-point Likert-type scale (i.e., 1 = almost never, 2 = sometimes, 3 = often, 4 = almost all the time). Reliability (α = .82) and concurrent criterion validity of the SRRS-C are adequate to good; furthermore, SR-rumination can meaningfully be distinguished from emotion-focused rumination and worry (Rood et al. 2010).
Negative Cognitive Style (from Here Referred to as ''NCS'')

The Adolescent Cognitive Styles Questionnaire (ACSQ; Hankin and Abramson 2002) measures inferential styles in response to negative events. The original ACSQ consists of 12 hypothetical negative event scenarios covering the domains of academic/scholar achievements and interpersonal relations. In the current study, we used a version which also contains a third domain particularly relevant for adolescence, i.e. ''physical appearance'' (4 items). Examples of hypothetical event scenarios in the different domains are: ''You want to go to a big party, but nobody invites you'' (interpersonal), ''Someone says something bad about how you look'' (appearance), and ''You take a test and get a bad grade'' (achievement). Each hypothetical event scenario is accompanied by five questions, measuring internal/external attribution of the cause, inferences about stability and globality of the cause, and inferences about consequences and self-worth, rated on a seven-point scale. An aggregate score can be computed by summing up scores on all scales, with high scores defining a high NCS. The psychometric properties (reliability, test-retest reliability, and construct validity) of the ACSQ are supported (Hankin and Abramson 2002).
Stressors
The Children's Life Events Scale (CLES; as described in Abela and Véronneau-McArdle 2002) is composed of two questionnaires. The first 37 items are derived from the Children's Hassles Scale (Kanner et al. 1987) and describe daily hassles (e.g. ''You had to clean up your room''). Responses are rated on a four-point scale, with 0 = ''when it didn't happen''; 1 = ''when it occasionally happened''; 2 = ''when it often happened''; and 3 = ''when it happened all the time''. The other 22 items, taken from the Coddington Life Stress Scale (Coddington 1972), describe relatively serious life events (e.g., ''Your mother or father lost her/his job''). One can answer ''yes'' or ''no'' dependent on whether the life event occurred the past year. For the current study the two scales were collapsed into one single scale labeled ''stressors'', which is consistent with previous studies (e.g., Abela and Sarin 2002). For that purpose, the daily hassles items were dichotomized, with original scores 1, 2 and 3 recoded in 1 (''it happened''), and original score 0 remaining 0 (''it didn't happen'').
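The recoding described above (dichotomizing the hassle items and collapsing the two scales into a single stressor count) can be sketched as follows; the item values are hypothetical and far fewer than the 37 + 22 items of the real scales:

```python
# Hypothetical sketch of the "stressors" scale construction: hassle
# responses 1/2/3 are collapsed to 1 ("it happened"), 0 stays 0, and the
# result is combined with the yes/no life-event items into one count.
hassles = [0, 1, 3, 2, 0, 1]     # 37 items in the real scale; 6 shown
life_events = [1, 0, 0, 1]       # 22 yes/no items; 4 shown

hassles_dichotomized = [1 if score > 0 else 0 for score in hassles]
stressors = sum(hassles_dichotomized) + sum(life_events)
print(stressors)                 # total number of stressors reported
```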
Pubertal Status
The Physical Development Scale (PDS; Petersen et al. 1988) is a self-report questionnaire measuring perceived pubertal status. The PDS consists of multiple choice questions regarding growth spurt, skin changes, and pubic hair. An example of an item is: ''Have you noticed any skin changes?''. The items have four answering options, ranging from ''1 = not yet…'' to ''4 = seems completed''. A high total score on the questionnaire indicates a high pubertal status. The version for girls also includes items on the menarche and breast growth, while the version for boys includes items on voice changes and facial hair growth. The PDS is acceptably reliable in terms of internal consistency; validity, however, needs further investigation (see for reviews Coleman and Coleman 2002;Schmitz et al. 2004). Studies have shown that youth are capable of making a rough estimation of their pubertal status (Bond et al. 2006;Coleman and Coleman 2002;Petersen et al. 1988;Schmitz et al. 2004). It should be emphasized that self-perception of pubertal status is measured rather than actual pubertal status (Dorn et al. 2006). The original (English) version was translated into Dutch for this study.
Statistical Analysis
The data were analysed using SPSS version 18.0. For individuals with less than 10% missing values on a single self-report measure, a regression technique was used to impute the missing values by estimating the value on the basis of the scores of that individual on the remaining items, as well as on the scores of others on the item for which a value was missing. Cases with more than 10% missing values on one of the measures were excluded from that specific measure. Binary logistic regression analyses were performed to check whether scores on questionnaires were missing at random or missing not at random (i.e., whether missing scores could be explained by the independent and dependent variables). Missing ACSQ total scores were significantly predicted by age (Exp b = 1.20, p = .04), indicating that the older the participant, the more likely the ACSQ was completed. The ACSQ was the last questionnaire in the battery and therefore may not have been completed by some of the younger participants due to tiredness or boredom.
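A minimal sketch of regression imputation in the spirit described above, assuming the missing item is predicted from the respondent's remaining items using the complete cases (this is an illustration with synthetic data, not the authors' exact SPSS procedure):

```python
# Illustrative regression imputation for a single missing item score.
import numpy as np

rng = np.random.default_rng(0)
n_items = 5
scores = rng.integers(1, 5, size=(50, n_items)).astype(float)
scores[0, 2] = np.nan                      # respondent 0 is missing item 2

# fit: regress item 2 on the other items, using complete cases only
complete = scores[~np.isnan(scores).any(axis=1)]
X = np.column_stack([np.ones(len(complete)),
                     np.delete(complete, 2, axis=1)])
y = complete[:, 2]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# predict: the respondent's remaining items give the imputed value
x0 = np.concatenate([[1.0], np.delete(scores[0], 2)])
scores[0, 2] = x0 @ beta
print(round(float(scores[0, 2]), 2))
```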
Because the sample was nested within schools, the intraclass correlation was checked in order to determine whether intra-unit dependency needed to be controlled for. The ICC indicated low homogeneity of depressive symptoms within schools (ICC = .02), which justified regular regression analyses. Before carrying out the analyses, assumptions were checked. The total scores on the CDI were not normally distributed and therefore underwent a square root transformation, resulting in skewness and kurtosis values between -1.0 and +1.0 (for all variables skewness range: -.01 to .90; kurtosis range: -1.10 to .50). Examination of plots of the standardized residuals against the standardized predicted values, partial plots, and normal probability plots of the residuals for each regression model indicated no violations of the assumptions of homogeneity of variances, homoscedasticity, and linearity. All variables were standardized prior to creating interactions.
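The square-root transformation check described above can be sketched as follows, with a synthetic right-skewed score distribution standing in for the CDI totals:

```python
# Hypothetical sketch: a positively skewed distribution is square-root
# transformed and sample skewness is checked against the -1.0/+1.0 band.
import numpy as np

rng = np.random.default_rng(2)
cdi = rng.exponential(scale=6.0, size=800)   # synthetic, right-skewed scores

def skewness(x):
    d = x - x.mean()
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

transformed = np.sqrt(cdi)
print(f"skewness before: {skewness(cdi):.2f}, after: {skewness(transformed):.2f}")
```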
We carried out normal regression analyses with depressive symptoms as dependent variable. The models were tested separately from each other, following a top-down procedure starting with a full model (i.e., the four-way interaction) and subsequently eliminating interactions that were not significant. The starting models were as follows: (1) four-way interaction between NCS, SR-rumination, age, and gender; (2) four-way interaction between SR-rumination, stressors, age, and gender; and (3) four-way interaction between NCS, stressors, age, and gender. Domain specificity of NCS was examined by re-running the analyses for Models 1 and 3, with NCS in the domains of scholar achievement, interpersonal relations, and physical appearance separately (from here referred to as ''NCS-achievement'', ''NCS-interpersonal'', ''NCS-appearance''), instead of the aggregate score for NCS. Finally, we tested alternative models repeating the same series of analyses with pubertal status instead of age.
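A schematic sketch of the top-down procedure above, on synthetic data: the model including the four-way product of standardized predictors is fitted, and the highest-order term is inspected and pruned when non-significant. For brevity, the two- and three-way terms that a full factorial model would also include are omitted here:

```python
# Illustrative top-down test of a four-way interaction (synthetic data,
# variable names hypothetical). No true four-way effect is simulated,
# so the term's t-statistic should be small and the term pruned.
import numpy as np

rng = np.random.default_rng(1)
n = 400
ncs, srr, age = (rng.standard_normal(n) for _ in range(3))
gender = rng.integers(0, 2, n).astype(float)
dep = 0.4 * ncs + 0.3 * srr + rng.standard_normal(n)

def standardize(v):
    return (v - v.mean()) / v.std()

z = [standardize(v) for v in (ncs, srr, age, gender)]
four_way = z[0] * z[1] * z[2] * z[3]

# design matrix: intercept, main effects, four-way product term
X = np.column_stack([np.ones(n), *z, four_way])
beta, *_ = np.linalg.lstsq(X, dep, rcond=None)
resid = dep - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_four_way = beta[-1] / se[-1]
print(f"t for the four-way term: {t_four_way:.2f}")  # small -> prune it
```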
General Findings
Descriptive statistics for the total sample as well as for boys and girls separately are presented in Table 1, together with the reliability coefficients of all measures. All questionnaires showed good reliability in terms of internal consistency. Girls scored higher on SR-rumination and pubertal status compared to boys. The ACSQ subscales stability, globality, consequences, and self-worth were substantially related to depressive symptoms (r = .44-.51), whereas the internality scale correlated weakly with depressive symptoms (r = .17). All ACSQ subscales were highly interrelated (r = .42-.86). For the internality scale, the correlations with depressive symptoms and the other ACSQ subscales were substantially lower than for the other subscales (p < .001). Therefore, the internality dimension was not included in the composite scale of the ACSQ. SR-rumination, NCS, and stressors were all strongly associated with depressive symptoms, while age and pubertal status were modestly related to depressive symptoms (see Table 2).
Model 1: NCS Moderated by SR-Rumination
The interaction between NCS and SR-rumination was not significant, nor did age and gender moderate the relationships between the variables (independently and in interaction with each other) and depressive symptoms. Only the main effects of SR-rumination and NCS were significant, indicating that both variables are related to depressive symptoms independently of each other. Age and sex were not significantly related to depressive symptoms when controlling for NCS and SR-rumination. The final (reduced) model is displayed in Table 3. The analyses with NCS per domain yielded almost identical findings, i.e., significant main effects were found for NCS per domain and SR-rumination. Results are therefore not reported.
Model 2: Stressors Moderated by SR-Rumination
The interaction between stressors and SR-rumination was not significant, nor did age and gender moderate the relationships between the variables and depressive symptoms. Only the main effects of SR-rumination and stressors were significant, indicating that both variables were independently related to depressive symptoms. The final model is displayed in Table 3.
Model 3: Stressors Moderated by NCS
The four-way interaction between NCS, stressors, gender, and age was significant (b = -.12, p = .02), see Table 3. The four-way interaction was examined more closely by splitting the data on high/low age (mean ± 1 SD) and on gender. The interaction term between NCS and stressors was significant only in middle to late adolescent boys (b = .33, p = .03). NCS related to depressive symptoms at the level of a trend in middle to late adolescent boys reporting many (mean +1 SD) stressors (b = .81, p = .10), whereas this relationship was not significant in middle to late adolescent boys reporting few (mean -1 SD) stressors (b = .30, p = .37). In early adolescent boys, stressors were significantly associated with depressive symptoms (b = .59, p = .001), whereas NCS was not (b = .11, p = .39). In girls, NCS (b = .41, p = .001) and stressors (b = .39, p = .001) were independently related to depressive symptoms, meaning that the strength of the relationship between one variable and depressive symptoms is not conditional on the other variable.
Regarding the domain specificity of NCS, results showed a significant four-way interaction between stressors, NCS-achievement, age, and gender (b = -.11, p = .02) in a similar way as with the aggregate NCS: NCS-achievement and depressive symptoms were significantly related in middle to late adolescent boys reporting many stressors (b = .99, p = .001), but not in those reporting few stressors (b = .02, p = .95). In early adolescent boys, stressors (b = .59, p < .001) were significantly related to depressive symptoms, whereas NCS-achievement was not (b = .13, p = .31). In girls, NCS-achievement (b = .35, p < .001) and stressors (b = .41, p < .001) were related to depressive symptoms, independently of each other.
The four-way interaction between NCS-interpersonal, stressors, age, and gender was significant (b = -.10, p = .04). The interaction was split on gender and next on high/low (mean ± 1 SD) stressors. The interaction between NCS-interpersonal and age was significant for girls reporting few stressors (b = .36, p = .02), indicating that NCS-interpersonal and depressive symptoms were positively related in middle to late adolescent girls reporting few stressors (b = .59, p = .30), and negatively related in early adolescent girls reporting few stressors (b = -.30, p = .47). NCS-interpersonal and depressive symptoms were significantly related in girls reporting many stressors (b = .49, p < .001). NCS-interpersonal (b = .26, p < .001) and stressors (b = .47, p < .001) were independently related to depressive symptoms in boys.
Pubertal Status Versus Age
The analyses with pubertal status instead of age yielded different results with regard to the main models. The four-way interaction between NCS, stressors, pubertal status, and gender approached significance (b = -.10, p = .08), indicating that the interaction between NCS and stressors was only significant in boys who perceived their pubertal status as high (b = .42, p = .007). NCS was more strongly related to depressive symptoms in boys with high pubertal status who reported many (mean +1 SD) stressors (b = .73, p = .16) compared to boys with high pubertal status reporting few stressors (b = .21, p = .74). In boys who reported low pubertal status, both NCS (b = .48, p = .001) and stressors (b = .27, p = .002) were significantly associated with depressive symptoms. The relationship between NCS and depressive symptoms was moderated by pubertal status in both Model 1 (b = .07, p = .01) and Model 3 (b = .07, p = .01), indicating that NCS was more strongly related in participants reporting high pubertal status (mean +1 SD; b = .68, p = .001) compared to participants reporting low pubertal status (mean -1 SD; b = .47, p = .001). Next to SR-rumination and stressors, pubertal status was modestly related to depressive symptoms (b = .06, p = .03), indicating that participants reported more depressive symptoms as they perceived their pubertal status as higher.
Discussion
Stress-reactive rumination in combination with negative cognitive style may predict onset of depression in adults (Robinson and Alloy 2003). Not much is known about whether these two cognitive vulnerability factors interact in relationship to depressive symptoms in youth. Furthermore, research has shown that cognitive vulnerability-stress interactions in relationship to depressive symptoms emerge somewhere between the ages of 11-15 (Hyde et al. 2008). Studies suggest that the interaction between cognitive vulnerability and stressors may function differently in girls and boys during adolescence; however, evidence is inconsistent and may point to moderation by a combination of age and gender. This study aimed to examine three cognitive vulnerability models for depressive symptoms in non-clinical youth from a developmental viewpoint. The first model proposes that stress-reactive rumination moderates the relationship between negative cognitive style and depressive symptoms; the second model hypothesizes that stress-reactive rumination moderates the relationship between stressors and depressive symptoms; and the third model hypothesizes that negative cognitive style moderates the relationship between stressors and depressive symptoms. The potentially moderating effects of age, pubertal status, and gender were examined in all models. Domain specificity of negative cognitive style was explored. Stress-reactive rumination (''SR-rumination'') was related to depressive symptoms, independently of negative cognitive style (''NCS'') or stressors, of which main effects were significant in both boys and girls. Second, NCS and stressors were both related to depressive symptoms in girls, independently of each other. The relationship between NCS and depressive symptoms approached the level of significance in middle to late adolescent boys, but only in the presence of many stressors, supporting a cognitive vulnerability-stress model in middle to late adolescent boys. However, the examination of domain specificity of NCS yielded different results: NCS in the appearance domain was more strongly related to depressive symptoms in girls compared to boys, indicating that negative attributions and inferences about appearance may be associated with depressive symptoms in girls particularly.
Furthermore, NCS in the interpersonal domain was related to depressive symptoms in boys and girls, except in early adolescent girls reporting few stressors, thus supporting a cognitive vulnerability-stress model in early adolescent girls.
With regard to the extension of the model of Robinson and Alloy (2003) to a youth sample, findings showed that NCS and SR-rumination accounted for a significant portion of the variance in depressive symptoms independently of each other. Our findings are thus not in line with Robinson and Alloy (2003) and Alloy et al. (2000), who found that SR-rumination worsens the effects of NCS on depression in adults. NCS and SR-rumination may not yet interact in youth because rumination has not stabilised yet in middle adolescence (Hankin 2008).
The finding that SR-rumination did not moderate the relationship between stressors and depressive symptoms is inconsistent with earlier studies demonstrating moderation of stressors by general forms of rumination (Kraaij et al. 2003;Skitch and Abela 2008). An explanation might be that SR-rumination is specifically focused on negative inferences and attributions and as such, is hypothesized to worsen the effect of NCS rather than the effect of stressors. One may argue that SR-rumination does not worsen the relationship between stressors and depressive symptoms in participants who do not have a high NCS, suggesting a three-way interaction between these variables. Therefore, the three-way interaction between NCS, SR-rumination, and stressors was tested post-hoc; it was not significant. More research is needed to examine the possible interaction between NCS, SR-rumination, and stressors, for example in prospective high-risk designs.
Current findings provide support for a cognitive vulnerability-stress model (indicating that the aggregate NCS was only related to depressive symptoms in combination with many stressors) in middle to late adolescent boys, but not in girls and early adolescent boys. These findings are partially consistent with Cole et al. (2008) and Turner and Cole (1994) regarding the moderating role of age. However, present findings show that the moderating role of age only appeared in boys (and thus depended on gender). These findings are partially consistent with studies supporting the cognitive vulnerability-stress model for boys only Stone et al. 2010). It is important to note that the prospective studies of Hankin et al. and Stone et al. were conducted with middle to late adolescents, whereas other studies that found support for the interaction only in girls examined younger samples (e.g., Abela and McGirr 2007). Thus, current findings suggest that inconsistent results regarding the cognitive vulnerability-stress model in youth so far may be due to the moderating role of gender being dependent on age.
Current findings imply that middle to late adolescent boys with a high NCS may only be vulnerable to developing depressive symptoms when experiencing many stressors, whereas girls with a high NCS may be vulnerable to depressive symptoms even without experiencing stressors. This might explain why girls after the age of 13 are more vulnerable to developing depressive symptoms compared to boys (see Kessler 2003; Kuehner 2003). Current findings regarding domain specificity also suggest that the moderating roles of age and gender depend on which domain of cognitive vulnerability is examined in interaction with stressors, supporting the plea of Hyde et al. (2008) and Mezulis et al. (2002) for the importance of examining domain specificity of cognitive vulnerability factors in developmental models of depression. To conclude, the cognitive vulnerability-stress interaction may be moderated by the combination of age and gender in youth, which may explain inconsistent findings so far.
Age Versus Pubertal Status
When controlling for SR-rumination and stressors, age was not significantly associated with depressive symptoms, whereas pubertal status was, indicating that depressive symptoms increase as pubertal status increases. The two-way interaction of age (or pubertal status) by gender (included in all models under test) was not significant, whereas it would be expected that girls report more depressive symptoms than boys as level of maturation (age/pubertal status) increases. Results showed that although the four-way interaction between NCS, stressors, gender, and age was significant while the four-way interaction with pubertal status was only marginally significant, the interpretation of these interactions was largely similar, i.e., NCS and depressive symptoms were significantly related only in the presence of many stressors in middle to late adolescent boys (or in boys reporting a high pubertal status).
In the model with SR-rumination, the relationship between NCS and depressive symptoms was stronger in adolescents who perceived their pubertal status as high, whereas age did not moderate this relationship. Perceived pubertal status, reflecting the subjective experience of morphological changes related to puberty (Angold and Costello 2006), may be a more sensitive moderator of NCS than age. However, contrary to age, how pubertal status is perceived and reported may also be influenced by depressive symptoms. When examining models of depression from a developmental perspective, age may be preferred over pubertal status, as age is a less complex variable. However, the current results do not seem to rule out that pubertal status may have additional value in examining cognitive models in youth.
Strengths and Limitations
This study has notable strengths that concern the large sample size, the wide age range, and the introduction of SR-rumination. The large sample size allows testing higher-order interactions, and thus testing models from a developmental perspective by including the potentially moderating roles of age and gender. The age range of the study sample captures the transition from childhood to adolescence and covers all phases of pubertal development. Theoretically, the introduction of SR-rumination is novel and contributes to existing research on cognitive vulnerability in youth. Moreover, the moderating role of SR-rumination was examined in two models. Furthermore, the inclusion of pubertal status as an alternative to age is explored. Finally, examining domain specificity of NCS in youth is a new important avenue of research which can shed more light on the development of the gender difference in depressive symptoms.
The most important limitations of the current study concern the reliance on self-report measures, the cross-sectional design, and the representativeness of the sample. A problem with the Coddington Life Events Scale (and the Daily Hassles subscale in particular) may be that this self-report measure may reflect the self-perceived experience of stressors rather than actually experienced stressors. However, Wagner et al. (2006) demonstrated that ratings on the CLES (assessing stressful life events and daily hassles) did not differ from an objectively rated interview assessing stressful life events in terms of over-reporting as a function of depression. Another limitation is the cross-sectional design, which merely allows drawing conclusions on associations between variables. Moreover, problematic issues such as construct overlap and shared method variance cannot be adequately handled. Furthermore, the low consent rates may have introduced a certain bias in the current sample, limiting the extent to which current results can be generalised to the Dutch youth population and to clinically depressed youth. Therefore, future research should focus on examining these relationships in representative community samples and in clinically depressed youth.
Clinical and Theoretical Implications
The findings from the current study may have some implications for future research and clinical practice. For future research, it would be interesting to investigate the developmental nature of the models using longitudinal designs, taking into account domain specificity of vulnerability factors. Prospective low/high-risk designs and experimental research can shed more light on causal relationships between stressors, SR-rumination, NCS, and depressive symptoms. With respect to clinical implications, we recommend that psychological treatment of depressive symptoms in youth should target ruminative thinking and focus on altering NCS, both of which can be emphasized in cognitive therapy, and improve problem-solving or coping with stressors, which is targeted in behavioral activation therapy (Dimidjian et al. 2006). An interesting new approach to the treatment of depressive symptoms is mindfulness-based therapy (see Segal et al. 2002), which helps in dealing with ruminative thinking and NCS. There is evidence that mindfulness techniques incorporated into dialectical behavior therapy are helpful in decreasing suicidality and depressed mood in depressed adolescents (Miller et al. 2007).
Final Conclusion
Stress-reactive rumination was strongly related to depressive symptoms. The strength of this relationship was similar for boys and girls, and did not differ as a function of age. Stress-reactive rumination did not moderate the effects of negative cognitive style, nor the effects of stressors in the association with depressive symptoms. Stress-reactive rumination and negative cognitive style may not interact in youth as cognitive vulnerability factors may not have stabilised yet. Negative cognitive style in the domains of achievement and appearance was more strongly and consistently related to depressive symptoms in girls compared to boys, independently of stressors. Negative cognitive style in the interpersonal domain was related to depressive symptoms in both girls and boys, except for early adolescent girls reporting few stressors, thus supporting a diathesis-stress pattern only in early adolescent girls. Negative cognitive style in the achievement domain was only significantly related to depressive symptoms in middle to late adolescent boys reporting many stressors, thus supporting a diathesis-stress pattern only in older boys. Moderation by pubertal status instead of age yielded slightly different results, that is, in the model with stress-reactive rumination, the relationship between negative cognitive style and depressive symptoms was stronger in adolescents who perceived their pubertal status as high, whereas age did not moderate this relationship. Current findings highlight the importance of taking into account domain specificity of vulnerability factors in the examination of developmental models of depression in youth.
Transcript isoform sequencing reveals widespread promoter-proximal transcriptional termination
Higher organisms achieve optimal gene expression by tightly regulating the transcriptional activity of RNA Polymerase II (RNAPII) along DNA sequences of genes [1]. RNAPII density across genomes is typically highest where two key choices for transcription occur: near transcription start sites (TSSs) and polyadenylation sites (PASs) at the beginning and end of genes, respectively [2,3]. Alternative TSSs and PASs amplify the number of transcript isoforms from genes [4], but how alternative TSSs connect to variable PASs is not resolved by common transcriptomics methods. Here, we define TSS/PAS pairs for individual transcripts in Arabidopsis thaliana using an improved Transcript Isoform sequencing (TIF-seq) protocol and find on average over four different isoforms corresponding to variable TSS/PAS pairs per expressed gene. While intragenic initiation represents a large source of regulated isoform diversity, we discover that ~14% of expressed genes generate relatively unstable short promoter-proximal RNAs (sppRNAs) from nascent transcript cleavage and polyadenylation shortly after initiation. The location of sppRNAs coincides with increased RNAPII density, indicating that these large pools of promoter-stalled RNAPII across genomes are often engaged in transcriptional termination. RNAPII elongation factors progress transcription beyond sites of sppRNA formation, demonstrating that RNAPII density near promoters represents a checkpoint for early transcriptional termination that governs full-length gene isoform expression.
The results presented here make important contributions to our understanding of transcriptional control. In particular, the presence of sppRNAs draws attention to the importance of elongation factors in assisting Pol II in the early phases of transcriptional elongation. The main weakness of the manuscript is that the data is fairly descriptive. It is not clear how sppRNAs might contribute to gene regulation in the normal life cycle or whether sppRNAs are present in other plants or animals. Additional experiments in either of these veins would increase the significance and impact of the work.
Regarding the writing, the manuscript would be improved by taking advantage of the longer page limits afforded by this journal. A clear introduction and a longer discussion of the significance of the results would be very helpful.
Reviewer #2 (Remarks to the Author): The manuscript by Ard et al performs TIF-seq (transcript isoform sequencing, a high throughput sequencing technique that captures both 5' and 3' end of polyadenylated RNA molecules) in whole Arabidopsis seedlings, comparing wild type to knockout effects of several genes involved in transcriptional elongation, RNAPII stalling and pre-mRNA cleavage. The main finding is that there is a genome-wide occurrence of unstable short promoter-proximal RNAs (sppRNA), and that the knockouts of different genes involved in the aforementioned functions result in either the increase or decrease of their level relative to the mRNA expression.
The main findings are of interest to the broader community doing research on promoter function and transcriptional elongation. There are a number of issues, though:

- The paper is written in a condensed letter format, which suggests that it was originally meant for a different journal with much more severe space restrictions than Nature Communications. In this case, brevity is not helping. The literature review of current knowledge as well as the discussion of the implications of the results are rudimentary, and some of the results central to the flow of the paper refer exclusively to supplementary figures. I suggest converting it into a full-length paper to make the readers' orientation easier, and to provide a proper context for the reported results. (Some concrete suggestions follow.)

- The paper _needs_ a proper introduction to what is already known about transcript heterogeneity at 5' and 3' ends, both in plants and the parallels with Metazoan genomes. Heterogeneity at the 5' ends is practically universal in Metazoan genomes, and multiple polyadenylation signals are common. As the authors remark in passing, early polyadenylation signals play a central role in early termination of antisense transcripts in promoter architectures with bidirectional initiation but a functional transcript in only one of the two directions.
- Further, it is well known that promoter-proximal pausing and dispersed transcription initiation positions within a promoter are related to specific (not all) promoter architectures (typically TATA-less, broadly expressed, and some developmentally regulated promoters). It is a missed opportunity, and not a difficult one to explore, to investigate which promoter elements might correlate with more or less sppRNA production as well as the number of isoforms. If TIF-seq does not have enough coverage for good single-nucleotide resolution at 5' ends of genes, TSS-seq could be used to determine dominant TSS positions more precisely.
- Since many sppRNAs are associated with highly expressed genes, at least some of which are ribosomal protein genes and other components of transcriptional machinery, it would be especially interesting to see if they have a separate promoter architecture like they do in Metazoa, including the TCT initiator (for details see, e.g., the review by Kadonaga, WIREs Dev Biol 2012).
- Do multiple isoforms result in changes in the first splice site of the transcript? Do sppRNAs prefer transcripts with longer or shorter first exons?

Minor:

- Nonsense-mediated decay is a mechanism by which many aberrant transcripts are removed in metazoan transcriptomes. Any hints of its role in sppRNA degradation?
Reviewer #3 (Remarks to the Author): In this manuscript Ard and colleagues analyse the genomic distribution and abundance of RNA polymerase II (Pol II) initiation and termination events in Arabidopsis thaliana. They report evidence for unstable short promoter-proximal RNAs (sppRNAs) at ~14% of expressed genes, and an average of four different transcript isoforms per gene in wild-type (WT) plants. These results were enabled by an improved Transcript Isoform Sequencing (TIF-seq) protocol based on a coauthor's published method (Pelechano et al. Nature 2013;Pelechano et al. Nat Protoc. 2014), which sequences cDNA tags from matching transcription start sites (TSSs) and polyadenylation sites (PASs) of Pol II transcripts. The authors' global analysis of Arabidopsis transcript TSS/PAS pairs, using mutants known to be defective in transcriptional regulation or transcript degradation, offers some new insights that will be of interest to molecular biologists in this field. However, the study has weaknesses that should be remedied prior to consideration for publication.
Performing TIF-seq on the hen2-2 mutant, which is defective for nuclear exosome activity (Lange et al. 2014 PLoS Genet), Ard and colleagues detect evidence for transcripts initiating at the annotated TSS but terminating <100 nt downstream, on average. These sppRNAs were putatively confirmed at the MPK20 gene when a <500 nt smear was detected via northern blot using a single probe (Fig. 2(e)). This seems to substantiate TIF-seq data shown in Fig. 2(d), but to confirm the size-range and gene position of such sppRNAs further experiments are needed (Major concern #1).
Much of the remaining work in this study is of high technical quality, but the manuscript's clarity suffers from the relegation of clear examples to supplemental figures, with more obscure data displays shown as primary figures. For instance, the AT5G51200 ( Fig. S4(g)) and AT4G15260 ( Fig. S4(h)) loci illustrate the authors' point that the FACT complex represses alternative TSSs, whereas the scatterplot of primary Fig. 1(f) requires careful study and a detailed reading of the Methods to interpret (Major concern #2).
Finally, the authors' Page 5 statement that, "most genes with sppRNA show no evidence for gene regulation by selective termination and an equivalent fraction of mRNA without sppRNA are cold-induced" is an accurate summary of the authors' data: the function of such sppRNAs in gene regulation, if any, remains quite enigmatic. The authors should avoid masking this point with speculative conclusions (Major concern #3).
Major concerns:

1) Page 4 and Fig. 2(e): Indistinct smears are frequently detected in northern blots due to unequal loading or other artifacts of RNA preparation. Furthermore, the small expected size of sppRNAs (median 93 nt) means that >50% of the RNAs are shorter than can be resolved via formaldehyde-agarose electrophoresis (the technique used here). I suggest that the authors reproduce their northern result using polyacrylamide gel electrophoresis (PAGE) with appropriate RNA size standards in order to confirm the size range of sppRNAs. For both the formaldehyde-agarose and PAGE northern blots an additional probe could be hybridized to detect MPK20 mRNAs via a 3' region not overlapping sppRNAs. If the authors' hypothesis is correct, then this second probe should detect full-length MPK20 isoforms in WT and hen2-2 samples but not the putative sppRNAs in hen2-2.
2) Page 3 and Fig. 1: I recommend that supplemental panels Fig. S4(g) and Fig. S4(h) be included in primary Fig. 1, because these clearly illustrate FACT suppression of intragenic Pol II initiation. Conversely, the Fig. 1f scatterplot should be revised because the underlying data and analyses are unclear: precisely how were the two fact mutants (spt16-1 and ssrp1-2) analysed? Did these two mutants differ? Were replicate experiments conducted? How were WT/mutant comparisons handled? How was the threshold for inclusion in the scatterplot chosen? Were any statistical analyses performed? This information should be in the results and legend (not buried in Methods), because it is essential for readers to interpret the figure.
3) Page 5: The authors do not present data supporting the speculative statement that concludes this paragraph: "…promoter-proximal termination is associated with plant gene expression across temperature and may contribute to temperature-dependent gene regulation." The first half of the sentence refers to sppRNAs being co-expressed with mRNAs at cold-induced genes (a simple correlation), but the second half contradicts the overall TIF-seq analysis as presented and summarized by the authors in this same paragraph.
Minor points/corrections:

1) Abstract: "… how alternative TSSs connect to variable PASs is unresolved from common transcriptomics methods," would better read, "… how alternative TSSs connect to variable PASs is not resolved by common transcriptomics methods."

2) Page 2: To illustrate functionally distinct mRNA isoforms, I suggest that the authors cite the N-terminal nuclear localisation signal of Dicer-like 4 (DCL4) in A. thaliana. Alternative TSS selection that depends on promoter DNA methylation allows different isoforms to be expressed from the single DCL4 gene (Pumplin et al. 2016 Plant Cell).
The text of the comments we received from the reviewers is marked in blue; our responses are marked in black.
Reviewer #1 (Remarks to the Author): In this manuscript, the authors use a new sequencing technique to simultaneously determine the transcription start sites (TSSs) and polyadenylation sites (PASs) of transcripts: Transcript Isoform sequencing (TIF-seq). Current techniques exist to identify the 5' or 3' ends present in a population of transcripts, but it remains difficult to determine exactly which 5' and 3' end sites exist in an individual molecule. The authors use TIF-seq to investigate transcription in Arabidopsis, using both wild type and backgrounds with defective RNA degradation. The latter allows for the detection of short-lived transcripts that would otherwise be quickly degraded. This allowed for the discovery of ~4 isoforms per expressed gene, with ~14% of genes expressing unstable short promoter-proximal RNAs (sppRNAs). Mutations in elongation factors increase the ratio of sppRNAs to full-length mRNAs, suggesting that Pol II stalling may contribute to the production of sppRNAs.
The results presented here make important contributions to our understanding of transcriptional control. In particular, the presence of sppRNAs draws attention to the importance of elongation factors in assisting Pol II in the early phases of transcriptional elongation. The main weakness of the manuscript is that the data is fairly descriptive. It is not clear how sppRNAs might contribute to gene regulation in the normal life cycle or whether sppRNAs are present in other plants or animals. Additional experiments in either of these veins would increase the significance and impact of the work.
We thank reviewer 1 for appreciating the importance of our findings. We were able to strengthen our manuscript further by adding experiments addressing both veins.
1.) To address whether sppRNA equivalents are present in other systems, we highlight the similarities and differences of sppRNA in other systems more prominently in our revised manuscript. Fortuitously, pre-RNA processing of promoter-proximal RNA species that bear some resemblance to sppRNA was described in Drosophila while our manuscript was under review (PMIDs: 31809743, 31530651). We added the relevant citations and highlighted these novel parallels to metazoans, for example in the discussion.
- Since the Integrator complex is linked to promoter-proximal transcriptional termination in Drosophila (PMIDs: 31809743, 31530651), we are also able to add experimental support for these similarities. We have added an analysis of sppRNA in two Arabidopsis Integrator mutants in revised Figure 6. Our data support a role for Integrator in promoter-proximal termination of RNAPII transcription, adding experimental support for conserved elements mediating promoter-proximal termination to strengthen the manuscript.
- We strengthened our revised manuscript by providing genome-wide support for a contribution of CPSF/CstF to mRNA expression of sppRNA genes. In our initial submission we found that cstf64-2 mutants impaired the ratio of sppRNA/mRNA termination, resulting in increased full-length mRNA by RT-qPCR. To test this hypothesis genome-wide, we re-analyzed published PAT-seq data of two CPSF/CstF mutants, known mediators of mRNA polyadenylation (CstF77 and CPSF100). We compared the expression of canonical poly(A) sites between sppRNA genes and control genes without sppRNAs. We observed a specific genome-wide increase of full-length mRNA for sppRNA genes. These data are presented in Figure 6c-d. These data support a role for sppRNA termination in full-length mRNA expression, akin to the "attenuation" mechanism suggested for metazoans. Future research will be necessary to fully resolve the contributions of Integrator and CPSF/CstF in sppRNA formation.
- Our analyses of cis-elements suggested by reviewer #2 uncovered an additional similarity to metazoan promoter-proximal transcriptional termination. We uncovered the GAGA-motif in a novel computational analysis that is added as revised Figure 5d-g and Supplementary Figure 13. The GAGA-motif is linked to promoter-proximal RNAPII stalling in Drosophila. We have expanded on these similarities indicating conservation of sppRNA in the revised manuscript text.
2.) To address gene regulation by sppRNA through additional functional data, we mutated sppRNAs in the 5'-UTRs of genes and assayed the effect on reporter gene expression. We added these data as a new figure, revised Supplementary Figure 10. The data suggest that sppRNA may promote gene expression, consistent with the genome-wide positive correlation between sppRNA detection and gene expression. While further research will be needed to fully resolve the roles of sppRNA in gene regulation in more detail, we hope reviewer #1 can appreciate that these data strengthen our manuscript.
Regarding the writing, the manuscript would be improved by taking advantage of the longer page limits afforded by this journal. A clear introduction and a longer discussion of the significance of the results would be very helpful.
We address this comment with substantial revisions to the text, structure and layout. We believe they capture the essence of this comment and strengthen the manuscript.
Reviewer #2 (Remarks to the Author): The manuscript by Ard et al performs TIF-seq (transcript isoform sequencing, a high throughput sequencing technique that captures both 5' and 3' end of polyadenylated RNA molecules) in whole Arabidopsis seedlings, comparing wild type to knockout effects of several genes involved in transcriptional elongation, RNAPII stalling and pre-mRNA cleavage. The main finding is that there is a genome-wide occurrence of unstable short promoter-proximal RNAs (sppRNA), and that the knockouts of different genes involved in the aforementioned functions result in either the increase or decrease of their level relative to the mRNA expression.
The main findings are of interest to the broader community doing research on promoter function and transcriptional elongation. There are a number of issues, though:

- The paper is written in a condensed letter format, which suggests that it was originally meant for a different journal with much more severe space restrictions than Nature Communications. In this case, brevity is not helping. The literature review of current knowledge as well as the discussion of the implications of the results are rudimentary, and some of the results central to the flow of the paper refer exclusively to supplementary figures. I suggest converting it into a full-length paper to make the readers' orientation easier, and to provide a proper context for the reported results. (Some concrete suggestions follow.)

We apologize for the inappropriate manuscript format. We fully followed the suggestions of reviewer #2. We agree that the recommended revisions to our manuscript will make it more accessible to a broad audience.
- The paper _needs_ a proper introduction to what is already known about transcript heterogeneity at 5' and 3' ends, both in plants and the parallels with Metazoan genomes. Heterogeneity at the 5' ends is practically universal in Metazoan genomes, and multiple polyadenylation signals are common. As the authors remark in passing, early polyadenylation signals play a central role in early termination of antisense transcripts in promoter architectures with bidirectional initiation but a functional transcript in only one of the two directions.
We have expanded the introduction substantially according to the suggestions by reviewer #2. The important parallels to metazoan transcriptional regulation are indeed very informative and are now more clearly accessible in the revised manuscript.
-Further, it is well known that promoter-proximal pausing and dispersed transcription initiation positions within a promoter are related to specific (not all) promoter architectures (typically TATA-less, broadly expressed and some developmentally regulated promoters). It is a missed opportunity, and not a difficult one to explore, to investigate which promoter elements might correlate with more or less sppRNA production as well as the number of isoforms. If TIF-seq does not have enough coverage for a good single-nucleotide resolution at 5' ends of genes, the TSS-seq could be used to determine dominant TSS positions more precisely.
We thank reviewer 2 for this excellent suggestion. The suggested analyses are now included in Figure 5d-g and supplementary figure 13A-F in the revised manuscript. Perhaps surprisingly, we could not uncover differences in the TATA signature. However, we identified that the TCP transcription factor binding motif is enriched upstream of sppRNA genes. Moreover, we find an enrichment of the GAGA-box. Since the GAGA-box is linked to promoter-proximal pausing in metazoans, these analyses represented a nice opportunity to further strengthen the connections to the metazoan literature. Interestingly, the positioning of the GAGA-box is different in metazoans, it is shifted to positions largely downstream of the TSS in plants. The new computational analyses offer new insight and strengthen the revised manuscript.
- Since many sppRNAs are associated with highly expressed genes, at least some of which are ribosomal protein genes and other components of transcriptional machinery, it would be especially interesting to see if they have a separate promoter architecture like they do in Metazoa, including the TCT initiator (for details see, e.g., the review by Kadonaga, WIREs Dev Biol 2012).
Our analyses could not identify a specific motif such as the TCT initiator. The computational analyses included in our revised manuscript suggest that promoters of genes with sppRNA are enriched for the TCP motif upstream of the TSS, and the GAGA-box largely downstream of the TSS. We hope the additional data clarify some of the questions concerning differences in promoter architecture.
-Do multiple isoforms result in changes in first splice site of the transcript? Do sppRNAs prefer transcripts with longer or shorter first exons?
Unfortunately, TIF-seq is not well suited to resolve information on splice sites and we were unable to perform the suggested analysis regarding splice sites. To address the question about sppRNA termination, we tested for a biased location of sppRNA termination sites in introns or exons in Figure 1 below. We observed no clear bias in the termination site of sppRNA; sppRNA termination may occur in the 5'-UTR, 1st exon, and 1st intron (left panel). When plotted, we observe a slightly shorter first exon in sppRNA genes (Figure 1, right panel). It would be interesting to follow up on this observation in future studies.
Currently, we feel that this information is best released in this document to satisfy the curiosity of reviewer #2.

Minor: - Nonsense-mediated decay is a mechanism by which many aberrant transcripts are removed in metazoan transcriptomes. Any hints of its role in sppRNA degradation?
To address this comment we requested and received seeds of Arabidopsis NMD mutants from the Riha lab (PMID: 22379136). Arabidopsis NMD mutants display auto-immunity phenotypes resulting in severe growth defects, confounding simple mutant vs wild-type comparisons. The growth defects of NMD mutants are connected to auto-immunity and can be suppressed by blocking disease signaling through mutations in the PAD4 gene. To control for growth defects of NMD mutants, we compared the effect of NMD mutants in the pad4 mutant background (i.e. smg7/pad4 against the pad4 single mutant). We isolated smg7-1/pad4-1 homozygous double mutants and pad4-1 single mutants from a segregating population (as in PMID: 22379136). We extracted RNA from leaves and measured expression levels of mRNA and sppRNA for the target genes described in the manuscript by RT-qPCR. However, we failed to detect specific effects of this NMD mutant in relation to sppRNAs. We include these analyses below in Figure 2 for the information of reviewer #2.

[Figure 2 legend: (right) Relative expression, normalized to actin, of full-length mRNA in smg7-1/pad4-1 and pad4-1 for 4 genes with sppRNAs used in the manuscript (HSC70, MPK20, RLP18e, STV1).]
In this manuscript Ard and colleagues analyse the genomic distribution and abundance of RNA polymerase II (Pol II) initiation and termination events in Arabidopsis thaliana. They report evidence for unstable short promoter-proximal RNAs (sppRNAs) at ~14% of expressed genes, and an average of four different transcript isoforms per gene in wild-type (WT) plants. These results were enabled by an improved Transcript Isoform Sequencing (TIF-seq) protocol based on a coauthor's published method (Pelechano et al. Nature 2013;Pelechano et al. Nat Protoc. 2014), which sequences cDNA tags from matching transcription start sites (TSSs) and polyadenylation sites (PASs) of Pol II transcripts. The authors' global analysis of Arabidopsis transcript TSS/PAS pairs, using mutants known to be defective in transcriptional regulation or transcript degradation, offers some new insights that will be of interest to molecular biologists in this field. However, the study has weaknesses that should be remedied prior to consideration for publication.
We thank reviewer #3 for the appreciation of our new insights. We are grateful for the clear suggestions and have outlined below how we used them to improve our manuscript.
Performing TIF-seq on the hen2-2 mutant, which is defective for nuclear exosome activity (Lange et al. 2014 PLoS Genet), Ard and colleagues detect evidence for transcripts initiating at the annotated TSS but terminating <100 nt downstream, on average. These sppRNAs were putatively confirmed at the MPK20 gene when a <500 nt smear was detected via northern blot using a single probe (Fig. 2(e)). This seems to substantiate TIF-seq data shown in Fig. 2(d), but to confirm the size-range and gene position of such sppRNAs further experiments are needed (Major concern #1).
Much of the remaining work in this study is of high technical quality, but the manuscript's clarity suffers from the relegation of clear examples to supplemental figures, with more obscure data displays shown as primary figures. For instance, the AT5G51200 ( Fig. S4(g)) and AT4G15260 ( Fig. S4(h)) loci illustrate the authors' point that the FACT complex represses alternative TSSs, whereas the scatterplot of primary Fig. 1(f) requires careful study and a detailed reading of the Methods to interpret (Major concern #2).
Finally, the authors' Page 5 statement that, "most genes with sppRNA show no evidence for gene regulation by selective termination and an equivalent fraction of mRNA without sppRNA are cold-induced" is an accurate summary of the authors' data: the function of such sppRNAs in gene regulation, if any, remains quite enigmatic. The authors should avoid masking this point with speculative conclusions (Major concern #3).
Major concerns: 1) Page 4 and Fig. 2(e): Indistinct smears are frequently detected in northern blots due to unequal loading or other artifacts of RNA preparation. Furthermore, the small expected size of sppRNAs (median 93 nt) means that >50% of the RNAs are shorter than can be resolved via formaldehyde-agarose electrophoresis (the technique used here). I suggest that authors reproduce their northern result using polyacrylamide gel electrophoresis (PAGE) with appropriate RNA size standards in order to confirm the size range of sppRNAs. For both the formaldehyde-agarose and PAGE northern blots an additional probe could be hybridized to detect MPK20 mRNAs via a 3' region not overlapping sppRNAs. If the authors' hypothesis is correct, then this second probe should detect full-length MPK20 isoforms in WT and hen2-2 samples but not the putative sppRNAs in hen2-2.
We thank reviewer #3 for pointing out this deficiency of our analysis. We have expanded our characterization of sppRNAs by northern blotting. (1) In revised Figure 5i, we improved sample loading and used the additional probe specific to the 3´-end. These data improve our manuscript, since sppRNA detection is specific to the probe against the 5´-end, as reviewer #3 suggested. (2) We include the requested PAGE northern as panel j of our revised Figure 5, for which we end-labeled a size marker to resolve the size distribution of sppRNAs with improved resolution. As reviewer #3 suspected, the PAGE northern resolves sppRNAs in a size range consistent with our estimate based on TIF-seq data. Our new experimental data clarify the size range and position of sppRNAs.
2) Page 3 and Fig. 1: I recommend that supplemental panels Fig. S4(g) and Fig. S4(h) be included in primary Fig. 1, because these clearly illustrate FACT suppression of intragenic Pol II initiation. Conversely, the Fig. 1f scatterplot should be revised because the underlying data and analyses are unclear: precisely how were the two fact mutants (spt16-1 and ssrp1-2) analysed? Did these two mutants differ? Were replicate experiments conducted? How were WT/mutant comparisons handled? How was the threshold for inclusion in the scatterplot chosen? Were any statistical analyses performed? This information should be in the results and legend (not buried in Methods), because it is essential for readers to interpret the figure.
We analyzed the FACT mutant TIF-seq datasets in the same way as the other datasets in the manuscript. This is now pointed out more clearly in our revised manuscript through an improved description and representation. We followed the excellent suggestion to include the TIF-seq data in FACT mutants as a main figure, and we add an improved representation of these data as revised Figure 2. We improved the legend for Figure 2f as requested, clarified the manuscript text and improved the method description.
3) Page 5: The authors do not present data supporting the speculative statement that concludes this paragraph: "…promoter-proximal termination is associated with plant gene expression across temperature and may contribute to temperature-dependent gene regulation." The first half of the sentence refers to sppRNAs being co-expressed with mRNAs at cold-induced genes (a simple correlation), but the second half contradicts the overall TIF-seq analysis as presented and summarized by the authors in this same paragraph.
We have revised the presentation of these results. Overall, sppRNA formation correlates with nascent transcription level, as reviewer #3 points out. However, our analyses addressing potential regulation through selective sppRNA formation revealed data consistent with such regulation for only some (38 of 1153) loci (line 250). To avoid misleading claims regarding gene regulation by sppRNAs in our manuscript, we have revised the presentation of these data and included these numbers in the text so that readers will be in a position to judge this for themselves. It would clearly be interesting to explore the function of sppRNAs in general and at specific loci in future studies. Nevertheless, the PAT-seq data analyses in CstF/CPSF mutants provided in the revised Figure 6 are consistent with the possibility of mRNA regulation by "attenuation" as suggested in metazoans. The sppRNA deletion experiments provided as revised supplementary Figure 10 support the idea that sppRNAs may participate in gene activation. We do not see it as a contradiction that sppRNAs may regulate mRNA expression through "attenuation" and may also participate in gene activation; we have elaborated on this point in our revised discussion. The purpose of sppRNAs will have to be fully resolved in future studies, but our revised manuscript offers tantalizing starting hypotheses for the field.
Minor points/corrections 1) Abstract: "… how alternative TSSs connect to variable PASs is unresolved from common transcriptomics methods," would better read, "… how alternative TSSs connect to variable PASs is not resolved by common transcriptomics methods." We have revised the confusing sentence in the abstract and replaced it with a new sentence (line 21).
2) We have included a citation to this excellent publication. Unfortunately, the DCL4 expression level in seedlings seems rather low, and our coverage of this locus by TIF-seq does not offer an improved screenshot for the revised manuscript. Since reviewer #3 may be interested in small RNA biogenesis more broadly, we include a TIF-seq data screenshot of AGO1 below. The AGO1 gene represents an sppRNA gene with a relatively high rate of sppRNAs.
3) Page 3: There is a definite article missing here: "The detection of many RNA species that are produced in wild type yet rapidly degraded…", should read, "The detection of many RNA species that are produced in the wild type yet rapidly degraded…".
Thank you for pointing out the mistake; we have changed lines 64-65 to "Transcriptome analyses in nuclear exosome mutants facilitate the detection of many cryptic RNA species".
The authors have addressed most of my early concerns adequately. The new analysis of motifs is especially interesting. There are a couple of outstanding issues: -The Introduction has been expanded as other reviewers and myself suggested, but it hasn't been done in the most careful way. For example, the sentence: "The precise positions of TSSs may form a "focused" pattern with one predominant TSS position, or a "dispersed" pattern, where TSSs can be detected within a broader sequence window that is characteristic of housekeeping genes (4)" ends with reference (4), which is completely unrelated to the content of the said sentence; indeed, I couldn't find a single paper in the list of references that deals with dispersed and focused promoters. All references should be checked carefully to make sure they are correct.
-The sentence "RNAPII turnover at PASs as part of transcriptional termination coincides with peaks of RNAPII density, perhaps indicating that RNAPII turnover near promoters may reflect transcriptional termination shortly after transcriptional initiation." is confusing. The first part refers to PASs, the second to the sites of transcriptional initiation. The authors are trying to say that since there is known RNAPII accumulation at PASs at the ends of genes, it is possible that the accumulation of RNAPII at the pausing sites is a result of the same process, but coupled to promoter-proximal polyadenylation.
-Regarding the intragenic TSS that produce alternative gene isoforms that terminate at the gene end: please check if the TSS initiator signal at such intragenic TSS is of the same kind as that at the actual 5' ends of genes. In metazoa, while the preferred (-1,+1) dinucleotide at promoter TSSes is YR, there are intragenic TSS-like signals whose preferred starting dinucleotide is GG (see Carninci et al Nat Genet 2006).
Minor:
-"Drosophila" should be spelled in uppercase (line 93)
-line 117: Eukaryotic primary transcripts can actually be hundreds of kilobases, up to a couple of megabases (dystrophin, titin); I am aware that they are spliced long before transcriptional termination. The size of the gene and number of exons are much more relevant for providing opportunities for regulated gene isoform generation than the length of the mature transcript.
-line 217: it should be "statistically significant effect", not "affect"

Reviewer #3 (Remarks to the Author):

Thomas and colleagues have thoroughly revised their manuscript on the genomic distribution and abundance of RNA polymerase II (Pol II) initiation and termination events in Arabidopsis thaliana. I had three major concerns about the original manuscript. My first concern was that the size-range and gene position of short promoter-proximal RNAs (sppRNAs) needed to be confirmed via additional northern blot experiments. The authors have successfully resolved both aspects of this concern: they detected sppRNAs with a 5'-end but not with a 3'-end MPK20 gene probe (Figure 3i), as expected, and they detected sppRNA signals at higher resolution using polyacrylamide gel electrophoresis (Figure 3j). However, in the latter figure, there is a minor technical error that the authors need to fix: the Decade Marker System here consists of radiolabeled RNA molecules, so tick marks to the right of their Figure 3j membrane exposure should be labelled "150 nt, 90 nt, 80 nt, 60 nt, 50 nt" rather than with "bp" units, which stands for base pairs and is not appropriate in this context.
My second concern was that the data presented in original Figure 1f, supporting the role of the FACT complex in repressing alternative TSSs, was confusing for the reader. The authors' revised Figure 2 is much improved. Inclusion of previously supplemental panels in revised Figures 2c and 2d gives a more intuitive look at FACT complex mutant deficiencies. Moreover, revised Figure 2f now condenses the data display of intragenic TSSs detected in TIF-seq in a more approachable way. In my third concern, I had recommended that the authors moderate or remove the speculative statement, "…promoter-proximal termination is associated with plant gene expression across temperature and may contribute to temperature-dependent gene regulation." They have made adjustments to the language in their revised results and conclusions that moderate this claim. In the end, the authors' data do suggest that mRNA regulation by "attenuation" could influence plant gene expression, pointing to promising avenues for future experimentation in the months and years to come.
Comparison of two approaches for measuring household wealth via an asset-based index in rural and peri-urban settings of Hunan province, China
Background: There are growing concerns regarding inequities in health, with poverty being an important determinant of health as well as a product of health status. Within the People's Republic of China (P.R. China), disparities in socio-economic position are apparent, with the rural-urban gap of particular concern. Our aim was to compare direct and proxy methods of estimating household wealth in a rural and a peri-urban setting of Hunan province, P.R. China.

Methods: We collected data on ownership of household durable assets, housing characteristics, and utility and sanitation variables in two village-wide surveys in Hunan province. We employed principal components analysis (PCA) and principal axis factoring (PAF) to generate household asset-based proxy wealth indices. Households were grouped into quartiles, from 'most wealthy' to 'most poor'. We compared the estimated household wealth for each approach. Asset-based proxy wealth indices were compared to those based on self-reported average annual income and savings at the household level.

Results: Spearman's rank correlation analysis revealed that PCA and PAF yielded similar results, indicating that either approach may be used for estimating household wealth. In both settings investigated, the two indices were significantly associated with self-reported average annual income and combined income and savings, but not with savings alone. However, low correlation coefficients between the proxy and direct measures of wealth indicated that they are not complementary. We found wide disparities in ownership of household durable assets, and utility and sanitation variables, within and between settings.

Conclusion: PCA and PAF yielded almost identical results and generated robust proxy wealth indices and categories. Pooled data from the rural and peri-urban settings highlighted structural differences in wealth, most likely a result of localized urbanization and modernization. Further research is needed to improve measurements of wealth in low-income and transitional country contexts.
Introduction
Poverty and people's health status are intimately connected, yet the relationship between them is complex and bi-directional [1,2]. On one hand, ill-health may lead to economic poverty [1], or a decrease in expendable income due to high medical bills and/or via a direct reduction, or loss, of wages throughout an illness [3]. On the other hand, poor health may result from poverty [1], including an inability to afford adequate nutrition, sanitation, housing, education and healthcare, and poverty-related lifestyle factors that increase disease risk and/or decrease access to medical facilities and services [4,5]. In the People's Republic of China (P.R. China), rapid economic growth and human development over the past three decades have brought over 300 million people out of poverty (arbitrarily defined as living on less than US$ 1 per day) and have vastly improved the overall health status of the population [6]. However, it has also affected the course of income distribution such that disparities in socio-economic position (SEP; for a definition, see Appendix) are currently among the most important social policy issues in the country [7]. Inequalities appear to be widening both across and within different provinces in P.R. China, with the rural-urban gap of particular concern [7]. Since SEP is an important determinant of health, it is conceivable that such disparities will lead to large gaps in health care provision within P.R. China [8]. In order to plan, implement and monitor health programs and other publicly or privately provided services in an equitable way, it is necessary to identify the poor, including individuals or households with low SEP, who might be more vulnerable to poor health outcomes [5].
While SEP can be measured on multiple levels [1], in the past it was mostly determined using an individual's education level, sometimes in combination with their occupation. Currently, approaches for measuring household SEP include 'direct' measures of economic status, including (i) income, (ii) expenditure, and (iii) financial assets (e.g., savings and pensions), and 'proxy' measures (e.g., household durable assets (Appendix), housing characteristics and access to utilities and sanitation) developed from the wealth index originally proposed by Rutstein in the mid-1990s [9]. Direct measurements can be expensive to collect and may require complex statistical analyses that are beyond the scope of many population health studies [5,[10][11][12]. In developing country settings in particular, large seasonal variability in earnings and a high rate of self-employment, together with potential recall bias and false reporting, may render such data inaccurate or even unreliable [10]. Proxy measures are thought to be more reliable, since they require only data collected using readily available household questionnaires supported by direct observation. A study carried out in southeast Nigeria, however, questioned whether proxy measures are indeed more reliable than direct measurements [11]. From a public health point of view, the proxy wealth index approach is more useful than direct measures, since it explains the same, or a greater, amount of the differences between households on a set of health indicators than an income/expenditure index, while requiring far less effort from respondents, interviewers, data processors and analysts [10]. Additionally, proxy measures might be more accurate approximations of SEP, as they measure financial stock ('permanent income') rather than flow ('current income'), and hence are less prone to fluctuation [10,[12][13][14].
Due to the large volume of potentially redundant asset data produced, a data reduction technique known as exploratory factor analysis is often utilized. Exploratory factor analysis evaluates the most meaningful basis to re-express a large, pre-determined set of variables, exploring the relationships between them and filtering out noise to reveal indicators that map most strongly to an underlying latent structure. Two common methods of extracting that structure are principal components analysis (PCA; Appendix) and principal axis factoring (PAF; Appendix), which describe variation among the observed variables via a set of derived uncorrelated variables referred to as principal components (PCs) or principal factors (PFs), respectively [15]. Although these two methods often yield similar results, the former is preferred as a method for data reduction, while the latter is widely used for detecting structure within the data. Previously, studies have used either PCA or PAF but comparisons between these two approaches are rare. Based on the inter-relationship between the set of variables, exploratory factor analysis also assigns weights to ownership of the assets. The weights correspond to the factor loadings (eigenvectors; Appendix) of the first derived variable, and are used to generate an index of relative SEP. Using weights derived through exploratory factor analysis may be a more appropriate method of assigning weights to the variables than the more simplistic equal weights method, the complex weighted-by-price-of-item approach or on an ad-hoc basis [16].
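To give a concrete feel for why PCA and PAF typically agree on asset data driven by a single latent wealth dimension, the following Python sketch compares the two on synthetic data. Everything here is illustrative: the binary asset matrix is simulated, and PAF is implemented as the simple "iterated principal factors" variant (repeatedly replacing the correlation-matrix diagonal with communality estimates), not the exact SPSS procedure used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary asset matrix (assumption for illustration, not study data):
# 300 households x 6 assets, ownership driven by one latent wealth variable.
wealth = rng.normal(size=300)
assets = (rng.normal(size=(300, 6)) + wealth[:, None] > 0).astype(float)

# Standardize columns and form the correlation matrix.
z = (assets - assets.mean(axis=0)) / assets.std(axis=0)
corr = np.corrcoef(z, rowvar=False)

# PCA weights: eigenvector belonging to the largest eigenvalue (first PC).
eigvals, eigvecs = np.linalg.eigh(corr)
pca_weights = eigvecs[:, np.argmax(eigvals)]

# PAF weights via iterated principal factors: repeatedly replace the diagonal
# with communality estimates (squared loadings of the first factor).
reduced = corr.copy()
for _ in range(100):
    vals, vecs = np.linalg.eigh(reduced)
    loadings = vecs[:, np.argmax(vals)] * np.sqrt(vals.max())
    np.fill_diagonal(reduced, loadings ** 2)
paf_weights = loadings

# Compare the two household rankings with Spearman's rank correlation.
def spearman(a, b):
    rank = lambda v: np.argsort(np.argsort(v))
    return np.corrcoef(rank(a), rank(b))[0, 1]

rho = spearman(z @ pca_weights, z @ paf_weights)
```

On data of this kind the two index rankings are nearly identical in absolute value (eigenvector signs are arbitrary), mirroring the Spearman's rho = 0.99 agreement the paper reports between its PCA- and PAF-based indices.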
Few studies have attempted to verify the extent to which the asset-based index approach is a good proxy for household economic wealth. Concerns include the handling of publicly provided goods and services, and the direct effects of the indicator variables that make up indices, as well as ways of adjusting for household size and age-composition [17,18]. The increasingly widespread use and application of proxy measurements of household economic wealth and SEP, and growing use of exploratory factor analysis, in public health studies calls for further research in this area, particularly in low-income settings and transitional countries.
Here we report the application of exploratory factor analysis to household data that were collected during a survey of parasitic infections in Hunan province, P.R. China. Our aim was to calculate and examine asset-based proxy wealth indices generated by PCA and PAF, and to compare them to other measures of wealth based on purely economic variables, including self-reported annual household income and savings. Results are reported for a rural and a peri-urban (Appendix) setting and aggregated between the two.
Study area and population
The study was carried out in two villages; namely (i) Wuyi, in Hanshou county, southern Dongting Lake area, and (ii) Laogang, in Yueyang county, eastern Dongting Lake area. Both villages are located in Hunan province. The surveys were conducted between November and December 2006. The villages were selected on the basis of previous studies investigating the epidemiology of parasitic infections, including schistosomiasis [19]. Wuyi is situated in a rural area, whereas Laogang is peri-urban, located on the outskirts of Yueyang, the major city in the Dongting Lake region. All individuals from both villages were invited to participate in the study.
Field procedures
Senior personnel from two local schistosomiasis control stations were involved in co-ordinating the study. Basic demographic information was obtained from a census performed one year previously in both villages. The questionnaire was translated into Mandarin Chinese, back-translated into English, pre-tested in a nearby village and readily adapted to the local setting. It was administered to the heads of households, and included questions on household demographics, the number of wage earners and non-wage earners, annual household income (7 categories: <500; 500-1999; 2000-4999; 5000-9999; 10,000-29,999; 30,000-49,999; ≥50,000 CNY) and savings (6 categories: <500; 500-699; 700-999; 1000-1499; 1500-1999; ≥2000 CNY), the primary and secondary sources of income, ownership of 22 household durable assets (e.g., color TV, washing machine, air conditioner, etc.), 10 housing characteristics (e.g., floor material, wall material, roof material, etc.) and six utility (Appendix) and sanitation variables (e.g., tap water, toilet in house, etc.).
Interviewers were familiar with the local setting and dialect and were acquainted with qualitative methods. The head of each household was invited to respond to the questions; if the household head was absent on the day of interview, the interviewer returned to that residence the following day, for up to a period of 14 days, after which the next of kin was asked to respond.
Consent and ethical approval
Ethical clearance for this study was obtained from the Medical Ethics Committee of Hunan province and Queensland Institute of Medical Research. Village authorities were informed about the aims and procedure of the study and provided written informed consent. Oral informed consent was obtained from each individual.
Data management and statistical analysis

Data management
Data were double-entered into a bilingual Microsoft® Access 2002 database, cross-checked and subsequently analyzed with SPSS version 16.0 (Illinois, USA).
Socio-economic data and asset ownership
Household income and savings data were equivalized to adjust for household needs based upon the number of household members (per-capita) and a combination of number and age (per-adult, defined as individuals aged >16 years). This was done using the median value of each income or savings band and dividing by the number of members or adult members per household. Annual per-capita income and per-adult income, primary source of income, ability to save (yes or no binary variable) and annual per-capita and per-adult savings were then examined using a χ2 test. The Student's t-test statistic was used to compare the mean age and mean household size between the two villages. A stepwise multinomial logistic regression analysis with annual per-capita income bands as the dependent variable and ownership of household durable assets as independent binary covariates was used to test the association between household income and asset ownership within each setting separately and for the pooled data from both villages. Covariates were included at a significance level of <0.2. Covariates that were not significantly associated with income were removed in a stepwise backward elimination process. Adjusted odds ratios (OR) and 95% confidence intervals (CI) were computed for associations with p-values <0.05.
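The equivalization step can be sketched in a few lines of Python. Note two assumptions for illustration: band midpoints stand in for the "median value of each band", and the open-ended top band needs an assumed upper bound.

```python
# Annual household income bands (CNY) from the questionnaire; the upper
# bound of the open-ended top band (80,000 CNY) is an assumption made
# purely for illustration.
INCOME_BANDS = {
    1: (0, 500), 2: (500, 2000), 3: (2000, 5000), 4: (5000, 10000),
    5: (10000, 30000), 6: (30000, 50000), 7: (50000, 80000),
}

def equivalized_income(band: int, n_members: int) -> float:
    """Per-capita income: band midpoint divided by household size."""
    lo, hi = INCOME_BANDS[band]
    return (lo + hi) / 2 / n_members

# A household in band 4 (5000-9999 CNY) with 3 members:
equivalized_income(4, 3)  # -> 2500.0
```

The per-adult variant is identical except that `n_members` counts only individuals aged >16 years.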
Construction of asset-based proxy wealth indices using PCA and PAF
A detailed protocol of how we constructed asset-based proxy wealth indices is given in the Additional File. In brief, the binary data on household durable assets, housing characteristics and utility and sanitation variables were organized into a matrix with m households as rows (where m_rural = 258 and m_peri-urban = 246) and n variables as columns. The initial n = 38-item correlation matrices for each setting were examined for internal consistency (Table 1). To enable the matrix to be factorable, only variables with sufficient correlation (>|0.3|) with at least three other variables were included in further analyses. If any variable correlated highly (>|0.8|) with other variables, only one variable from the group of correlated variables was arbitrarily selected and included in further analyses, to avoid multicollinearity. Factorability of the m by n matrices was determined using Bartlett's test of sphericity (Appendix) and the Kaiser-Meyer-Olkin (KMO) test (Appendix). Variables were excluded in a stepwise manner until a factorable m by n correlation matrix with a KMO >0.7 was reached, for each village separately. Diagonal and off-diagonal values of the anti-image correlation matrix (Appendix) were used to assess the sampling adequacy.
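The correlation-based variable screening described above can be sketched as follows. This is our own loose reading of the procedure, not the exact SPSS workflow (in particular, the KMO/Bartlett checks are omitted and the |0.3|/|0.8| rules are applied one variable per pass):

```python
import numpy as np

def screen_variables(X, min_corr=0.3, max_corr=0.8, min_partners=3):
    """Iterative screening sketch: keep only one (arbitrary) member of any
    pair correlated above |max_corr|, and drop variables that correlate
    > |min_corr| with fewer than `min_partners` other variables.
    Returns the indices of the retained columns of X."""
    keep = list(range(X.shape[1]))
    changed = True
    while changed and len(keep) > 1:
        changed = False
        C = np.corrcoef(X[:, keep], rowvar=False)
        np.fill_diagonal(C, 0.0)
        # Multicollinearity: drop the second member of a highly correlated pair.
        for i in range(len(keep)):
            for j in range(i + 1, len(keep)):
                if abs(C[i, j]) > max_corr:
                    del keep[j]
                    changed = True
                    break
            if changed:
                break
        if changed:
            continue
        # Factorability: drop one under-correlated variable per pass.
        partners = (np.abs(C) > min_corr).sum(axis=0)
        weak = np.where(partners < min_partners)[0]
        if len(weak):
            del keep[int(weak[0])]
            changed = True
    return keep
```

Applied to a matrix containing a near-duplicate column and an unrelated noise column, the function drops both and retains the mutually correlated block.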
Next, components and factors were extracted from each of the final two correlation matrices using PCA and PAF, respectively. Components and factors, respectively, were extracted without and with rotation (Appendix) and the best method was selected according to the maximum squared factor loadings and the relative simplicity of the model. In each case, eigenvalues >1 (Appendix), examination of the scree plots and the cumulative proportion of variance explained by each component or factor were taken as criteria for extraction. For simplicity, a cut-off eigenvector > |0.3| was used to signify component or factor loadings of interest and, where variables loaded equally on more than one component or factor, the Cronbach's coefficient α (Appendix) was used to select the component or factor on which to place the variable.
The PC and PF loadings were used to compute standardized indices of relative household wealth within each village, according to the following equation:

A_i = Σ_k f_k × a_ik = Σ_k f_k × (x_ik − x̄_k) / s_k,

such that A_i is the standardized asset index score per household i, the f_k's are the factor loadings or weights of each asset k, estimated by either PCA or PAF, and the a_ik's are the standardized values of asset k for household i (i.e., x_ik is the ownership of asset k by household i, where 0 represents not owning the asset and 1 represents owning the asset, and x̄_k and s_k are the sample mean and standard deviation (SD) of asset k for all households).
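A minimal numpy sketch of the PCA variant of this index, together with the MP/BA/AA/MW quartile grouping described in the text, is given below. Function and variable names are ours, and the sign convention (flipping the eigenvector so that owning more assets scores higher) is an assumption for readability.

```python
import numpy as np

def asset_wealth_index(X):
    """A_i = sum_k f_k * (x_ik - mean_k) / sd_k, with the weights f_k taken
    as the first principal component loadings of the asset correlation
    matrix (PCA variant of the index in the text)."""
    z = (X - X.mean(axis=0)) / X.std(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.corrcoef(z, rowvar=False))
    f = eigvecs[:, np.argmax(eigvals)]
    if f.sum() < 0:  # eigenvector sign is arbitrary; make "owns more" score higher
        f = -f
    return z @ f

def wealth_quartiles(index):
    """Assign most poor (MP) / below average (BA) / above average (AA) /
    most wealthy (MW) labels by index quartile."""
    labels = np.array(["MP", "BA", "AA", "MW"])
    cuts = np.quantile(index, [0.25, 0.5, 0.75])
    return labels[np.searchsorted(cuts, index)]
```

Because the raw assets are binary, the index takes a limited number of distinct values, so quartile groups need not contain exactly a quarter of households when ties fall on a cut point.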
The association between the PCA-and PAF-based proxy wealth indices was estimated by the Spearman's rank correlation coefficient (Appendix). Based on the overall small sample size of our study, we chose to divide each index into quartiles, rather than the standard quintiles or tertiles, representing: (i) most poor (MP), (ii) below average (BA), (iii) above average (AA), and (iv) most wealthy (MW) households.
Proxy wealth indices and self-reported income and savings
Corresponding wealth quartiles were also generated based on annual household per-capita income and on a combination of household income and savings, as follows: (i) high income (≥4000 CNY per person per year) with savings, (ii) high income without savings, (iii) low income (<4000 CNY per person per year) with savings, and (iv) low income without savings. Households' categorical position for each respective index was assessed by a Kappa agreement, using the following cut-offs: 0, no agreement; 0.01-0.2, poor agreement; 0.21-0.4, fair agreement; 0.41-0.6, moderate agreement; 0.61-0.8, substantial agreement; 0.81-1, almost perfect agreement [20]. Households that were re-ranked into different quartiles were examined in further detail. Mean scores per category were examined by means of Kruskal-Wallis (Appendix) analyses and a ratio of MW to MP was calculated. This entire process was then repeated for the pooled data from both villages (m_total = 504).
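The kappa comparison can be sketched with a plain numpy implementation of unweighted Cohen's kappa plus the interpretation bands quoted above (an illustration of the statistic, not a reproduction of the SPSS output):

```python
import numpy as np

def cohens_kappa(a, b):
    """Unweighted Cohen's kappa between two categorical assignments
    (assumes the assignments are not both constant, so p_chance < 1)."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.unique(np.concatenate([a, b]))
    p_observed = float(np.mean(a == b))
    p_chance = float(sum((a == c).mean() * (b == c).mean() for c in cats))
    return (p_observed - p_chance) / (1.0 - p_chance)

def agreement_label(kappa):
    """Interpretation bands used in the text [20]."""
    for upper, label in [(0.0, "no agreement"), (0.2, "poor"), (0.4, "fair"),
                         (0.6, "moderate"), (0.8, "substantial"),
                         (1.0, "almost perfect")]:
        if kappa <= upper:
            return label
    return "almost perfect"
```

Two identical quartile assignments give kappa = 1 ("almost perfect"), while agreement no better than chance gives kappa = 0 ("no agreement"), which is how values such as 0.91 and 0.81 versus 0.12 and 0.13 in the Results should be read.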
Study compliance and operational results
From a total of 646 households in both villages, 504 (78.0%) had complete datasets. This corresponded to 258/294 (87.8%) in the rural setting and 246/352 (69.9%) in the peri-urban setting. Demographic variables are summarized in Table 1.
Comparison of income, savings, and possession of assets

… Wuyi and Laogang, respectively. In Wuyi, younger household heads were more likely to save than their older counterparts (χ2 test p = 0.006), while there was no significant difference in Laogang. Within both settings, the amount of money saved per capita was also positively associated with annual household per-capita income (χ2 test p <0.001 and χ2 test p = 0.001), but not with the primary source of household income or with age for rural and peri-urban settings. Table 3 shows the complete list of household durable assets, housing characteristics and utility and sanitation variables for both settings. Item ownership varied between and within villages. For example, all 246 peri-urban households but only 5 (1.9%) rural households had tap water in the house. While 229 (88.8%) rural households owned animals, the respective number and percentage was 46 (18.7%) among peri-urban households. Table 4 summarizes all significant associations between annual per-capita household income and ownership of household durable assets across the pooled data (Table 5).
Bartlett's test of sphericity was significant in both settings (rural: χ2 test p <0.001 and peri-urban: χ2 test p <0.001) and for the pooled data (χ2 test p <0.001), and the KMO statistics were 0.788 and 0.726 for the rural and peri-urban settings, respectively. PCA and PAF revealed four components or factors with eigenvalues >1.0 in the rural setting and three in the peri-urban setting. In each case the first component or factor comprised several heavily loaded variables (eigenvectors >0.3) and accounted for 24.3% and 27.8% of the variation in the data from Wuyi and Laogang, respectively, while the remaining components or factors had fewer variables and explained a smaller proportion of the variation (Table 5). For the pooled data, three components or factors had eigenvalues >1.0 and the first component or factor accounted for 33.9% of the variation in the data. The un-rotated extraction method was selected for PCA and PAF in both settings and for the pooled data, as rotation did not add measurably to the simplicity or fit of each of the models. The relative magnitude and direction of the weights in the PCA and PAF models are consistent within settings (Table 5) and across the pooled data (data not shown).
For both settings, standardized indices of relative wealth were created using heavily loaded variables of the first PC, or the first PF, with variables weighted according to their eigenvector, as in the equation. All four indices showed evidence of clumping and truncation ( Figure 1). The PCA and PAF indices correlated well with each other within each village (for both settings Spearman's rho = 0.99, p <0.001). The Kappa agreement was found to be almost perfect, with values of 0.91 and 0.81 for Wuyi and Laogang, respectively. In Wuyi, 17 (6.6%) households were in different quartiles according to factor extraction method, while this was the case for 35 (14.2%) households in Laogang (Figure 2).
Comparison of proxy wealth indices with self-reported income and savings
Both PCA and PAF indices showed a weak, but significant, positive correlation with annual household per-capita income (Spearman's rho = 0.27, p < 0.001 for PCA and Spearman's rho = 0.26, p < 0.001 for PAF) and with annual household per-adult income (Figure 3). We found wide disparities among the asset-based proxy wealth quartiles in mean annual household per-capita income and per-adult income. To illustrate this, using the PCA extraction method, we found highly significant Kruskal-Wallis test results for both rural and peri-urban settings (annual household per-capita income: rural setting Kruskal-Wallis = 14.7, d.f. = 3, p = 0.002 and peri-urban setting Kruskal-Wallis = 21.0, d.f. = 3, p < 0.001; per-adult income: rural setting Kruskal-Wallis = 23.7, d.f. = 3, p = 0.001 and peri-urban setting Kruskal-Wallis = 35.1, d.f. = 3, p < 0.001). Similarly, we found disparities among wealth quartiles in a household's ability to save in both settings (χ² test p = 0.014 and χ² test p < 0.001 for the rural and peri-urban settings, respectively) (Table 6, rural setting only). This pattern was also confirmed when comparing mean annual household per-capita savings among wealth quartiles for the peri-urban setting (Kruskal-Wallis = 17.3, d.f. = 3, p < 0.001) but not for the rural setting (Kruskal-Wallis = 6.9, d.f. = 3, p = 0.077). Disparities in a combination of annual household income and savings were also apparent between MW and MP quartiles in both settings (Table 6, rural setting only).
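The Kruskal-Wallis statistics reported above compare income distributions across wealth quartiles. A minimal implementation of the H statistic (no tie correction) on made-up income groups:

```python
# Minimal Kruskal-Wallis H statistic (no tie correction), as used above to
# compare income across wealth quartiles. The income groups are invented.
def kruskal_h(groups):
    pooled = sorted(x for g in groups for x in g)
    rank = {x: i + 1 for i, x in enumerate(pooled)}   # assumes no tied values
    n = len(pooled)
    # H = 12/(N(N+1)) * sum over groups of n_i * (mean rank)^2  -  3(N+1)
    s = sum(len(g) * (sum(rank[x] for x in g) / len(g)) ** 2 for g in groups)
    return 12.0 * s / (n * (n + 1)) - 3.0 * (n + 1)

q1 = [1200, 1500, 1800]   # hypothetical lowest-quartile incomes (CNY)
q4 = [6400, 7100, 8200]   # hypothetical highest-quartile incomes (CNY)
print(round(kruskal_h([q1, q4]), 3))
```

With two fully separated groups of three, H ≈ 3.857, which exceeds the χ² critical value of 3.841 at p = 0.05 with 1 degree of freedom; in the study, four quartiles give d.f. = 3.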
Low income without savings
When the analyses were repeated for pooled data, we found that households from each setting were highly unequally distributed among the proxy wealth quartiles (Figure 4). Both PCA-and PAF-based indices showed weak, but significant, positive correlations with annual household per-capita income (Spearman's rho = 0.27, p <0.001 for PCA and Spearman's rho = 0.28, p <0.001 for PAF), per-adult income (Spearman's rho = 0.21, p <0.001 for PCA and Spearman's rho = 0.22, p <0.001 for PAF) and per-capita savings (Spearman's rho = 0.26, p <0.001 for PCA and Spearman's rho = 0.27, p <0.001 for PAF). Kappa agreements of the PCA and PAF indices with the index based on per-capita income were poor (0.12 and 0.13, respectively). Wide disparities in household durable assets, housing characteristics, utilities and sanitation were clear among the four proxy wealth categories. Disparities in a combination of annual household income and saving were also apparent between MW and MP quartiles (Table 7).
Discussion
This study contributes methodologically and analytically to research into measurements of wealth and SEP in a country undergoing rapid social, economic, demographic and health transitions [21]. Using household-level data collected with a pre-tested and standardized questionnaire in a rural and a peri-urban setting in Hunan province, P.R. China, we examined asset-based proxy wealth measurements constructed by two common exploratory factor analysis approaches. Our results confirm that, although they have different underlying theoretical assumptions, both PCA and PAF are equally effective statistical techniques in evaluating relative wealth among households. Consistent with the proxy wealth indices derived in the Demographic and Health Surveys (DHS) [9], we selected the first un-rotated component/factor, which accounted for 24.3% (rural) and 27.8% (peri-urban) of the overall variation in the data. Proxy wealth index scores were significantly associated with wealth quartiles based on a household's self-reported annual income, and a combination of income and savings, but not savings alone. We found large discrepancies between MW and MP households within and between the two study villages. However, further analyses of pooled results suggest that when combining data from the two settings, these differences may be structural, owing more to urbanization, modernization and accessibility of goods and services than to wealth per se. This may be particularly true for P.R. China, which is undergoing a long-term, yet spatially heterogeneous, period of industrialization and development [22][23][24].
Salaries in the rural setting were frequently at the low (e.g., CNY <2000) and the high (e.g., CNY >7000) ends of the spectrum, while those in the peri-urban setting seemed clustered in the middle. This is possibly explained by reporting bias, as many of the peri-urban respondents were the next of kin and not the household head, or, by unaccounted externalities (e.g., government policies) imposing a spatial correlation on household income [25]. As in most of rural P.R. China, household income was predominantly (in the case of 220, or 85.3%, households) sourced from fishing and/or farming activities [26]. However, in the peri-urban setting our questionnaire survey failed to capture the most common source of primary income (175, or 71.1%, peri-urban household respondents reported 'other' primary source of income), although anecdotal evidence suggests that these are mainly remittances from non-resident household members and occasionally government payments or basic pension schemes. Saving was more commonly reported in the rural setting which, assuming no reporting bias, is likely a result of less secure employment, and hence greater income uncertainty [27], and a weakened social security system bringing about high user charges for public services [28]. We found that age was an important factor in saving patterns of rural households, implying that younger households smooth consumption, perhaps in order to invest so that their living standards can be enhanced in the future. 
Table 6 The relationship between the proxy wealth index generated using principal components analysis (PCA) and income and savings, among households in a rural (Wuyi village) setting, Hunan province, China
Table 7 The relationship between the proxy wealth index generated using principal components analysis (PCA) and income and savings, among households in rural (Wuyi village) and peri-urban (Laogang village) settings, Hunan province, China
Furthermore, stronger social networks in the rural setting may impact on decision-making behavior such as household expenditure patterns, while costs of basic needs may also be substantially lower in rural areas [29,30]. Though proxy measures of wealth are welcome tools in international health research [15], the construction of indices based on exploratory factor analysis has been criticized for being subjective and unstandardized [12,31]. Conversely, several studies have reported that the asset-based index is a more accurate indicator of long-term wealth than income and consumption data [14,15]. Nonetheless, the reliability of the asset-based index has also been questioned by some authors [11]. Indeed, using binary data, such as ownership of a particular asset, may violate the underlying assumption that the measured variables are related in a linear fashion to the underlying latent constructs (i.e., wealth). Our results confirm those in other settings [15,32], indicating internal consistency and robustness in both methods, particularly for higher-ranking households. While household income showed a significant association with ownership of numerous household durable assets, the correlation between the asset-based proxy wealth indices and the direct measures of wealth was low. The proxy wealth models explained a higher proportion of data in the peri-urban setting than the rural setting (27.8% vs.
24.3% for PCA), which may add strength to the concern that an asset-based index is a more 'appropriate' measure of wealth in urban areas compared with rural areas [18,31]. To increase these percentages, other data analysis tools such as the modified hierarchical ordered probit (HOPIT) model [33] and multiple correspondence analysis (MCA) [34] may be used to weight the indicators and should be explored further in subsequent studies.
Questions remain regarding the choice and number of variables to be included, although it has been suggested that the data should comprise 10-15 subjects per variable [35]. With 15 and 11 variables in the rural and peri-urban villages, respectively, our sample size of 246-258 households was satisfactory. Sampling adequacy was further confirmed by the KMO measure (KMO >0.7 is said to be 'meritorious'), and by Bartlett's test of sphericity, which indicated that the correlation matrices were not identity matrices, and hence the factor model was appropriate. Retaining only components or factors with eigenvalues >1.0 ensured that they explained at least as much variance in the data as one measured variable, since the variance accounted for by each of the components is its associated eigenvalue. However, Cronbach's coefficient α was just below 0.7 for each setting, indicating that up to 50% of the variance in the items may be attributable to measurement error. Similar to other studies, we found that the first PC and the first PF only explained a low percentage of variation in the data (20-30%). This finding suggests that, while the derived indices do provide a proxy measure of wealth, it is estimated with a considerable level of inaccuracy [31,32,36]. Although inclusion of the remaining components or factors helps explain some of the remaining variation, it is unclear if, and how, this should be done [37]. Consistent with findings from other studies, both PCA and PAF showed signs of clumping and truncation, hindering their ability to accurately classify wealth-quartile borderline households, although this was less obvious in the rural data [18,32,34] (see Figure 1). Clumping may be a statistical phenomenon caused by a lack of input variables that can adequately distinguish between households of a similar economic status [18], or it may be a product of social and economic homogeneity stemming from half a century of socialist rule in P.R. China [27,31].
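Cronbach's coefficient α discussed above can be computed from the item variances and the variance of the total score. The binary asset items below are illustrative only, not the study's data.

```python
# Cronbach's alpha for a small set of binary asset items (invented data):
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
from statistics import pvariance

items = [  # rows = households, cols = three binary asset indicators
    [1, 1, 1],
    [1, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
k = len(items[0])
item_vars = [pvariance(col) for col in zip(*items)]
total = [sum(row) for row in items]
alpha = k / (k - 1) * (1 - sum(item_vars) / pvariance(total))
print(round(alpha, 3))
```

By the glossary's convention, an α of 0.7 would correspond to (1 − 0.7²) × 100 ≈ 51% of item variance potentially attributable to measurement error, which is the "up to 50%" figure cited above.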
In the peri-urban setting, including ownership of computers and household Internet service may have helped further differentiate between the AA and MW. Differentiating between the age, price, condition and quantity of specific assets may reduce the effect of clumping and/or truncation and should be explored in greater detail, although results from previous studies imply that this information may not add to the accuracy or robustness of the index [14,38]. Furthermore, in our study the few households which were re-classified into different quartiles according to the factor extraction method employed only moved to immediately adjacent quartiles.
Notably, our asset-based proxy wealth index includes utility and sanitation variables, which can have direct effects on health, hence making it difficult to separate out indirect effects on health, via improved living conditions, from direct ones. Furthermore, a distinction should be made between variables that may be determinants of wealth, such as means of production, communication or transport, and those that are purely indicators of wealth, such as certain leisure goods [18]. Where quantifying the extent of inequality is the major goal, the concentration index and its associated concentration curve may be used [39,40]. Alternative approaches to measuring wealth, for example participatory wealth ranking (PWR), may be borrowed from development studies or from econometrics, potentially providing new insight for public health researchers [41].
An important drawback of the household survey method employed in our study is that the population sample did not include migrant populations, who tend to be the poorest and most socially disadvantaged households in society [42], or information on informal remittances from temporarily migrating household members [31]. Furthermore, the census data employed were obtained one year before our survey, and may have become inaccurate in the fast-changing living environment of contemporary P.R. China. Finally, compliance in the peri-urban setting was considerably lower than in the rural setting (70% vs. 88%) and no further information was available on non-compliant individuals for comparison [43]. Including the migrant population may have significantly altered the patterns emerging from our aggregated data and the apparently wide systemic rural to peri-urban gap [17].
While it is beyond the scope of this paper to comprehensively explore the factors behind the disparities both within and between settings, we call upon further research into the complex interactions between these and other assets such as human capital, public capital and land assets [44,45]. This would help to establish the driving forces of the observed differences between direct and proxy measures of wealth and to further examine how these differences impact on health service utilization, research and health policy [44,45]. Improved living conditions and diminished inequality gaps are not only important as distal and proximal determinants of health, but are also vital factors for national and regional sociopolitical stability [29]. Closing the rural to urban gap in particular is currently a top policy priority in P.R. China, with the 11th Five-Year Plan (2006-2010) having introduced the "Building a Socialist New Countryside" campaign [46]. In order to monitor and evaluate this campaign, however, it is crucial to have a time- and cost-effective appraisal of relative SEP [47]. This paper supports the use of the asset-based index as a proxy measure of wealth, with weights derived from either PCA or PAF, although we recommend caution when comparing aggregated data from various settings. Given the renewed interest in the role of inequalities in economic inefficiency [48], and the important role of P.R. China in achieving the Millennium Development Goals (MDGs) [49], it is conceivable that these methods will be of use in numerous other applications, as well as in other geographical locations.
Anti-image correlation matrix
A matrix containing the negatives of the partial correlation coefficients. Most of the off-diagonal elements should be small in a good factor model.
Asset
An item of ownership convertible into cash.
Bartlett's test of sphericity
A method to test whether the correlation matrix is an identity matrix, which would indicate that the factor model is inappropriate.
Cronbach's coefficient α
A method of assessing the internal consistency, or reliability, of a set of items, where [(1 − α²) × 100] indicates the percent of variance in the items that could be attributed to measurement error.
Durables
Manufactured products such as an automobile or a household appliance that can be used over a relatively long period without being depleted or consumed.
Eigenvalue
The scalar of the associated eigenvector, indicating the amount of variance explained by each PC or each PF.
Eigenvector
A vector that results in a scalar multiple of itself when multiplied by a matrix. It corresponds to the weights in a linear transformation when computing PCA and PAF.
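This definition can be checked numerically. For the symmetric 2×2 matrix below, (1, 1) is an eigenvector with eigenvalue 3; the example is a generic illustration, unrelated to the study's data.

```python
# Glossary check: multiplying an eigenvector by the matrix returns a scalar
# (the eigenvalue) multiple of the same vector.
A = [[2.0, 1.0],
     [1.0, 2.0]]
v = [1.0, 1.0]   # eigenvector of A with eigenvalue 3
Av = [sum(a * x for a, x in zip(row, v)) for row in A]
lam = Av[0] / v[0]
print(Av, lam)
```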
Kaiser-Meyer-Olkin (KMO)
A measure of sampling adequacy which tests whether the partial correlations among items are small.
Kruskal-Wallis
A non-parametric method for testing equality of population medians among groups.
Peri-urban
Immediately adjoining an urban area; between the suburbs and the countryside.
Principal axis factoring (PAF)
A data reduction technique which uses squared multiple correlations as initial estimates of the communalities. The communalities are entered into the diagonals of the correlation matrix before factors are extracted from the matrix, allowing the variance of each item to be a function of both item communality and non-zero unique item variance.
Principal components analysis (PCA)
A data reduction technique using the principle components model. It assumes that components are uncorrelated and that the communality of each item sums to one for all components, therefore implying that each item has zero unique variance.
Rotation
Turning the reference axes of the factors about their origin in order to achieve a simpler and theoretically more meaningful factor solution than is produced by the unrotated factor solution; the positions of the items
Acupuncture combined with metformin versus metformin alone to improve pregnancy rate in polycystic ovary syndrome: A systematic review and meta-analysis
Objective The aim of this study was to compare acupuncture combined with metformin versus metformin alone in improving the pregnancy rate of people with polycystic ovary syndrome (PCOS). Methods A literature search of eight databases identified nine randomized controlled trials (RCTs) that assessed the effect of acupuncture combined with metformin on pregnancy rate in PCOS patients compared with metformin alone. Subsequently, data extraction and analysis were conducted to evaluate the quality and risk of bias of the methodological design of the studies, and meta-analysis was conducted on the RCT data. Results Nine RCTs and 1,159 women were included. Acupuncture can improve pregnancy rate, whether analyzed according to the diagnostic criteria of PCOS [Z = 2.72, p = 0.007, relative risk (RR) 1.31, 95% CI 1.08 to 1.60; heterogeneity p = 0.15, I² = 41%] or according to the different diagnostic criteria of pregnancy (Z = 3.22, p = 0.001, RR 1.35, 95% CI 1.13 to 1.63; heterogeneity p = 0.12, I² = 42%). Acupuncture can also improve ovulation rate. Subgroup analysis was performed according to the number of ovulating patients (Z = 2.67, p = 0.008, RR 1.31, 95% CI 1.07 to 1.59; heterogeneity p = 0.04, I² = 63%) and ovulation cycles (Z = 3.57, p = 0.0004, RR 1.18, 95% CI 1.08 to 1.29; heterogeneity p = 0.57, I² = 0%). Statistical analysis also showed that acupuncture combined with metformin could improve homeostatic model assessment of insulin resistance (HOMA-IR) [mean difference (MD) −0.68, 95% CI −1.01 to −0.35; heterogeneity p = 0.003, I² = 83%]. Conclusions Based on the results of this study, compared with metformin alone, acupuncture combined with metformin has a positive effect on pregnancy rate, ovulation rate, and insulin resistance in PCOS. However, due to the limitations regarding the number and quality of the included studies, the above conclusions need to be verified by further high-quality studies. Systematic Review Registration https://www.crd.york.ac.uk/PROSPERO/#myprospero.
Introduction
Polycystic ovary syndrome (PCOS) is the most common hormonal disorder in women and is also one of the most common factors that cause infertility (1). With the opening of the three-child policy in China, the reproductive needs of women of childbearing age are increasingly urgent, but the incidence of PCOS in these women is as high as 9% to 18% (2), which has a great impact on pregnancy. Studies have shown that insulin resistance (IR) is a key feature of the pathophysiology of PCOS (3), with 85% of patients being affected by IR. IR disrupts the follicular environment (4) by leading to hyperandrogenemia (5), affecting follicular development and ovulation, which is not conducive to pregnancy. In addition, people with obesity account for 35%-60% (6) of the population with PCOS, which is closely related to IR and the pathological mechanism of PCOS (7). Weight gain has been shown to further aggravate IR (8,9). The above factors affect women's health.
Acupuncture has become more and more popular worldwide as a complementary and alternative therapy for infertility. In 2010, a study in the United States showed that 29% of patients used complementary and alternative medicine with the aim of treating infertility, of whom 22% chose acupuncture (10). In China, traditional medicine is even more popular. A large number of clinical and animal experiments have shown that acupuncture has significant effects in the treatment of infertility and anovulation caused by PCOS, including improving clinical pregnancy rate, ovulation rate, live birth rate, insulin resistance, menstruation, hormone levels, follicular development, and hyperandrogenemia, and regulating the secretory function of the hypothalamic-pituitary-ovarian axis (HPOA) (11-17), but it may also cause subcutaneous bleeding, pain, or other mild adverse reactions. Studies have shown that metformin is one of the most important drugs for reducing insulin resistance in PCOS patients (18). Such drugs have been shown to improve clinical pregnancy rate and ovulation rate, and have positive effects on hyperinsulinemia and ovarian androgen hypersecretion.
There are no detailed and systematic methodological evaluations and data consolidations comparing acupuncture combined with metformin with metformin alone. The main objective of this study was to conduct a systematic review and meta-analysis examining whether acupuncture combined with metformin can further improve pregnancy rates compared to metformin alone, thus providing a more effective treatment for this population. The search terms included "Acupuncture", "Metformin", "Polycystic Ovary Syndrome", "Infertility", and "Randomized Controlled Trial"; see the appendix for detailed search strategies.
Study selection and data extraction
Two researchers (YL and HYL) independently screened the retrieved articles, read the titles and abstracts, and then excluded duplicate and irrelevant articles. Eligible studies were identified according to the inclusion and exclusion criteria; data were extracted and cross-checked, and any ambiguity was resolved through discussion and consensus. If no consensus was reached, a third researcher (QLX) was asked to adjudicate. All excluded literature was recorded. The experimental groups received acupuncture (acupuncture, moxibustion, electroacupuncture, acupoint embedding therapy, acupoint injection, auricular acupuncture, warm needling, fire needling, or floating needling) combined with metformin; the control groups were treated with metformin alone. Inclusion criteria were as follows: (a) subjects were diagnosed with PCOS; (b) the treatment group used acupuncture combined with metformin, while the control group used only metformin; and (c) the study was a randomized controlled trial. Exclusion criteria were as follows: (a) subjects were treated with drugs other than metformin; (b) traditional Chinese herbal medicine was used; (c) the study was conducted on animals; and (d) the study was not reported in Chinese or English. This study was divided into two groups: the acupuncture combined with metformin group and the metformin alone group. Data were extracted independently by two researchers (YL and HYL) and checked by another researcher (XC), covering information potentially related to the research results: first author, year of publication, number of participants, age, infertility duration, treatment duration, interventions, diagnostic criteria, outcome indicators, side effects and adverse events, and other information. The results were recorded in an Excel spreadsheet (Table 1).
Risk of bias assessment
Cochrane RoB 2.0 was used to evaluate risk of bias in the individual studies (20). The following six items were assessed for each RCT: (a) randomization process; (b) deviations from intended interventions; (c) missing outcome data; (d) measurement of the outcome; (e) selection of the reported result; and (f) overall. When an appropriate method was used and described clearly, the study was considered low risk; otherwise, it was rated high risk, or "some concerns" if the method could not be accurately judged. Two researchers (YL and HYL) independently assessed these items and, if necessary, a third researcher (QLX) was consulted to resolve disagreements (Figures 1, 2).
Outcomes
The main outcome measure was pregnancy rate: positive morning urine beta-human chorionic gonadotropin (β-hCG) or blood β-hCG, elevated basal body temperature (BBT) for more than 3 weeks, a pregnancy sac detected by color ultrasonography, or a fetal bud and heartbeat detected by color ultrasonography at 7 weeks of gestation.
Statistical analysis
RevMan 5.3 software was used for statistical analysis. Relative risk (RR) with its 95% confidence interval (CI) was used for dichotomous variables, and mean difference (MD) or standardized mean difference (SMD) with 95% CI was used for continuous variables. p < 0.05 was considered statistically significant. Clinical heterogeneity was judged by comparing the study populations, interventions, and outcome measures across studies, and statistical heterogeneity was determined from the I² test, with I² > 50% considered high heterogeneity. Subgroup analysis and sensitivity analysis were used, and a random-effects model was adopted.
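The heterogeneity statistics quoted throughout (Cochran's Q and I²) can be sketched from per-study log relative risks with inverse-variance weights. Note that this sketch pools with a fixed-effect estimate purely to illustrate how Q and I² are derived, whereas the review itself used RevMan 5.3 with a random-effects model; the (log RR, variance) pairs below are invented, not the included trials.

```python
# Cochran's Q and Higgins I^2 from per-study log relative risks.
# Study effects and variances are invented illustrative values.
import math

studies = [(math.log(1.05), 0.02), (math.log(1.5), 0.02), (math.log(2.0), 0.02)]
w = [1.0 / v for _, v in studies]                              # inverse-variance weights
pooled = sum(wi * y for wi, (y, _) in zip(w, studies)) / sum(w)  # fixed-effect log RR
q = sum(wi * (y - pooled) ** 2 for wi, (y, _) in zip(w, studies))  # Cochran's Q
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0          # I^2 as a percentage
se = 1.0 / math.sqrt(sum(w))
ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
print(round(math.exp(pooled), 3), round(i2, 1), [round(x, 3) for x in ci])
```

With these divergent effects, I² lands around 80%, i.e. above the 50% threshold the review treats as high heterogeneity, which would trigger the subgroup and sensitivity analyses described above.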
Studies retrieved
In total, 330 related articles were initially screened, and nine studies (21-29) ultimately met our inclusion criteria after the different rounds of screening. A total of 1,159 patients with PCOS who received acupuncture combined with metformin or metformin alone were identified from the nine randomized controlled trials (Figure 3).
Quality of the evidence: Summary of findings table
We used the GRADE method to present a "Summary of Findings" table. The quality of evidence for the outcome measures (pregnancy rate, ovulation rate, and HOMA-IR) was evaluated for the review's comparison (acupuncture combined with metformin vs. metformin alone). We used the GRADE criteria to assess the quality of evidence, study limitations, inconsistency, imprecision, and publication bias. Two researchers (ZL and QLX) independently judged the quality of the evidence (high, moderate, low, or very low) and resolved differences through discussion (Table 2).
Pregnancy
Six studies of acupuncture combined with metformin reported pregnancy rates (22, 23, 25-27, 29). In terms of pregnancy rates, the results were statistically significantly different between the two groups (Z = 3.22, p = 0.001, RR 1.35, 95% CI 1.13 to 1.63; heterogeneity p = 0.12, I² = 42%), indicating that acupuncture combined with metformin was superior to metformin alone in improving pregnancy. Because different diagnostic criteria for pregnancy were adopted across the clinical studies, we conducted a subgroup analysis: in the group using B-ultrasound for diagnosis (Z = 2.39, p = 0.02, RR 1.49, 95% CI 1.07 to 2.07; heterogeneity p = 0.03, I² = 66%) and in the group not using B-ultrasound for diagnosis (Z = 2.14, p = 0.03, RR 1.30, 95% CI 1.02 to 1.65; heterogeneity p = 0.64, I² = 0%), indicating that the heterogeneity mainly derived from the diagnostic criteria for pregnancy (Figure 4). Additionally, there are different diagnostic criteria for PCOS; when summarizing the study characteristics, we found two different sets of diagnostic criteria: the Rotterdam standard and a set of PCOS diagnosis and treatment guidelines from China. We also performed subgroup analysis of these two diagnostic criteria, finding that heterogeneity under the Rotterdam criteria (I² = 25%) was significantly lower than the overall heterogeneity (I² = 41%) (Figure 5).
Ovulation
Seven studies of acupuncture combined with metformin reported ovulation rates, calculated by number of ovulations in four studies (21,24,25,29) and ovulation cycles in three studies (23,26,27). We analyzed these studies separately according to their different calculation methods.
Risk of bias summary.
Calculated by the number of ovulations: among the 532 participants in these studies, 224 women in the trial group and 180 women in the control group ovulated. The results were statistically significant (Z = 2.67, p = 0.008, RR 1.31, 95% CI 1.07 to 1.59; heterogeneity p = 0.04, I² = 63%) (Figure 6).
HOMA-IR
HOMA-IR was reported in three studies of acupuncture combined with metformin (21-23), with a total of 282 participants, 144 in the experimental group and 138 in the control group. The results showed that the difference in HOMA-IR between the two groups was statistically significant (Z = 4.02, p < 0.0001), with high heterogeneity (MD −0.68, 95% CI −1.01 to −0.35; heterogeneity p = 0.003, I² = 83%). Sensitivity analysis (p = 0.34, I² = 0%) showed that the heterogeneity derived from one study (22), whose sample size was only 30 cases; sample size may therefore be the main source of heterogeneity (Figure 7).
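HOMA-IR, the outcome analyzed here, is conventionally computed with the Matthews formula from fasting insulin and fasting glucose. A one-line sketch (the input values are invented):

```python
# HOMA-IR (Matthews et al. formula): fasting insulin (microU/mL) times
# fasting glucose (mmol/L), divided by 22.5. Input values are hypothetical.
def homa_ir(insulin_uU_ml, glucose_mmol_l):
    return insulin_uU_ml * glucose_mmol_l / 22.5

print(round(homa_ir(15.0, 5.4), 2))
```

A decrease of 0.68 in this index, as in the pooled MD above, thus reflects a modest reduction in the insulin-glucose product relative to such baseline values.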
Side effects and adverse events
The incidence of side effects and adverse events was reported in two trials. A total of 21 patients treated with acupuncture combined with metformin had gastrointestinal reactions and 2 had menstrual abnormalities. After metformin alone treatment, a total of 23 patients had gastrointestinal reactions and 6 patients had menstrual abnormalities.
Discussion
In this systematic review and meta-analysis, acupuncture combined with metformin was suggested to have a positive effect on pregnancy rate, ovulation rate, and HOMA-IR in patients with PCOS compared to metformin alone.
FIGURE 3 Flowchart of the study selection process.
Risk of bias graph.
Table 2 notes: *The basis for the assumed risk (e.g., the median control group risk across studies) is provided in footnotes. The corresponding risk (and its 95% confidence interval) is based on the assumed risk in the comparison group and the relative effect of the intervention (and its 95% CI). CI, confidence interval; RR, risk ratio. GRADE Working Group grades of evidence: high quality, further research is very unlikely to change our confidence in the estimate of effect; moderate quality, further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate; low quality, further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate; very low quality, we are very uncertain about the estimate. 1 Evidence downgraded by two levels for serious risk of bias; the majority of the RCTs have unclear or high risk of bias. 2 Evidence downgraded by one level for serious inconsistency (50% < I² < 75%). 3 Evidence downgraded by one level for serious imprecision, low number of events (total number of events < 300). 4 Evidence downgraded by two levels for serious inconsistency (I² ≥ 75%).
Subgroup analysis showed that the causes of heterogeneity were related to diagnostic criteria and randomization methods. These findings are consistent with previous systematic reviews and RCT results: acupuncture alone or in combination with Western medicine for PCOS-related infertility can improve pregnancy rate, ovulation rate, hormone levels, ovarian function, insulin resistance, and obesity (15, 30-33). However, the results of Wu et al. were contrary to this (34). That study reported that acupuncture was not effective at treating infertility in PCOS patients. From the perspective of trial design, it broke the traditional "step-by-step" evidence-based medicine research mode; that is, literature studies, observational studies, and small RCTs were not performed before a large-sample, multicenter RCT was conducted (35). This differs from the studies included in this review. Among the specific therapeutic methods, the choice of needles, acupoints, stimulation intensity, qi generation, treatment frequency, and course of treatment all have an impact on the curative effect. Therefore, the differing results regarding acupuncture efficacy may be related to the lack of uniformity and objectivity in current clinical research standards.
Second, the meta-analysis found that different diagnostic criteria for PCOS contributed to the heterogeneity: heterogeneity under the Rotterdam diagnostic criteria (I² = 25%) was markedly lower than the overall heterogeneity (I² = 41%). The Rotterdam standard is currently the most widely accepted and internationally recognized; five of the included articles (23, 24, 27-29) adopted it. Fifteen years later, based on the disease characteristics of Han women in China, an epidemiological investigation of the Chinese PCOS population was conducted and China's 2018 "Guidelines for Diagnosis and Treatment of Polycystic Ovarian Syndrome" were formulated; two of the included studies were based on this standard (21, 25). Comparing the two, the Rotterdam diagnostic criteria have a wider range than China's, but China's criteria are more detailed.

[FIGURE 4: Forest plots of effects of acupuncture combined with metformin versus metformin alone on pregnancy rate, by diagnostic criteria of pregnancy and by diagnostic criteria of PCOS.]

The Chinese standard puts forward the concept of "suspected PCOS" as the first diagnostic step, with confirmation of PCOS as the second. As a result, clinical studies using the Chinese standard included both patients with early suspected status and those with confirmed status; because the original data did not separate these two groups, this introduced heterogeneity into the analysis. Nevertheless, this refinement of the guidelines raises the bar for future health risk assessment, long-term clinical management, and pregnancy-assistance strategies in patients who have not been fully diagnosed in the early stages of the condition.
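The I² values quoted in this comparison are derived from Cochran's Q statistic. A minimal sketch, assuming the usual inverse-variance weighting of per-study effect estimates (the paper does not state its weighting scheme, so this is a conventional assumption):

```python
def i_squared(effects, variances):
    """Higgins' I^2 from per-study effect estimates and their variances:
    I^2 = max(0, (Q - df) / Q), with Q the inverse-variance-weighted
    sum of squared deviations from the pooled fixed-effect estimate."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    return 0.0 if q == 0 else max(0.0, (q - df) / q)
```

Identical study effects give I² = 0 (no heterogeneity); the 25% and 41% figures above correspond to low-to-moderate heterogeneity on the conventional scale.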
Our subgroup analysis by pregnancy diagnostic criteria found that heterogeneity for pregnancy diagnosed by ultrasound was lower than for pregnancy diagnosed by laboratory indicators alone (hCG detection in serum or urine). This prompts further reflection on how pregnancy is diagnosed. Based on studies of PCOS and in vitro fertilization/intracytoplasmic sperm injection (IVF/ICSI) from the last 3 years retrieved from ClinicalTrials.gov, the Chinese Clinical Trial Registry, and PROSPERO (36-39), ultrasound is widely used as the diagnostic standard for pregnancy. The subgroup analysis also showed that clinical pregnancy criteria have generally been adopted relatively recently, indicating that Chinese clinical studies are converging with international practice.
Quality of the evidence
In this review, we included only nine RCTs, most of which had small sample sizes. The quality of the evidence was low or very low. The main problems are risk of bias, imprecision, and inconsistency of the research results (Table 2).
Limitations
The limitations of this systematic review are as follows: (1) the included intervention measures, such as acupuncture forms, acupoint selection, treatment frequency, and course of treatment, vary greatly, and further subgroup analysis could not be carried out due to the limited number of studies, which affects the accuracy of the results; and (2) because author contact information was incomplete for most of the studies, we were only able to contact four of the authors by email, and received no replies. Original-study authors thus appear unresponsive to follow-up queries from systematic reviewers, perhaps because email is not their usual contact method, because they place little value on systematic reviews, or because they lack confidence in their own protocols.

[Forest plots: effects of acupuncture combined with metformin versus metformin alone on ovulation rate and on HOMA-IR.]
Conclusion

Implications for practice
We cannot exclude clinically relevant differences in pregnancy rate, ovulation rate, LH/FSH, HOMA-IR, and FPG between acupuncture combined with metformin and metformin alone. Pregnancy rate, ovulation rate, and HOMA-IR may be improved in participants receiving acupuncture combined with metformin compared to metformin alone. Because of differences in pregnancy diagnostic criteria, we are not certain this effect holds in studies without a definitive B-ultrasound diagnosis of pregnancy. Owing to the low quality of evidence and the limited number of RCTs available in this area, our ability to determine whether acupuncture combined with metformin is more effective than metformin alone at treating PCOS is limited.
Implications for research
It is hoped that acupuncture combined with metformin will improve the pregnancy rate of women with PCOS. Further well-designed and well-performed randomized controlled trials are needed to definitively answer this question. Under uniform diagnostic criteria, a standard set of acupuncture points and stimulation methods should be considered, and the control group should receive the same metformin regimen as the acupuncture group.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author.
Author contributions
YL and HYL conducted literature searches, evaluated study inclusion, and extracted data. XC and JLY analyzed data and drafted the manuscript. QLX assessed study inclusion and cross-checked the data with XC. YL revised the language and the article. JW, XYZ, YMZ, CYL, MJW, and ZL conceived the study and revised the manuscript. All authors read and approved the final version of the manuscript. | 2022-08-29T13:22:22.219Z | 2022-08-29T00:00:00.000 | {
"year": 2022,
"sha1": "3d79b701cb02ebee165fc4dccacfef42de5e4193",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "3d79b701cb02ebee165fc4dccacfef42de5e4193",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
216535783 | pes2o/s2orc | v3-fos-license | Anterior Cruciate Ligament Repair Using a Knotless Suture Implant
Recent orthopedic literature has shown that primary repair for femoral-sided avulsion tears of the anterior cruciate ligament (ACL) can be successful. Primary ACL repair avoids invasive reconstruction techniques, graft-site morbidity, and the loss of native anatomy while producing excellent results in appropriately selected patients. Here we describe our patient selection parameters, ACL repair technique, and rehabilitation protocol.
As the popularity of primary ACL repair grows, new surgical techniques will allow for more successful repair of the ACL. The goal of this paper is to outline a technique for ACL repair using a knotless suture-based implant.
Surgical Technique (With Video Illustration)
The patient is positioned supine with a lateral post for standard knee arthroscopy (Video 1). Anterolateral viewing and anteromedial working portals are established; a far anteromedial portal can also be added as necessary. The ACL is probed to judge tissue quality and tear pattern (Fig 1). An intact periligamentous synovial sheath, which contains penetrating blood vessels of the middle geniculate artery, is frequently seen in these types of injuries. Maintenance of the synovial sheath's femoral attachment preserves blood supply to the ACL, lending confidence to the repair. 7 If a femoral avulsion tear with good tissue quality and vascularity is confirmed, we proceed with primary repair.
Anchor Placement
First, marrow venting is performed at the femoral attachment site of the ACL (PowerPick; Arthrex, Naples, FL) to promote a biologic environment conducive to tissue healing. Next, a 3-cm × 10-mm Passport cannula (Arthrex) is inserted through the anteromedial portal. Through this portal, the appropriate drill is used to create a site for anchor placement at the native ACL femoral footprint. Without removing the drill guide, a 2.6-mm knotless suture implant (FiberTak; Arthrex) is gently impacted into place (Fig 2A). A gentle pull on the repair suture helps seat the implant (Fig 2B).
Suture Passage and ACL Fixation
The repair suture of the anchor is loaded onto an antegrade self-retrieving suture passer (Scorpion; Arthrex). Approaching from the anteromedial portal, the suture is passed across the ACL from the proximal to the middle one third and from lateral to medial (Fig 3A). The retrieved tail is again loaded into the self-retrieving suture passer and is passed back across the ACL from the middle one third to proximal and from medial to lateral (Fig 3B). Next, the retrieved repair suture (blue) is shuttled through the implant using the shuttle suture (black-white speckled). This is accomplished with short, quick tugs in line with the implant (Fig 4A). Once the repair stitch has been passed through the anchor, it can be pulled tightly to completely reduce the ACL to the femur. Because this is a direct repair of an ACL avulsion, over- or undertensioning of the ACL has not been a concern. Finally, the excess suture is cut with an arthroscopic suture cutter (Fig 4B). The final repaired ACL construct is probed and noted to be stable (Fig 1B).
Rehabilitation
For the first 4 weeks postoperatively, weight bearing as tolerated is allowed with the knee locked in full extension in a hinged knee brace. Immediate range of motion from 0° to 90° is encouraged. After 4 weeks, the patient begins progressive range of motion and strengthening as tolerated. At 3 months postoperatively, neuromuscular and return-to-sport training are initiated as tolerated.
Discussion
Paired with the proper indications, ACL repair has proven to be a viable treatment in the surgical management of primary ACL injuries. 5,6 Furthermore, ACL repair possesses several advantages over ACL reconstruction. ACL repair avoids the need for graft harvest, which can produce discomfort and disability for patients: patellar autograft harvesting has been associated with significant anterior knee pain, whereas hamstring harvest may weaken the ACL-protective knee flexor musculature. 8 There is evidence that proprioception correlates better with postoperative function and satisfaction than mechanical stability does. 10 The native ACL has proprioceptive receptors, 11,12 and patients with ACL-deficient knees have a known loss of proprioception. 13-15 ACL repair, as opposed to reconstruction, retains native tissue and therefore its proprioceptive fibers.
From a biological perspective, there are apparent advantages of ACL repair over reconstruction. Positive outcomes have previously been reported with marrow stimulation in conjunction with repair of partial ACL tears. 6 There is also emerging basic-science evidence that the ACL has an inherent ability to heal. Murray et al. 16,17 noted an egress of cells when human ACL tissue is placed in culture. In addition, there is evidence that mesenchymal stem cells reside in the collagenous matrix and adjacent to small blood vessels around the ACL; these stem cells have the potential to provide a superior basis for biological repair. 18 Recently, good outcomes with no re-ruptures have been reported in bridge-enhanced ACL repairs at 2-year follow-up. 19 In the unfortunate but possible scenario that subsequent surgery is required, revision of a failed ACL reconstruction may require extensive removal of hardware, bone grafting, and surgical staging, and may yield lesser outcomes. The described all-inside, knotless suture anchor repair technique avoids these issues.
We believe the major limitation of this technique is patient selection (Table 1). Only a fraction of patients with ACL injury are eligible for repair. Improper selection of these patients may lead to poor results, as demonstrated in early ACL repair cohorts. 2,20 With the advent of new arthroscopic techniques and the understanding of the underlying ACL biology, primary ACL repair has re-emerged as a viable option to treat appropriately indicated ACL ruptures. We believe our technique provides advantages over ACL reconstruction when treating acute femoral avulsion tears of the ACL. The use of a knotless suture implant allows for efficient and minimally invasive repair of the ACL in cases of femoral avulsion tears with good remnant tissue quality. | 2020-04-16T09:14:39.851Z | 2020-04-10T00:00:00.000 | {
"year": 2020,
"sha1": "c781141f14bf23bf09384b8d11dae8571f984652",
"oa_license": "CCBYNCND",
"oa_url": "http://www.arthroscopytechniques.org/article/S2212628720300190/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c8f6eeed6bba865347c1bf1d3ae3e4db597cd860",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |